
Episode 1: Building machines that think like us

S1 E1 · CogNation

In the inaugural episode of CogNation, Joe and Rolf talk about artificial intelligence that mimics the way people think. Along the way, they also talk about pneumatic tubes, uploading consciousness, and how we'll spend our time when robots do all the work. They touch on how this all inevitably leads to robots taking over the world. The discussion is based on the article "Building Machines That Learn and Think Like People" by Brenden Lake et al., which focuses on recent progress in cognitive science suggesting that human-like thinking machines should leverage causal models of the world and be endowed with intuitive physics and psychology.

Transcript

Introduction to CogNation Podcast

00:00:06
Speaker
This is CogNation, the podcast about cognitive psychology, neuroscience, philosophy, technology, the future of the human experience, and other stuff we like. It's hosted by me, Rolf Nelson. And me, Joe Hardy. Welcome to the show.

Should AI Mimic Human Thinking?

00:00:24
Speaker
All right, so here's a question for you, Joe. Why would you build machines that learn and think like people? Why would you have that goal? And why is that maybe a dumb goal?
00:00:36
Speaker
I think that it seems like the two things that people talk about when they talk about this topic of, you know, building machines that learn and think like people are, number one, perhaps, you know, a model of machine learning or artificial intelligence could help you better understand the brain. So, help you better understand neuroscience and cognition, using a model, uh, a neurally related model, you know, inspired by neuroscience, to better understand the brain.
00:01:07
Speaker
And then the other would be, it might be a useful guidepost in developing smarter machines. So the idea is that if your goal is to build really smart machines, then, you know, taking the smartest thing around that we know about, which is the human brain, at least that's what we're telling ourselves, you know, looking at that as a reference point could be useful.

AI in Popular Culture

00:01:30
Speaker
Yeah, and I mean, how much of it do you think might even be just momentum, too? The goal, since people have been trying to make any sort of artificial intelligence, has been to make science fiction robots that do what we tell them to do. I mean, it's like Metropolis, remember that old movie? Oh yeah, totally. All of that stuff, the whole goal is to create someone that can serve us. I think from the popular imagination,
00:01:58
Speaker
the goal of artificial intelligence, every movie like Haley Joel Osment in that movie AI, every goal seems to be to just make something that's a substitute person. And then it always moves on into the ethics of what should we do with this person? Right. You know, is it ethically wrong to destroy something that has an intelligence like us, or something like that? Yeah, no, totally. And I think that for me, why is that like not
00:02:27
Speaker
the right way to go? Or why is that not even the right question?

Enhancing Humans with AI

00:02:30
Speaker
Well, what comes to my mind is like, okay, why are you trying to build someone, build a machine that is just another person? Like we have lots of people. Yeah. And people are good at what people are good at. That's right. So if you want to help people be better at stuff, here's one reason I could think of that sort of springs up, which is,
00:02:54
Speaker
you could have a person that could interface with machines better. So in other words, like C3PO kind of. Yeah, like C3PO that would be like a human in most respects, like be adaptable. You could walk around the world just like Joe Hardy. And then when you sat down to play chess, you could just instantly access Deep Blue and
00:03:19
Speaker
you can win at chess immediately. And then when you need to do some kinds of specialized tasks, you can just interface with the computer much more easily than people do. Right. Our interface with the computer is just typing into a cell phone or talking to Alexa, right? If we could just learn a foreign language in two seconds, then there's an advantage. Absolutely. Yeah, I mean, that's a great point. It brings up several related
00:03:47
Speaker
topics that I think are super important to this.

Designing AI: Mobility vs. Specialization

00:03:50
Speaker
One is, of course, the role of the effector. The effector. Yeah. So like hands and feet and the ability to do stuff in the world. I mean, I feel like that's where, with this topic, people just sort of automatically assume that you would build something that walks around, like, bipedal, you know, with hands and arms and legs and all that kind of stuff. And then that's what your robot is going to be. Yeah.
00:04:16
Speaker
And people spend so much time and money trying to build robots that walk around like people. And it's like, I don't know, man. It seems like just the wrong, I think in general, the whole thing is really closely related to the other point, which is what are we good at? What are we not good at? And why don't we build stuff that helps us with the stuff we're not good at rather than just trying to take over the stuff we already are good at? Yeah. I mean,

Subconscious vs. Super-Conscious AI

00:04:43
Speaker
I think also kind of in the popular imagination, there's this idea of artificial intelligence that comes past some particular point, like the singularity or achieving consciousness or something like that. There's this idea that there's a sub-intelligent or subconscious, less than conscious artificial intelligence, and then there's the super-conscious one. And the super-conscious one is totally adaptable and can do anything, but the one that's less conscious is
00:05:14
Speaker
inflexible. I think that's the way that people might consider an advancement in artificial intelligence to be, that you achieve this level of flexibility so that you're no longer just a single module that can help people out doing calculations or doing all the stuff that we suck at. Right. Yeah, that's super good. I mean, it points out the idea of goals. So like the first goal you pointed out was spot on, I think,
00:05:43
Speaker
which is we want to build slaves.

AI as Servants and Transportation Innovations

00:05:47
Speaker
We want to build robots and slaves. When people talk about building robots, that's what they're talking about. Whether it's a slave that just builds cars or
00:06:07
Speaker
whether it's a slave that just does calculations or sort of. Or one that does whatever you tell it to do. I mean, I think that's the goal fundamentally when you talk about the Metropolis-type robot. Yeah. This is a bipedal, mechanical person, kind of, that does all the stuff that you're supposed to do but don't want to do. And can better understand your wishes too. So you can list an arbitrary thing that you want. I want you to figure out
00:06:37
Speaker
how to get me to the moon or how to get me to Mars, or just go to the grocery store and get me some wontons or something like that, to figure out any arbitrary number of things that you desire and be able to get them to you. Right. Right. I mean now, you know, that sounds a lot like Amazon, right?
00:06:56
Speaker
They have drones, right? That's the drone thing, which is actually in some ways a better effector than the bipedal thing that would take forever to walk from Las Vegas, or wherever the closest Amazon delivery center is, to here, whereas the drone can do it potentially overnight. The bipedal thing is interesting, I think, because
00:07:18
Speaker
It's a really flexible way of getting around. Evolution figured out this way of being able to navigate through lots of complex terrain, and it had to be a naturally evolved process that, you know, takes a reasonable amount of energy or something like that, that considers all the constraints of evolution. But of course something like a drone moves faster, just like a car moves faster, and a car is a better way to get from
00:07:47
Speaker
one point to another; a wheel is a great way to get around. But for evolution, for flexibility, a wheel is not perfect, because you can't climb up a mountain very well with a wheel; there are a lot of drawbacks to having a wheel. But for most tasks that you would want it for, you can design something better than bipedal mobility, I'm sure. No, absolutely. No, for sure. And I think the other point there is that it just kind of speaks to the thing about the hyperloop and how we're just organizing our transportation
00:08:17
Speaker
more broadly, is that the walking robot works well in the world that we have right now. Right. And so if we're thinking about, I think so much of when we think about this stuff, it's like we're just trying to tack on whatever it is to the existing framework and infrastructure. And sure, yeah, OK, then that's how you get self-driving cars, which are just so stupid. So dumb, right? I mean, I get it, right? I get it.
00:08:45
Speaker
It is the way to move forward because you can do it politically, whereas you can't do the things you should do politically, which is reorganize how our transportation infrastructure is set up to make more sense. Oh, like what? Well, I mean, for example, build trains underground, dig a lot of holes. This is what Elon Musk wants to do. What's the name of his company?
00:09:12
Speaker
The Boring Company. It makes perfect sense, right? I mean, I heard the criticism of that the other day, I think it was on Vox, their Explained series, but they were talking about it. And they're like, yeah, okay, Elon Musk reinvented the subway. Great. Totally valid criticism, I think. It's not really a new concept. But what is new about it is that
00:09:38
Speaker
We've abandoned that whole framework, essentially. We haven't been digging many holes for many, many decades, really. Cities that are putting in stuff, they're putting in light rail, which is really not as good. It would make much more sense to dig holes and put in lots and lots of fast subways that move people, but also move stuff, right? No reason why you can't move stuff through those holes underground.
00:10:08
Speaker
I mean, I guess it's real estate that is unexploited. Right. And think about pneumatic tubes, man. We don't do anything with pneumatic tubes anymore. Well, we used to: at a drive-in bank, you could have a pneumatic tube that would take your check and then send it to the teller. Exactly. Exactly. And when I was at Lumosity, we were at the 140 New Montgomery Building.
00:10:35
Speaker
That was the old AT&T building. They had pneumatic tubes throughout the whole building. That was how they basically communicated instead of sending emails. They would send little notes in these pneumatic tubes. But when you put it that way, actually, emails do seem like a better option. If it's purely communication, then pneumatic tubes, of course, don't make sense. Right. But you could send something bigger that was heavier, like shoes or something.
00:11:04
Speaker
Yeah, pneumatic tubes should be a basic service, like electricity and gas.
00:11:18
Speaker
I guess it's the opposite, it's because it's a vacuum. You actually take all the air out of the tubes. So yeah, it's not really a pneumatic tube. But it is a tube and you move stuff through it. But that gives me the second goal. So the first goal, of course, is just machines that can basically do all the stuff that we don't want to do. All right, so that's goal number one. Makes sense. I still don't think that walking, talking robots are the way to go. That's like a whole conversation.
00:11:48
Speaker
The second goal is, I think what you hit on already, which is that you can upload your consciousness and live forever.

Consciousness Upload and Identity

00:11:56
Speaker
Yeah. But those two things don't really necessarily have to even be very similar developments, right? I mean, it could be very different. Like the uploading-your-consciousness one: as soon as we started saying upload, right, I already started thinking about big servers. You don't even need a body. Yeah. You know, I'm still,
00:12:17
Speaker
I think I used to be enchanted with that idea, and I'm less enchanted after reading a lot of Derek Parfit. Did you read his stuff on the teletransporter argument? No. Tell me about it. Well, okay, let me see if I can explain this in the right way. Okay, so Parfit, he wrote a pretty influential book on personhood and self in the 1980s.
00:12:45
Speaker
One of his arguments that got picked up and really referenced a lot was the idea of a teletransporter and what a thought experiment with a Star Trek teletransporter could tell you about what you consider your permanent self. So Captain Kirk or Jean-Luc Picard is in the Enterprise and they go in the teletransporter. Now what happens when you go in the teletransporter? It destroys every single atom in your body, turns them into pure information,
00:13:16
Speaker
beams them down to another location, let's say another teletransporter, that gets that information and then reconstructs it again. Right. Okay, so functionally, you should be the same exact person. In other words, you walk into the machine, you know, whatever's on your mind, you're thinking about what you're going to have for dinner. And coming out of the machine, there is a person who is thinking about what you're going to have for dinner.
00:13:45
Speaker
OK, so this is the main case. So far, most people have no problem with this. OK, that's me that comes out the other end. Then he introduces a couple different cases, one of which is what he calls the branch line case. What happens if, instead of being destroyed when you get sent through, you just get scanned and then replicated on the other side?
00:14:10
Speaker
So where is your sense of self now? And the case that he uses is, okay, you've got a new scanner that, instead of destroying you, sends the information to the other side and reproduces you. And then the guy who's running the scanner says, oh, sorry about that, we actually meant to destroy you, but you're still alive. One problem, though:
00:14:31
Speaker
you have a fatal heart condition that's going to kill you within five minutes. And it's a fun thought experiment to play off different versions: what if you get scanned and sent to a thousand different places, which one is you? And the two lines of thought are: that is me on the other side, because it's everything that I could possibly care about in terms of continuing myself. Every goal that I had. I'll
00:14:58
Speaker
Skype with my wife later in the day, and she'll never know the difference because functionally I'm identical. But for me, I get sort of stuck on that.

Artificial Body Parts and Ethics

00:15:09
Speaker
And I don't want to be killed and have another version of myself on the other side. That is fundamentally what you're doing with the singularity upload story, right? Yeah, exactly. I mean, you're fundamentally dying as a body.
00:15:24
Speaker
and you're continuing as a copy of yourself. A copy of yourself. Yeah, and I think where this goes next probably is like, okay, and then there's multiple of you, right? Because you can make multiple copies, no reason why you can't make multiple copies, right? Yeah. I mean, you can have some software that prevents multiple copies from being made or anything like that, but I feel like- You wouldn't, right? You wouldn't.
00:15:51
Speaker
When you go from your body to the computer, there's essentially no difference between you being scanned, appearing on the computer as a separate person, and then being killed, and you sort of being transferred. You know, in a movie, somebody uploads themselves onto the computer, their consciousness, their sort of spark, goes into the computer. But that's not really what's happening. It's a copy, and then it's physical death for your body.
00:16:19
Speaker
It seems like in movies, too, it's usually that somewhere the body is somehow hooked up to something and it's still there. The body is always there, and you always end up going back into the body, and that's what people end up caring about: their physical body. Identity somehow is really wrapped up in that. Like how many pieces of your body can you replace and all that kind of stuff.
00:16:43
Speaker
Right, and that's another argument. There's another version of this you can get into, which is: what if you replace a single neuron at a time with some sort of electronic circuit? Eventually, if you do it slowly enough, you might be okay with the transition into your new electronic self. Right. I mean, I'm already feeling a little bit like this myself. I mean, I've had parts of my body taken aggressively over the past years and I'm feeling
00:17:11
Speaker
pretty good about it, actually.

Motivations for Human-like AI

00:17:13
Speaker
Yeah. Yeah. I mean, you know, like, I had one of my, uh, vertebrae, whatever that, that disc, you know, replaced with a cadaver bone. So that's like a part of someone else, a human being who was alive at one point, that's screwed into my, my back. I feel quite good about it, honestly, much, much less pain. Yeah.
00:17:35
Speaker
But yeah, I mean, you can imagine: how many of those can you do and still feel like yourself, you know? Well, and especially if it's parts of your brain, right? We feel somehow that that's different. You know, I do feel like it's different. That's my bias, I think. Yeah, well, who's telling you that, right? Yeah. It's your brain. But no, I mean, I don't know if it is different or not. But I feel like it is.
00:18:04
Speaker
Should we jump into the paper? Yeah. So do you want to talk about why you would or would not build machines that learn and think like people? Or do you want to move on to something different? I think we hit that pretty well. And then I think maybe we can start from the perspective that they're taking in the paper and talk about what they're trying to do there. So as far as I understand it, certainly the main point of this paper is

AI Benchmarking Flaws

00:18:32
Speaker
trying to figure out what human intelligence consists of and being able to extract that and create machine learning systems that operate in a similar kind of way. Right. I think that was definitely part of it. And I think the other part of it that I sort of extracted as being like a pivotal part of what they're trying to argue for is the idea that when we make these machine learning algorithms and we test them in like an academic setting,
00:19:01
Speaker
In terms of how good they are, we benchmark them. We benchmark them against different visual tests, the Turing test, the visual Turing test. But we also... Object recognition tests, right? Right. We benchmark them against how they play video games. There was a nice section about that. Yeah, that's an interesting thing to do at some point. I think that's super cool. And then their point is like, look,
00:19:31
Speaker
You could do it that way. It's kind of not really fair because, you know, human beings come with all of this startup software. We come with, uh... well, they repeatedly make the point that they're not trying to make a nature-versus-nurture argument. Right. So they're not saying that this is necessarily learned or necessarily genetic. They don't really care for the purposes of the argument. What they're saying is, you know, look, by the time you go into learning a new task, whatever it may be...
00:20:01
Speaker
You know a bunch of stuff, right? You have intuitive physics, you have a sense of what other people might do in the world based on your previous experiences and just intuition. And it's not really fair to say that this computer algorithm has to learn all that stuff. And it's also not necessary. Like why make the computer algorithm learn all that stuff from scratch when we can just as easily
00:20:32
Speaker
It's not necessarily easy to do right now, and that's, I guess, one of the arguments that some of the commentators make, that they're leaving out the fact that it's actually kind of hard to do this. But you can give the computer some intuitive physics, some intuitive psychology, some intuition about the structure of the world. You can build that in, basically. And so... And some of this is based off of developmental psychology too.
00:21:01
Speaker
Understanding how kids learn and understanding how kids come to represent causal structures. Right, exactly. What they're arguing for is it would make sense to give computers the opportunity to build causal models, but also to give them some intuition to begin with about what the causes of different things in the world might be.
00:21:26
Speaker
Yeah, so I think one of the questions that this brings in is, is this something that is outside of the scope of any kind of current artificial neural network technology?

Challenges in AI Language and Recognition

00:21:38
Speaker
So I'm just thinking, OK, so imagine just object recognition. So object recognition software is pretty good. And it can get, I don't know what the current number is, 90, 95, maybe more percent correct in terms of, say, natural images.
00:21:56
Speaker
Yep. So if you feed it enough, if you feed a program enough examples, it'll start being pretty good at categorizing images into having different labels for those categories. But there's always an example or two that you can give that says, aha, this program is actually pretty stupid because it can't recognize that this is a dangerous situation. Something really weird is going on here that a human being would be able to understand immediately.
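For readers who want a concrete picture of the kind of system being described, here is a minimal sketch of off-the-shelf image classification. The pretrained ResNet-50 model and the input file name are illustrative assumptions, not anything the hosts mention; the point is just that the program outputs a category label, with no understanding of whether the scene is weird or dangerous.

```python
# Minimal sketch of off-the-shelf object recognition, assuming PyTorch
# and torchvision are installed; the model choice is illustrative.
import torch
from torchvision import models
from PIL import Image

# Load a classifier pretrained on ImageNet (~1000 object categories).
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize

img = Image.open("scene.jpg")  # hypothetical input image
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()]
# The top-1 label is often right on natural images, but the model only
# assigns category labels; it has no notion of what the scene means.
print(f"{label}: {top_prob.item():.1%}")
```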
00:22:27
Speaker
A bit in the same sense that natural language processing doesn't work perfectly. For most things, Alexa or OK Google, what's the name of that? Is it just OK Google? The Google speech recognition. For most stuff, it's pretty good. And it understands what you're saying. But when you start using language that it seems like another human being should quickly pick up on,
00:22:55
Speaker
it fails at it. So natural language recognition is good enough for most things, but definitely falls short of the way that human beings are doing it. Right. Well, I think there are a number of points there, because the first point is that this is an area where human beings are much, much better than machines currently, which is understanding natural language and making natural conversations. So you can have
00:23:24
Speaker
machines that can pass the Turing test in a language setting under like limited circumstances, right? But over an extended period, you can usually figure out that you're talking to a machine and that it's making mistakes that a person just wouldn't make. I think that's what these authors are fascinated with and concerned with too, is what this tells us about how human intelligence works, that catching the mistakes,
00:23:53
Speaker
just like in all kinds of areas of cognitive psychology, seeing where fundamental mistakes are made can tell us about how the process works in general. And it really shows us what intelligence is. So certainly intelligence is not playing chess. We learned that. It seemed as though intelligence was being able to recognize patterns and play games like that, because intelligent people seem to be better at playing chess.
00:24:22
Speaker
But it turns out that's a fairly specific kind of thing to do. And there are lots of examples where it seems like intelligent people should be able to manipulate numbers and remember lots of numbers, but computers can do that way easier than people, right? Right. Oh, exactly. It seems as though after a long time of understanding how machines are not intelligent,
00:24:52
Speaker
We're definitely getting at a better understanding of the ways in which our intelligence is different than we thought it was.

Comparing AI and Human Intelligence

00:25:01
Speaker
Yes, exactly. And I think that what I always come back to on the question of intelligence is that when we say intelligence, we automatically mean
00:25:15
Speaker
thinking like people. And I think that's why this article was successful in a way is because it reframes the question rather than how do we make intelligent machines, but how do we build machines that learn and think like people? And if you can separate those two things out, I think it makes life a lot more clear, right? I mean, there's reasons to build machines that think and learn like people. And there's reasons to build machines that can help people do things
00:25:44
Speaker
And they're not the same thing. They're not the same thing, right? They're two distinct tracks that have overlap. There's a lot of overlap, no doubt: learning about how people learn and applying that to how to make better learning machines is helpful in both directions. It helps you understand people and it helps you understand machines. And that's super useful in both directions, but you can do a lot of really cool stuff
00:26:11
Speaker
with learning algorithms that have nothing to do with how people think. And actually, to your point, we're already so much better with machines doing a lot of tasks that are really important. I mean, I think the self-driving car is a great example, right? Already, really, the best self-driving car is probably a better driver than the average driver, right? So already, machines are better drivers than people, right? And if you put them on the road by themselves without people, then it would be way better, way, way better.
00:26:41
Speaker
I think I am amazed every time I get on the road and drive and don't get killed, honestly. Yeah. No, I mean, I'm sort of, yeah, I'm a little bit amazed that I don't kill someone every time I get on wheels. Yeah. You think about the way that, I think that particular example pulls out several of the topics about how people think and then what the distinction is, when you would want to build something that thinks like people and when you would want to build something that doesn't think like people.
00:27:10
Speaker
Self-driving cars don't think like people at all, at all. Very little like machine learning, quote unquote, is really necessary. Certainly there's a part of it that's machine learning based, but a huge part of self-driving cars is just having a lot of sensors. So if you think about the effectors and the input modules, that's where the big distinction is. It makes self-driving cars better.
00:27:37
Speaker
than people at driving because you don't have to have just a bipedal thing with two eyes, two arms and two legs. You can have a thousand eyes, all looking at different parts of the spectrum too, not just our visible spectrum, but like infrared, ultraviolet. You can also get distance sensors that will give you accurate distance instead of an estimation.
00:28:00
Speaker
Right, exactly, and filtered through all of our really, really effective heuristics. You don't need to switch attention when you're looking in the mirror. Right. So that's where I feel like I think we as human beings need to be a little bit humble about our intelligence and recognize that we have some abilities that are pretty remarkable, but we're not the best at most things.

Goals of Intelligence: Mimicry or Enhancement?

00:28:30
Speaker
There are animals or machines that are better at almost every task that you can imagine. So we tend to fixate on the tasks that human beings are uniquely good at. It always comes back to what are your goals. And I think when we think about what intelligence is,
00:28:46
Speaker
I feel like there's such a dichotomy in the way that psychologist-type people like to talk about intelligence. And this is where the chess thing is so funny because, I mean, psychologists of the 1950s and 60s probably thought that chess was the most important thing cognitively that you could ever imagine. Right, because you have to know about life and strategy and acting Machiavellian and deception and all of that stuff. Exactly.
00:29:16
Speaker
Exactly. And then, you know, well, now we realize that actually it's pretty easy to teach the computer to beat us at that game, so we have to move on to something else. But I think what you realize when you think about what humans are really good at is therefore what intelligence is, as we define it, because intelligence is exactly just thinking like people.
00:29:37
Speaker
It's nothing else more than that. And fundamentally, someone who's more or less intelligent is very simply someone who's able to manipulate and manage the socioeconomic environment more effectively. That's all it is, as far as I'm concerned. And intelligence tests, they work because they ask you to tell me what I want to hear. And if you are good at telling me what I want to hear and I'm in charge,
00:30:05
Speaker
then you're gonna get ahead. And so it's gonna look like these intelligence tests are super effective. But the point that I was starting toward was the goals: what are the goals of intelligence? It seems like there are two, really, right? There's food production, and that's a big part of how humans have been able to dominate the world.

Animal vs. Human Intelligence

00:30:25
Speaker
Food production. Food production and violence. So war and then defeating other animals.
00:30:33
Speaker
OK, so this is approaching a Jared Diamond kind of thing, or Yuval Noah Harari, is that his name? The guy who wrote Sapiens. Right, right, right. Yes. Sort of a look at, maybe, an evolutionary take on what intelligence is based on what it has gotten us. Yeah, I think it's even more just like when we talk about intelligence and we think about how humans are so intelligent,
00:31:03
Speaker
so often thinking about it from a point of pride, right? Like, look at human beings. We've dominated the planet. Isn't that great? And then so you think about, well, what does that mean that we're in charge? So like, what does that mean? Well, basically that means that we've, through violence, captured most of the land and successfully managed to produce enough food for us to not die and to, you know, project ourselves into the future. And so if you think then about intelligence and what
00:31:33
Speaker
what we're good at that other animals are not good at, it's going to be those two things. Well, let's see if there's any counterexample to that. There are certainly creatures, multicellular organisms, that are more successful than us in terms of propagating themselves: insects, bacteria, ants. So ants are incredibly flexible.
00:32:01
Speaker
And in some cases, manipulate food supplies, right? Yep. Yeah, for sure. And I think that's actually a really good example, because in what ways are ants intelligent or not intelligent, or more or less intelligent than us? We talk about communication. Ants have tremendously sophisticated, complex communication systems. You would certainly never call any, I mean, if you're going to describe intelligence, it would be a group of ants, right? Exactly.
00:32:29
Speaker
You never call a single ant intelligent. Right. Right, exactly. But we're happy to call single people intelligent. Right. That's a good point. That's a good point. So intelligence is fundamentally something individualistic, in this way. At least we think of it that way. That's what we're calling it. Do you want to talk about the video games?

AI in Video Games

00:32:53
Speaker
Oh, yeah. This is an interesting one.
00:32:57
Speaker
Maybe you want to introduce the concept of it, for anyone who doesn't know about this research or the kinds of advances that have been made. I'll give it a shot. I don't know all the ins and outs, but I can present it at a high level. I think the idea here is that there are all these different tests that we're giving to machine learning artificial intelligence systems to basically benchmark them. We want to benchmark
00:33:24
Speaker
the performance of these machines against each other and see how we're progressing. And naturally, we kind of do that in the context of things that people are good at and give us somewhat of an intuition about how smart we think the computer is at this point for having succeeded at this task. We talked about a couple of them already. The Turing test, can you fool a person into thinking that you're a person as a machine?
00:33:52
Speaker
Can you recognize objects in complex scenes? Things like that. And games, playing games, is one of the ones that people are excited about benchmarking machine learning algorithms, artificial intelligence algorithms against, and different ways to do that and to think about that. But it gives a sense of how flexible the machine can be in its thinking, how long it takes to learn the game,
00:34:18
Speaker
and things like that. And so it gives you a useful benchmark. So obviously, we talked about chess, where Deep Blue finally beat the very best chess player in the world. And that was a major benchmark. People pointed out that there were some parts of chess that were somewhat convenient, if you will, for a computer to perform well at. So if you think about it, the board is two-dimensional,
00:34:47
Speaker
very finite number of locations that the pieces can be placed on and the pieces move in a very easily defined and prescribed fashion. So the rules are easy to set down in a few lines of code. And the conditions for winning are very clear too.
00:35:03
Speaker
Correct. Exactly. And there's not a lot of flexibility necessary to think about the conditions for winning or success in the game. That's all pretty well prescribed. And so the idea is that, well, let's look at some other games that maybe computers are not as good at. So Go is one of the examples, which I have not really played enough to know why it's harder than chess. I'm the same way, too.
00:35:31
Speaker
I understand a little bit about how it works, but I don't understand why it's so complex or so combinatorially expensive. Right. But for, yeah, exactly. For whatever reason, that's harder. And then, you know, video games are something we know more about. We can say more about video games. Yeah. So it's starting to be that now the machines are beating us at Atari.
00:35:57
Speaker
You know, that's kind of like the next level, right? Atari, like the 2600, you know, or whatever, these video game systems from the 80s. And I think the cool thing about the recent advances in this, or the way that computers are now able to play video games, is that it isn't that an algorithm, given all the rules of the game, can play faster than a human or beat a human. It's that
00:36:28
Speaker
for certain Atari games, or, I guess, you know, even Super Mario Brothers or some other games, all you need to do is give it the input of the game and do a simple reinforcement, so it knows when the game state is better, you know, if you're moving forward on a level. And you just let it play the game a whole bunch, over and over again. It will screw up tons of times in a row. But after playing enough,
00:36:58
Speaker
it eventually can figure out a whole bunch of different games without knowing anything about those games. And it can do it better than people. So it'll be stupid in the first couple of iterations, dumber than any human would be. But given enough training, it can get through Super Mario Brothers or Pole Position or other kinds of old games, certainly Atari games.
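For the curious, here is the shape of that loop in code: a minimal sketch of play-from-pixels with only a reward signal, assuming a Gymnasium-style Atari environment. The environment id is an assumption, and the random action choice is a stand-in for a real learner such as DQN, which would pick actions based on what it has learned from past (observation, reward) pairs.

```python
# Sketch of reward-driven game play, assuming the Gymnasium API and the
# ale-py Atari ROMs are installed; a real agent (e.g. DQN) would replace
# the random policy with one that learns from observations and rewards.
import gymnasium as gym

env = gym.make("ALE/Frostbite-v5")  # assumed environment id

for episode in range(1000):          # let it play over and over again
    obs, info = env.reset()          # obs is just raw screen pixels
    done, total_reward = False, 0.0
    while not done:
        # A learning agent would pick the action it currently scores
        # highest for this observation; the random stand-in shows the
        # interface: no rules, no goals are ever given to the agent.
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward       # the only feedback: score changes
        done = terminated or truncated
    print(episode, total_reward)     # early episodes look hopeless

env.close()
```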
00:37:30
Speaker
There's less explicit programming about what the computer is supposed to be doing. It actually figures out games without knowing anything about them. Exactly, yeah. And I think that's the key, right? Because sure, I mean, every video game can beat you at itself because whenever you have
00:37:53
Speaker
Except the Rock Paper Scissors video game is still not fully developed. It's the only one humans still have an advantage on. We can win 50% of the time. Right, exactly. Because the computer can just be programmed in such a way that the computer's quote unquote agent defeats you in some fashion.
00:38:19
Speaker
So that's trivial, right? So the point is that, given... I think it's interesting to think about what the inputs are, right? Because I think it's easy to lose sight of that, because it doesn't seem important, especially when you start talking about this topic of what does the computer know to start with, and what does the computer have as input as it's learning. Because the basic point is that if you told the computer exactly what all the rules were,
00:38:49
Speaker
how all the pixels moved and everything added up, then it would automatically beat you, because it's already a computer, and you're playing a computer game, and it can by definition defeat you. Because it doesn't have to have reaction time; it doesn't have to process things, to see things. It's working at the same level as the actual game.
00:39:13
Speaker
It's trivial to imagine that it would defeat you in that way. But the question is, what are we giving it? And why is that what we're giving it? And what is that telling us? So in this case, as I understand it, if you take the Frostbite example, which is that game that sounds a little bit like Frogger with a... Yeah, I've never played Frostbite before, but I love the fact that it's...
00:39:39
Speaker
It's a game that I don't know if anybody had ever heard of or remembered, but all of a sudden now it's sort of a challenge to beat it because it has different goals than computers are good at.
00:39:58
Speaker
I'll read the description of Frostbite. In Frostbite, players control an agent, Frostbite Bailey, tasked with constructing an igloo within a time limit. The igloo is built piece by piece as the agent jumps on ice floes in water, and it does look kind of like Frogger. The challenge is that the ice floes are in constant motion, moving either left or right, and ice floes only contribute to the construction of the igloo if they are visited in an active state, white rather than blue.
00:40:24
Speaker
The agent may also earn extra points by gathering fish while avoiding a number of fatal hazards: falling in the water, snow geese, polar bears, etc. Success in this game requires a temporally extended plan to ensure the agent can accomplish a sub-goal, such as reaching an ice floe, and then safely proceed to the next sub-goal. Ultimately, once all the pieces of the igloo are in place, the agent must proceed to the igloo and complete the level before time expires.
00:40:51
Speaker
So apparently, I guess what I am understanding about this is if it's a straightforward game, like Frogger, where you just have to avoid, you know, your short term goal is aligned with your long term goal, you don't have to kind of go forward and go back. And I think telescoping was the name that has been given to the sort of long term planning of different kinds of goals that you have to
00:41:18
Speaker
sort of think into the future and then complete a larger goal and then go back to your smaller goals. So this game apparently computers are terrible at. So in this experiment, so here I'll read this a little bit too. Okay, so the computer was compared to a professional gamer. The professional gamer got two hours of practice on a whole bunch of different Atari games and most of them the computer did pretty well with.
00:41:46
Speaker
The computer was trained on 200 million frames from each of the games, which equates to approximately 924 hours of game time, or about 38 days, or about 500 times as much experience as the human received. And the bottom line here is that the computer achieved less than 10% of human level performance during this. So one huge difference between human ways of playing video games and neural network ways of playing video games is that
00:42:15
Speaker
it takes a lot of iterations. They have to play it a lot of times before you really see improvements. And then with games like this where you have these different kinds of goals, you may never see human level performance. It peaks around 10% and then just kind of stays there, because it's not getting the reinforcement to look at bigger goals in the game. Yeah, I mean, it's interesting, though.
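As a quick back-of-the-envelope check of the figures quoted above, assuming the 60 frames per second of standard Atari hardware (the conversion basis is our assumption; the 200 million frames, the "roughly 924 hours," and the gamer's two hours of practice come from the discussion):

```python
# Sanity-check the training-experience figures quoted above, assuming
# Atari's standard 60 frames per second.
frames = 200_000_000                 # frames of training per game
fps = 60
hours = frames / fps / 3600          # ~926 hours, matching "roughly 924"
days = hours / 24                    # ~38.6 days of nonstop play
human_hours = 2                      # the professional gamer's practice
ratio = hours / human_hours          # ~463x, i.e. "about 500 times"

print(f"{hours:.0f} h, {days:.1f} days, {ratio:.0f}x the human's experience")
```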
00:42:43
Speaker
Right. Several things here. One is we're talking about frames. So we're measuring. Right. Yeah. We're measuring the amount of experience in frames, which is kind of a funky way to think about it. Frames, I think, are usually, like, 24 frames a second or something; that for a video game is pretty normal. The point is that every single pixel is represented perfectly in the computer's memory,
00:43:10
Speaker
and it has exact access to that. And when it moves the guy, what do we call him, this guy, Frostbite Bailey. When it moves Frostbite Bailey, it moves Frostbite Bailey with perfect fidelity and no reaction time, no slippage. I mean, remember those old 80s and 90s video games? That was the hardest part. Yeah, physically lining up
00:43:39
Speaker
so that you stepped on the thing at the right time. That was by far the hardest part. And so we're not even asking the computer to do that part. It's just funny what we're asking the computer to do and not do in this world. Because in this world, the computer gets every single pixel on every single frame, and it can move the guy, Frostbite Bailey, arbitrarily, essentially, within the rules of the game.
00:44:04
Speaker
Right, because if you could do this while you were playing, you wouldn't have to play through, like, Pitfall, like, 400 times and just barely miss the last jump, right? Exactly. You'd always make that last jump, right? Once you've got it, you get it every time. Exactly. Because we already know, like, you have to get right up to that, so one pixel of your foot is over the line, and then you jump. And you know what you have to do,
00:44:33
Speaker
because that stupid joystick, there's like so much slop in that joystick, and then you have to be pixel-perfect on your jump. Yeah. Jackson and I are playing a game like that right now. It's the Star Wars Lego video game, but it's not the new one, it's the old one, you know; it's got a lot of those elements to it. But anyway, the computer totally has no problem with that. That's easy. But we're not giving the computer anything about
00:44:58
Speaker
the goals of the game. Which is kind of funny, because the goals of a game like that are just sort of human goals, right? Which wouldn't necessarily be of concern to a machine, right? Unless we wanted them to be. So it seems, in the context of this whole conversation, odd that that's where we would draw the line, right? So no concern about the effector, no concern about the input. It's got perfect inputs and it's got perfect outputs.
00:45:25
Speaker
But the learning has to be somehow open loop outside of anything about the goals, which doesn't to me make any sense. It's easy to tell the machine what the goals are. We already established that we could train a machine. We don't even have to train it. We could build the machine, program it, to just beat this game 100% every single time. Machines are already way better at this game than we are.
00:45:51
Speaker
It's a weird thing that we decide that this is the level at which we're going to be challenging the computer to learn. Well, yeah. And I mean, I care about what happens to Frostbite Bailey. Clearly. Well, I mean, you know, it's like this, it's a story. It's a story in the game, right?

AI Adaptability and Societal Impact

00:46:10
Speaker
I don't know. I've never played the game. So I can't say how much I would be into what happens to him or the polar bears or the fish or that. But I mean,
00:46:21
Speaker
Definitely, video game designers, you have to put a lot of effort into the narrative that draws people into the game. Absolutely, the story line. And I think about what makes people good at these games, and they talk about learning to learn, which is totally true. Actually, a lot of it is just tropes. You know what's supposed to happen in the story line. And it's just like how they did it in this other game.
00:46:47
Speaker
And it's not even that you're learning to learn. It's like you're remembering a fact that it was like this in another game. Maybe they did it the same way in this game. So I think that kind of brings up the topic of transfer learning. The idea is that maybe if you had a system where you let the machine play a bunch of other games, it could get something from that and then basically have some starting point that was helpful for it in learning this game.
00:47:16
Speaker
Yeah, and I think that's kind of the, maybe it's the sort of theory building in this. I guess it's something that shows up in a number of different areas in the history of cognitive science, this distinction between sort of lots of specialized modules that do different things, or one central module that kind of does everything in a flexible kind of way. Most neural networks are
00:47:44
Speaker
specialized, I mean, they're intentionally specialized modules that can do one thing really well. So in that sense, they work by building up a number of examples, rather than the way that people build up prototypes of things or centralize the way that they understand a particular concept.
00:48:04
Speaker
Right. You know, this is sort of the exemplar versus prototype ideas in cognitive psychology. Absolutely. Yeah, for sure. Yeah. So, I mean, you know, when we consider any kind of concept, when we think about, say, a polar bear, just because polar bear is the first thing that sprang to my mind, right? When we think about a polar bear, our idea of it is built up from
00:48:27
Speaker
a number of different examples of it, and we maybe have a central prototype of that polar bear, the best example of it, that looks a particular way. Neural networks that are playing video games are building up their expertise from lots of examples of games that didn't work or games that did work, from getting feedback when something does work or when it doesn't work.
00:48:50
Speaker
No, I mean, I think I have an idea. I totally have an idea in there somewhere. I feel like I'm not expressing it exactly. I think I have a sense of where you're going with this, which is, I had a similar thought when I was reading this, which is, there's this idea of equipotentiality, right? So that, yeah, there's substance in the brain that basically can do everything. And there's a world in which, when we're kind of
00:49:18
Speaker
doing this whole effort and endeavor of benchmarking in this way, it's fundamentally an academic exercise. Because we're not talking about building something that's useful here. That's a separate topic. We've touched upon that a little bit already. And it's, I think, a very, very interesting question. And in some ways, ultimately, it's what always drives everything forward. So there are plenty of people going to be talking about it.
00:49:47
Speaker
In the context of benchmarking, this is an element of it. What is super impressive to us? Why do we find it impressive that the machine learning algorithm can play this video game? Why is that impressive? It has something to do with flexibility of thinking. So the idea that you take this system that was not built to play the video game,
00:50:15
Speaker
but can somehow learn the context and the goals of the game and be flexible in its development and learning to then become able to play the game. And so we're somehow impressed by the fact that this equipotential system, the system that's generic in its nature can somehow learn to play this game, which it was never built to play. I think that's the part that seems impressive about it.
00:50:46
Speaker
Sort of like you build a robot to do one particular task, but then it can also generalize and do something else that you didn't necessarily explicitly program it to do. Right, exactly. And I guess the hope, from a practical perspective, is that, oh, well, eventually this thing is doing all kinds of stuff you never even expected it to do
00:51:10
Speaker
and weren't even trying to do when you started out. And then all of these unintended positive consequences come from that. That's the hope, right? And it saves you work, because you can finally get a program that'll do things that you didn't have to explicitly tell it to do, that can figure out these things instead of having explicit algorithms to describe exactly what it is that you need done. It'll extrapolate a bit. Right, exactly.
00:51:39
Speaker
And I think then that's a question as to whether or not that's something we really want to be doing. Yeah. That is a good question. Should we want that? Right. We clearly do. And I think it's important to recognize that that is what we are trying to do in this type of effort. We're trying to make something that is learning stuff that it wasn't built to learn. That's what we find impressive. And is that something we want? What kind of usefulness would that have in
00:52:08
Speaker
everyday life, besides being able to target new products toward you? That's certainly one application, right? Yeah. Well, are you thinking about it in that way, just in terms of the way that machine learning is used to sell ads today, or are you thinking about something else? I'm thinking, okay, so currently the kind of flexibility or sort of extrapolation that you get in things like this, maybe,
00:52:38
Speaker
in terms of what Netflix may suggest for you. Yeah, recommendation. What Netflix recommends, what kinds of products are recommended, what sorts of ads you get on Google. But I guess I'm thinking, okay, so let's just forget about any limitations. What is it that you would want if you could do anything? So, you know, what if you could have a robot that could extrapolate anything you wanted? What kinds of things would you want it to do?
00:53:09
Speaker
Right. Yeah, exactly. I mean, do we want robots that will be able to peel our grapes for us? And then also, I'm trying to think of what the most luxurious lifestyle we could have would be, supported by robots that would cater to our every whim. Because that's kind of what it feels like, right? Absolutely.
00:53:34
Speaker
And one of the things that comes out of that very often, when you start going down this line of reasoning, which is that you want to build a machine that's flexible in its thinking and can learn the goals of a situation without knowing all the previous context, is that a lot of this stuff becomes social.
00:53:56
Speaker
It's about other people, knowing other people's minds and understanding what they would want or what they would expect or what they're about to do. So when we look at the mistakes that machines make and we're like, aha, the machine is so stupid, a lot of times that's the kind of thing we're looking at. It's like, is this person angry or sad? What's this person doing in this situation? Like, "a woman riding a horse on a dirt road" is how one picture was labeled, from figure six,
00:54:24
Speaker
when it's actually a person being dragged by a horse; or "an airplane is parked on the tarmac." So it completely misses the point of what that image is representing, or the emotional content of something like that, that you could use to talk to other people about it, right? Right, exactly. So I think part of what you are going to want to be doing is figuring out what other people are wanting or about to do or are likely to do in the future.
00:54:54
Speaker
Well, hey, I mean, we're moving into philosophy here because what would you do with your life if you could automate any aspect of it that you found to be drudgery,

Philosophy of AI-enhanced Life

00:55:05
Speaker
right? Right. So how do you extend your enjoyment of life? How do you flourish? How do you avoid repetitive tasks? Would that be a life that's desirable? It's hard to really imagine it, I guess. This is where I kind of get a little bit worried and where I start
00:55:23
Speaker
You quickly get into the spiral of that's why and how we build machines that ultimately destroy the universe or destroy humans. Because what you're going to do is you're going to build machines that basically anticipate your every need and desire and fulfill those hedonic goals. Which could evolve pretty quickly, right? Super quickly. And then all of a sudden, because you've trained these machines to do everything for you, you can't do anything yourself anymore.
00:55:52
Speaker
What happens in a couple of generations? By the way, this machine is super good at figuring out what you're going to do and what you want. And it comes back to my two points about intelligence, about what it can do really well. The reason why human beings are, we think, in charge is producing food
00:56:18
Speaker
and all the other attendant basic life-sustaining elements, and then war, or violence. Obviously a lot of what we're going to be doing with these machines, when we make them able to interpret people's goal-directed behavior and anticipate people's goal-directed behavior, is we're going to make them into war machines, either intentionally or unintentionally. Almost certainly it will be intentional at first and then it will spiral out of control.
00:56:50
Speaker
That's a bit of a dark forecast. I mean, there's a lot of ways for the world to end. Oh, yeah. There's a science fiction book in there somewhere. No, no, no. I know. I'm not saying that we'll end that way. I'm just saying you could imagine how that would turn out that way very quickly.
00:57:08
Speaker
when you start going down that. That's where my mind always goes when we start going down the social piece, right, which is like figuring out how people, anticipating people's goal-directed behavior and desires, why you would do that, and what society is likely to do with it. Because think about who's funding a project like that. It's either gonna be Google, what are they gonna try to do? They're gonna try to sell you ads, or it's gonna be the military. So how do we get out of that? I mean, that's kind of what,
00:57:38
Speaker
from a practical perspective, what I would really like to do is figure out how we can make machines that actually help us in ways that we really want them to help us, on a going-forward basis. I think a key part of that rests on what it is that we actually want. I think that is the hard question. I do feel like the part of
00:58:03
Speaker
machines improving enough so that they can fulfill our needs well enough is good, but I think it should be human beings anticipating what human beings want. We have to be the ones that are deciding what it is that we want. We don't want to be stuck in a spiral of machines figuring out what it is that we want and fulfilling those needs. I want to be a conscious agent here, and I want to feel as though it's under my control, I guess.
00:58:31
Speaker
Sure, but you know, it's not. I know it's not. Well, we've already identified two places where this is already happening and working quite well. One is video games, right? Video games in the sense that a really good video game is carefully meting out rewards on a schedule and in a way that is maximally pleasure-giving.
00:59:02
Speaker
And that's what sells the video game. The better it does that, the more the video game sells. And they do pretty well. So you could imagine you extend that; obviously that's a world that's been explored pretty extensively in science fiction, et cetera. But it's a good one. Yeah, yeah, yeah, absolutely. So it's happening there. It's happening in video games. And it will happen more in better video games.
00:59:30
Speaker
And then, you know, it's already happening, as we say, in commerce, to your point exactly, with the recommendation engine, and then also just the ads that you see. So many of our brightest, best minds in artificial intelligence right now, a huge proportion of them, are selling ads. 95%. Yeah, capitalism pays off pretty well.
00:59:52
Speaker
Yeah, and in this very specific way, too. It's not even building useful products that people buy. It's selling the same crappy shit more efficiently. Yeah. What a fucking waste of time that is, man. Think about how many really brilliant people are developing algorithms at Google, mostly to deliver ads to you at the right place and the right time, so that you're maximally likely to purchase at that moment.
01:00:21
Speaker
I mean, in a way, maybe it's better that they're doing that than sort of fully advancing the evil possibilities for artificial intelligence.

AI's Socioeconomic Role

01:00:31
Speaker
I mean, ads are bad, but they're not evil. Well, yeah, especially given what they're doing with them, it's essentially useless. Sorry, not useless, essentially neutral from an ethical perspective, because they're just delivering them more efficiently.
01:00:45
Speaker
They're not even doing that much with the content itself yet. They're just delivering the right ad to the right place at the right time, to the right person. I think maybe that's a good thing, that all of the brightest minds are working on ads, because it delays the moment of the Robopocalypse, I think. At least for a couple of years. Right. Exactly. Because as long as those folks have enough of their avocado toast,
01:01:14
Speaker
And they are satisfied. I do. By the way, just as a quick aside, I actually do enjoy avocado toast. I feel like it's a tasty combination. Avocado is good on anything, man. I am not a hater of avocado in any way. I love avocado. I'll eat it on anything. Toast. Yeah. Toast is good. Yeah. Yeah, the challenge with avocado toast is just when you have to pay $8. Right.
01:01:44
Speaker
Then it seems sort of snobby. Yeah, then you're just like, okay, this is ridiculous. $8 avocado toast. Yeah, it's a very easy thing to just take an avocado and put it on a piece of toast. I mean, there are a couple different recipes you can use. Right. Yeah, it's fundamentally just toast. With avocado though. Right. That's the key part. So the toast is $1.
01:02:12
Speaker
and the avocado is a dollar, and somehow the avocado toast is $8. Well, that's another thing. This may actually be a useful thing too, that all of the wealth of Silicon Valley is going to the avocado industry and not necessarily toward the Robopocalypse. Exactly. Exactly. So that is good. It's useful that we keep these people occupied. So what we need to do is engineer better avocado toast that keeps all of these brilliant
01:02:41
Speaker
but essentially amoral minds occupied. And probably boats are in there somewhere too, right? If everyone had, like, a boat. A boat. Think about it, if you had a boat. Like a yacht, something to put money into. Yeah, a really nice yacht, because it's San Francisco, right? And a place to put it. These guys, they're not going to think about the Robopocalypse for quite a while.
01:03:08
Speaker
They'll be fully occupied with their boats and their avocado toast, and they'll leave us alone. Well, maybe that's what they really want. Maybe that's what I really want. Maybe that's why I'm thinking about it. You mean right now? Yeah, exactly. If I had a boat and I was eating avocado toast and drinking, like, a margarita, I don't know. That sounds pretty good. Yeah. You don't have some deeper... um, yeah, I think maybe that's where we land, which is that
01:03:37
Speaker
Our needs don't necessarily need to go much farther than that. Right. Right. But yeah, it's funny, because it doesn't really work out like that, does it? Because you and I, if we wanted to right now, could be on a fucking boat eating avocado toast. We could be recording this podcast on a boat, eating avocado toast and drinking margaritas. Yes. But we're not. We're not. And everybody in the whole world could have a boat and avocado toast every single day.
01:04:06
Speaker
Well, maybe not avocado. That puts a lot of stress on the avocado supply. Yeah, the avocado might be the limiting reagent there. Yeah, that's probably true. I think there's clearly going to be a push to make artificial avocado. Yeah, exactly. Well, that's the next level of it. Okay, so why is that not what we're doing right now? Yeah, to the point of just...
01:04:36
Speaker
the bigger picture, I guess. So yeah, it's all couched in that sense. So in this case, avocado toast just represents pure hedonism. Yeah, I mean, in a benign way, right? In, like, a benign way. Something nice, not something so crazy or ridiculous. We're not just plugged into our orgasmatron. Right, exactly.
01:04:58
Speaker
Exactly. And it's something we can already produce. It exists out there in the world. I guess the way I would extend this would be to say: why is it that we're not able, as a society, to do some basic stuff:
01:05:15
Speaker
make sure that everybody has food they can eat every single day, and a place for them to be? We've decided that some people get a big yacht and avocado toast, and other people don't get any of that stuff. And this is the thing. I mean, this is a political discussion now. You're asking me why I'm not doing that right now, and it's a different story, because it plugs into the political question, because
01:05:42
Speaker
I'm not doing it right now because I can't afford it. Well, I could afford to do it today, so it's not that I can't afford to do it today. I just couldn't afford to do it every single day for the rest of my life. And if I were able to do it every single day for the rest of my life, it would be a serious conversation about whether that's what I would be doing: just hanging out, eating nice food, going for walks with my family, and hanging out with my friends. Taking up hobbies, community.
01:06:11
Speaker
Exactly, exactly. I mean, that's probably what I would try to do. I don't know if that would work out. Most people don't even get time to think about those kinds of questions. It's not something that comes up in day-to-day conversation. Your short-term goals mostly overrule your sort of long-term aspirations, in a way. I mean, we're imagining being relieved of every menial task, having artificial intelligence do most of that stuff for you. Well,
01:06:41
Speaker
who today has the time to think about that stuff? Yeah, exactly. So you're just propelling yourself forward one step at a time, in a disordered way. In the future, you could have all your farmers be robots. That is very possible. Very possible, very possible. And actually, we could do it today. If we really had our shit together, we could do this in a very short period of time.
01:07:10
Speaker
We could automate anything manual in the whole agricultural industry before very long, including driving the food to the store. I've got to say, I think farming has probably become a less enjoyable way to spend your time these days, too, if you work on a gigantic farm where you're either just driving around a machine or slaughtering animals en masse. Right. I mean, I was just watching a video of some farming stuff, picking celery, I think,
01:07:39
Speaker
And the humans were doing the picking, but then they were just tossing it all into this really big machine that was processing everything. And there were a lot of people doing a very repetitive task really fast, just chopping. They're doing the one thing that AI robots are too expensive to do. Exactly. It's actually just less expensive to have humans do it. Humans are just slightly cheaper robots, because they're more all-purpose, for now. Yeah, for this particular task, at this moment. Yeah, they're a little bit less expensive.
01:08:09
Speaker
So the question becomes: if we had that, why wouldn't we all just hang out and do stuff that was fun? I think because we're fidgety. I think it's basically that human beings are fidgety and they don't stop. Fidgety, sure, but think about the structure of it.

AI and Meaningful Existence

01:08:30
Speaker
Fidgety to what end? I mean, today it's still the case that economically some people are picking crops and need to, but there's still plenty of food. Why is it that some people don't have any? It's because some people want to have all of it, and they're willing to use all of their resources to make sure that they have it and somebody else doesn't, because they just want more of it.
01:09:00
Speaker
And, you know, in some sense I think that's the fullest extent of intelligence, in the sense that when we talk about people who are really intelligent, we always look at people who are successful. And usually we'll point to something specialized, you know, like art or science. But at the end of the day, it's always part and parcel with being socioeconomically successful, right? That is fundamentally,
01:09:30
Speaker
I think, back to my original point, production and violence being the two big end goals, the end states, of what general intelligence ultimately is. It's an interesting proposal. Just a thought. Just a thought, not fully worked out. Well, clearly you need to read more Ayn Rand, because obviously robber barons are
01:09:57
Speaker
the economic engine that drives our society. Right. Yeah, exactly. So then, in terms of what you want from the world, I think that's where things get complicated. So much of it becomes relativistic. I mean, I think the fidgetiness you're mentioning also kind of comes down to that a little bit. Like I was saying, you know, I can't afford to eat avocado toast and hang out on my big yacht. But I could...
01:10:24
Speaker
I probably have enough wealth to eat rice and beans every day and never have to work again. You could get by on your basic needs, for sure, if you feel that having that extra surplus of time is worth it. I think there are a few themes there. One is goals: what do we want the machines to do? What are we impressed by that they can do?
01:10:56
Speaker
And then, what are machines good at and not good at? And what does that tell us about the nature of our brains? We didn't get into the brain stuff as much, I guess. That would be a nice thing to think about. Because I think a real key issue, one where some of this work is bringing surprising results, is how much it's changing how we think of our own intelligence, how
01:11:22
Speaker
we can understand human intelligence better. And apart from just the chess-playing examples, I think there are more fundamental ways in which we can understand the shortcomings and advantages of human intelligence through all of this work. That's what got me interested in this topic in the

Understanding Human Intelligence through AI

01:11:43
Speaker
first place. My original interest in the topic... this is probably the thing in the world that I'm the most interested in: how we
01:11:51
Speaker
use technology to understand the brain, and how we use the brain to understand technology. When I started in neuroscience, that was the reason I got into it. I wanted to build a machine that did exactly this. I wanted to build a machine that was like a brain. I thought it would be cool because it would be a great model for understanding the brain. We could do a bunch of stuff with it in terms of understanding, and maybe even curing, diseases,
01:12:22
Speaker
and then we could also potentially build some useful tools out of it. At that time, I wasn't even thinking about uploading myself. The comment I would make, too, is that one of the really interesting questions in there is whether, if you made a machine that thought like a human, we would really understand thinking any better. And I think one of the things that
01:12:50
Speaker
neural networks have demonstrated is: not necessarily. Even if we understood every single neuron in our brain and how it connects to every other neuron, and even if we could watch it all in real time with perfect imaging, we still wouldn't fully understand how thinking and consciousness and all of that good stuff comes about.
01:13:14
Speaker
So what sorts of things can we understand from it, and what sorts of things can't we? I think a lot of these neural network models demonstrate that you can have a really powerful way to solve problems and recognize patterns while essentially not understanding what it's recognizing, or how it's recognizing it, in the first place. That's a big characteristic of deep learning networks: as a human being
01:13:43
Speaker
looking at the results of the learning process the network went through, you can't actually make sense of the representations. You can infer that it has somehow developed representations at different levels throughout the network, but you can't really make sense of them. It doesn't give you that intuition about the relationships. So, for example, if you want to use the algorithm to
01:14:10
Speaker
teach you about the relationship between different variables, it doesn't really do that very well unless you explicitly set it up to do that. And if you're interested in the steps it's going through, it doesn't give you a lot of intuition into that either. It is very much a black box. And if you made it bigger and bigger, and more and more human-like in its thinking, it seems like you would probably not be getting closer and closer to
01:14:39
Speaker
having an intuition about the different steps; maybe you'd be getting farther away. I think that is one thing that's becoming clearer: you don't necessarily understand everything from being able to build it, because it's not quite so algorithmic, not quite so straightforward as that. The more powerful you get, the less explanatory power you may have. Yeah.
01:15:05
Speaker
One of the things I wanted to bring up in this conversation was this comment on page 20: as Crick (1989) famously pointed out, backpropagation seems to require that information be transmitted backwards along the axon, which does not fit with realistic models of neural function. I was a little confused by that. I mean, the relationship between the way the neural network machine learning algorithm is doing something and the way that
01:15:33
Speaker
the actual neural network, the neurons, are doing something. Obviously, it's always a bit of an analogy, right? It's not that they're actually doing it the same way. And I always assumed that the analogy with backpropagation was just feedback connections, not that it was up and down the same neuron, with the neuron as the unit there. So there are some neurons that feed forward and some neurons that feed back. Right, exactly.
01:15:58
Speaker
I mean, if you even look at the visual system, for example, a huge proportion of the inputs to the LGN are feedback from V1, something like 90%. Yeah, and we've always thought about that stage of processing as being essentially a filter, you know, a very straightforward, dumb filter. But in fact
01:16:17
Speaker
a huge portion of the input is actually feedback. So backpropagation, to me, does not seem to be a problem from a neural perspective. It seems like that's actually critical. So I was a little confused as to why they would suggest that it was even a problem for the analogy, right? I don't know. Yeah, I don't know the answer to that. That's a great article, too: Crick, 1989, "The recent excitement about neural networks."
01:16:44
Speaker
You can almost hear the disdain in the title. Yeah, this is clearly cyclical. It's like 3D glasses or something. Yeah. People get into it for a while. Well, that's a whole related topic, which is virtual reality. I would be interested in that, too. Yeah. Well, that would be for another day. Let's do that one on another episode.
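For readers curious what the backpropagation step under discussion actually looks like, here is a minimal sketch: a tiny two-layer network trained with NumPy. The XOR task, the learning rate, and all variable names are illustrative assumptions, not anything from the article or the episode; the point is just that the error signal travels backwards through the transposed weights (W2.T below), the step Crick flagged as biologically unrealistic.

import numpy as np

rng = np.random.default_rng(0)

# Toy task (an illustrative assumption): XOR, the classic
# non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight matrices: input -> hidden (2x4), hidden -> output (4x1).
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, chosen for illustration
for step in range(5000):
    # Forward pass: activity flows forward through the network.
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # network output

    # Backward pass: the output error is propagated backwards through
    # the same weights (W2.T), the step real axons cannot literally do.
    err_out = (out - y) * out * (1.0 - out)    # delta at the output layer
    err_h = (err_out @ W2.T) * h * (1.0 - h)   # delta at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ err_out)
    W1 -= lr * (X.T @ err_h)

# Approaches [[0], [1], [1], [0]] when training succeeds; an unlucky
# random seed can leave it stuck in a local minimum.
print(np.round(out, 2))

Notice, too, that after training, nothing in W1 or W2 reads out as an interpretable rule for XOR; that is a miniature version of the black-box point made above.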