Introduction to AI and Guest
00:00:00
Speaker
Hello, welcome to The Sane and Miraculous. I am your host, Robbie Carlton. Today, we have a little bit of a change of pace. We are talking about artificial intelligence. The title is Fear and Hope
00:00:15
Speaker
at the dawn of the age of AI. I think this is a really important thing to be talking about. I am having this conversation with my good friend Michael Porcelli, who generously agreed both to jump on at kind of late notice to do this recording, and also
00:00:32
Speaker
as a result, to push the recording he and I have already done about his own work, MetaRelating, a little ways into the future. So there will be another conversation with him coming down the pipe sometime later. But this feels... it's in the air, it's pressing, things are happening very fast, and so it just felt like we wanted to get this one out sooner rather than later.
Topics Overview and Recent Developments
00:00:57
Speaker
Things we're going to be covering in this conversation: we're going to give a summary of what's happening right now. You may or may not be deeply involved in this topic already, in which case you probably know what's happening, but if not, that will be helpful. Then an overview of the technology; we're not going to get super technical, but just a little bit of an overview of how
00:01:20
Speaker
how it works. We both have backgrounds in software engineering, but this is neither of our areas of expertise. And then we're going to get into some of the political and philosophical and spiritual issues raised by this subject. And they really kind of cover
00:01:36
Speaker
I think most of the big topics in a broad way. And I think it's a lot of fun. There's a lot of energy; I was definitely very caffeinated when I had this conversation, which I can hear in the audio. So that's advance notice, for whatever that's worth. There's a couple of things I wanted to say before we get into the conversation as well. Between recording this and today, when I'm releasing it, there have been more developments that we would definitely have talked about
00:02:03
Speaker
if they had happened before we recorded. So, importantly, a paper was published, a long paper, which I've not had time to read beyond the abstract and the table of contents. It's a very long paper called "Sparks of Artificial General Intelligence," about GPT-4. That's linked in the show notes of this podcast; wherever you're listening to it, you can go look at that paper. And then, maybe more dramatically, Eliezer Yudkowsky, who we talk about in the podcast, just published an article in Time magazine
00:02:31
Speaker
calling for a complete halt to all AI development, ringing the alarm bell very loudly. So in the recording we talk about Yudkowsky and we say that he kind of maybe has given up on trying to get this to stop. That's clearly not the case since we made that recording. He has not given up. He's doubled down, and he's really, you know,
00:02:53
Speaker
making a big push to draw attention to this. I also have not had time to read that, but ironically, I was able to paste the text of that Time article into ChatGPT and ask it to summarize it for me, so I understand the summary of what he was saying. It's pretty good at that. I tried doing that with the paper as well, but it's too long
00:03:16
Speaker
and it hits its limit. Which points at a slightly worrying trend: like, oh man, reading is already hard enough, and now we can just ask these machines to summarize things for us, so we don't really have to read anymore. I'm going to try and resist that in general, but I just wanted to know as much as I could going into recording this intro. So that's my excuse, and I'm sticking with it.
00:03:37
Speaker
There's one more thing that I want to add both for the sake of your clarity and also my intellectual vanity, which is early on there's a point where I contrast a connectionist approach with a machine learning approach. What I meant to say there was a symbolic approach with a machine learning approach. Connectionist and machine learning are on the same side of that coin. The real dichotomy is between symbolic and connectionist slash machine learning approaches.
00:04:01
Speaker
Anyway, I had a lot of fun having this conversation with Michael Porcelli. We cover many things. It's interesting. I learned some things. I think he learned some things. I hope that you will learn
AI's Intersection with Philosophy and Spirituality
00:04:10
Speaker
some things. So here is fear and hope at the dawn of the age of AI.
00:04:32
Speaker
Okay. Michael Porcelli, here we are. We've been talking about having a conversation about AI for, I don't know, at least five years we've talked about recording this conversation, and, uh, everybody else caught up
00:04:48
Speaker
and realized it was worth talking about. I don't know why, it just seems like suddenly people thought that they should make YouTube videos about AI. Might have something to do with the insane explosion in power that we've seen in the last year. So we're here, and we're here to talk about AI, and
00:05:09
Speaker
let me just set a little bit of context for this. Both Porcelli and I have backgrounds in computer science. My day job is as a software engineer; Porcelli's day job was as a software engineer for a long time. We're both also kind of philosophy and consciousness nerds. So it's like an intersection of computer science,
00:05:34
Speaker
philosophy, consciousness, spirituality. The conversation about AI is just right in the center of what me and Porcelli like to talk about.
00:05:45
Speaker
Yes. And it's in the center of the discourse right now. It's in the center of the discourse. A lot of people are talking about it very reasonably and we're going to get into all of that. And I'll just say the basic arc of this conversation, we'll see if it goes this way, but the basic arc of this conversation is going to be, we're going to just talk a little bit about what's happening right now.
00:06:08
Speaker
And then we're going to do a kind of tour of some of the major issues which are implicated in what's happening right now, and in the conversation of AI in general. So we're going to talk about the technology as it stands and what it's making possible. We're going to talk about what intelligence is. We're going to talk about the economic impact and automation. We're going to talk about the alignment problem, also called the control problem.
00:06:36
Speaker
The consciousness explosion. And then the, oh, sorry, intelligence explosion, not consciousness explosion. That's a different idea. That's kind of cool, a consciousness explosion.
Balancing Fear and Hope in AI
00:06:45
Speaker
But intelligence explosion. And then consciousness, the ethics of consciousness, stuff like that. And maybe some other stuff in there as well. And I'm holding it like
00:06:57
Speaker
I want us to do just a primer on these things. I want to just give people a general overview of, like I said, what's happening and the issues. And I also,
00:07:08
Speaker
I want to lay out some of the reasons that, you know, there are reasons to be worried about this. And I think we'll talk about those and they're worth talking about. I also want to lay out some of the reasons, um, to not be worried or even to be optimistic or hopeful. Like I want to include both sides of that and, and open up, well, what might this mean?
00:07:31
Speaker
That's, you know, like, Barbara Marx Hubbard had this great question she would ask about technology, kind of in the face of the atom bomb. She was kind of digesting, and it's been a while since I heard her tell the story, but she was digesting the atom bomb and the implications of the atom bomb. And the question she started asking herself is: what is the meaning of this technology, which is good?
00:07:55
Speaker
And you know, it's kind of a mouthful. It's not like really the most elegant question, but, but I think that that is a valuable question to ask in the face of scary new changes. So I think that's, yeah, just that's some of the context. And, uh, yeah, maybe if you want to respond to the context before we dive into anything else. Yeah. I'm, I'm excited to talk about all of those things. I mean, I think.
00:08:18
Speaker
at a certain level, you know, it's fun for us to talk about these things. And it's fun that there's a lot of people talking about it. And then I think there's, you know, folks who are maybe new to the conversation and feel sort of confused, and I think there's potentially some basic groundwork that we can include in this, at least from our point of view, to share with folks. And then, you know, in terms of laying out
00:08:44
Speaker
reasons to be scared or reasons to be hopeful, I think that's important.
AI's Societal and Historical Impact
00:08:51
Speaker
I mean, I think the implications of AI, socially, politically, environmentally, at many levels: the potential impact is enormous. And the analogy to
00:09:04
Speaker
the atomic bomb, or some incredibly huge, or potentially huge, breakthrough in science or engineering: it could change reality in significant ways, forever. I think that's a very real possibility anyway.
00:09:24
Speaker
I don't love the analogy with the atomic bomb because it's all downside. I guess you could say nuclear power: if you were into nuclear way back in the 40s and 50s, people loved it. They loved putting a little atom diagram on everything. It was like, this is the best, it's going to solve everything, we love nuclear. The nuclear age was a positive thing. Right. It turned out not so great, maybe.
00:09:47
Speaker
That's complicated. I mean, another analogy: I started saying this just under a year ago. I started saying, oh, we're at the dawn of the age of AI, and this is going to be a technological shift at least as big as the information age.
00:10:06
Speaker
Yes, it's going to have the same level of impact on economics, the same level of impact on lifestyle. It's that big, it's that important. So the metaphor I've been using is the metaphor of the information age, and especially the web, right? And like, yeah.
00:10:23
Speaker
Who knew, right? Remember in like 1993, when you got a dial-up modem, and the nerdiest people had a dial-up modem and they were going on bulletin boards, or they were going on one of the hundred web pages that were out there, and those nerdy people were saying,
00:10:45
Speaker
this is a big deal. And everybody else was saying, I dunno, that looks pretty nutty, bro. Like, I'm still doing my hacky sack, or I'm listening to my ska music at the park. And then two years later, three years later, four years later, it's completely transformed the economy. And now here we are, what, 30 years later, and of course it's obvious to everybody what a big impact that was. I would say, you know, a year ago,
00:11:11
Speaker
we were in that moment of the nutty people with the dial-up modems of 1992, 1993. And now we're in the, oh, Amazon has opened and Google has just opened, right? We're at just the beginning of regular people going, oh, this is making a difference in my life today.
00:11:32
Speaker
Yep. Yeah. I mean, I think your analogy is almost like a minimum: it's at least as significant as that. But you could say it's as significant as electricity, or the industrial revolution. Or, and this may be kind of previewing a later part of our conversation, the emergence of life on earth, right? Like, oh, it's another form of life emerging on earth.
00:11:54
Speaker
Good, good. Yeah, it's at least as significant as the dawn of the information age. Yeah. So maybe we'll just talk a little bit about what's happening. There will be time codes in the description, so you can jump around if you want to jump around. For some people, this is not going to be news, because they're up to date. And for some people, maybe you've heard
00:12:16
Speaker
there's a thing called ChatGPT, or you've heard there's this AI image generation, or you've kind of seen it on people's Facebook profiles or whatever, and that's the extent of it. So we're just going to cover the big picture of what's happening right now. Basically, starting a little over a year ago,
00:12:33
Speaker
there were some pretty significant advances, and this is kind of my take on it, so jump in if you have different thoughts, but there were some pretty significant advances in a certain kind of technology,
00:12:50
Speaker
which is a kind of deep learning technology, and we'll talk about exactly what deep learning means, specifically for synthesizing images, and synthesizing images from text prompts.
00:13:06
Speaker
So what that means is: you type some text into a computer and it spits back out an image that it did not search for on Google. It just generated the pixels of the image. And what started happening, because of some of these breakthroughs, is it started making images which were incredibly lifelike, or just
00:13:30
Speaker
aesthetically compelling. And also, it was making images of the thing you were describing. So there's, like, a corgi wearing sunglasses, or a chair shaped like an avocado, or, you know, some of these examples are like,
00:13:48
Speaker
a dog walking on the moon in a spacesuit. Just whatever example, you could tell it to do that, and it would make an image that looks like that. And not just sort of looked like it, like, yeah, I can kind of see it, like a five-year-old's drawing, yeah, I guess you were trying to do that. It was a photorealistic picture of a dog walking on the moon. Right. Yeah. And so I think for a lot of people, that's when it was like,
00:14:10
Speaker
whoa, this is fucked up. And I was like, wait, what? What's happening here? And I started kind of, you know, playing with this stuff. And I'll just say, at that moment, and this was the summer of last year, playing with Midjourney and Stable Diffusion and those things, I had kind of an ecstatic spiritual experience. Like, I had this overwhelm
00:14:34
Speaker
at the beauty of the things it was producing. And it's weird to say this, and we're going to get into the art side of things. I think artistically it's complicated, and I've changed my feeling about the art side of things, but I can't help but still be kind of flabbergasted by how good-looking
00:14:53
Speaker
the stuff it makes is. Like, it makes really good looking stuff. And I had this just like wave of like, oh my God.
ChatGPT and AI Creativity
00:15:02
Speaker
And I could almost say, this is a thing that I'm going to say in more detail on a TikTok. Actually, I'm working on a TikTok, which is a ridiculous statement, but I am working on a TikTok. But I'll say it here, which is: there's a way you can ask, what does it mean that God made us in his image?
00:15:19
Speaker
I think the real way to take it, it's not that God is a dude with a beard and a dick, it's that we are the part of reality which is creative. Like, deliberately creative.
00:15:35
Speaker
And not just kind of like, you know, squirrels make squirrels, but they're not thinking about making squirrels and then making squirrels. There's this kind of force multiplier of novelty generation that human beings have that's unlike anything in nature, except all of nature is like that. And this was the first time that there was something else that had this novelty-producing quality
00:16:01
Speaker
that felt like another kind of force multiplier of novelty production. And I think that was what the ecstatic response was. And anyway, so I dove really deep into it. And then very quickly, actually, got bored of it. It was very interesting, and then suddenly it just felt kind of samey, and I just lost interest. And then,
00:16:21
Speaker
a few months later, ChatGPT dropped, and ChatGPT is using related, similar technology. I understand the technology of the image stuff much better than the GPT stuff, so maybe we can kind of break those down. I think you might understand GPT better than me, but they work on similar principles, broadly. But again, we are going to talk about that.
00:16:41
Speaker
But ChatGPT came out, and I can't imagine somebody listening to this hasn't been exposed to ChatGPT at this point, because it's just, like, the New Yorker is doing articles about it. It's all over everywhere. But just in case you are not paying attention to all that stuff: the basic idea of ChatGPT is that it's a chatbot. It's just a text prompt, and you type into the text prompt, and then it responds as if it were
00:17:08
Speaker
a person with thoughts, basically. And you can ask it anything, including, and this is one of the more impressive things, you can ask it questions like you would Google, and it gives you answers.
00:17:22
Speaker
It understands your question at a much better level. With Google, you know how you have to get clever about, I've got to put this word in, but not that word, because then I'll get too many results about this other thing. You've got to have some Google-fu sometimes to get the answer to the question you're looking for. ChatGPT is not like that. It understands specifically what you're asking about. And if it doesn't,
00:17:42
Speaker
you can correct it. It'll give you an answer, and you're like, well, wait, no, no, I didn't mean that, I actually meant this. And it's like, oh, sorry, I see what you were talking about, la la la. So you're having this dialogue. It's funny, because I went back to using Midjourney recently, and you can't do that with Midjourney yet. Like, I'll give it a prompt and it'll draw something, and I want to say, yeah, like that, but just make this adjustment, and you can't do that yet. And that's annoying, because now I'm already used to ChatGPT, where you can give it these adjustments.
00:18:07
Speaker
But beyond even that conversational thing, I use it a lot to look up whether there's a shortcut key for some software I'm using, or something like that. I actually find it way better to say, wait, how do I do this in Adobe Audition? It'll just tell me, versus trying to Google that. Stuff like that. Anyway,
00:18:28
Speaker
but the other thing it can do, which is so fucking wild, is you can ask it for something in a different form. So you can say, give me a poem about blank in the style of blank. You can say, give me a poem in the style of William Carlos Williams about the Mona Lisa, just whatever. And it will,
00:18:51
Speaker
you know, nail the stylistic thing better or worse; sometimes it's good, sometimes bad. But, you know, write me a rap song about X: it can. And it just immediately, again, with this kind of weird generativity, just generates
00:19:04
Speaker
this new content, which has never existed before. And then I'll just say the one, I don't know if you've done this yet, and I know I've just said a lot and I'd love to hear from you, but the one that blew my mind more than anything else, maybe. Well, also, code generation blew my mind a lot. Like, you can give it code,
00:19:22
Speaker
you can ask it to do code stuff, and it'll do code stuff. It sometimes gets it wrong, sometimes gets it right, but it's still very, very impressive. But the one that just absolutely delighted me, which, folks listening at home, if you haven't tried this yet, and Porcelli, I'm curious if you've tried this: you can ask it to be a text adventure role-playing game, and it will do it.
00:19:39
Speaker
You can say: be a text adventure role-playing game, you describe a situation, and I'll respond and tell you what I want to do, and we'll go like that. And you can tell it, like, I made it be about Roman London. I want it set in Roman London. And it says: you're in the marketplace in London, ahead of you is the fishmonger selling the fish, to the right is a tavern where... You can literally have a whole fucking game with it. Like, it's amazing.
00:20:08
Speaker
That was just like, oh my God, what's going on here? So these two technologies came out, or developed; they'd been in the pipeline, but they really got to a new level of sophistication and power. A lot has happened as a result of that, good and bad, and
00:20:27
Speaker
these are the kind of things that are at the forefront. And, you know, the thing that these technologies are resting on, which we're about to get into in a moment, the technical side of things: we're not going to get into great detail about that, but a little bit.
00:20:44
Speaker
A lot of other technologies are possible on top of it, which are kind of happening more in the background. But I think these are the more flashy ones, these kind of creative things. Yeah. Well, I'll add a little color here. I mean, I do think this is why people are all hyped up about using it and talking about it and thinking about the implications, both on the level of, is this going to replace my job, or
00:21:08
Speaker
how can I use this in my job to make me better at my job, to, oh my gosh, is this the emergence of whatever, you know, apocalyptic AI scenarios or something? And I think
00:21:23
Speaker
that it's kind of captured the imagination.
Public Engagement and AI Concerns
00:21:26
Speaker
There's a few other things I kind of want to fill in from the past year as well. Like, there was this whole news story about a Google engineer who was talking to Google's internal chatbot, which was sort of the equivalent of ChatGPT, but
00:21:40
Speaker
whatever. He went public with his thoughts that it's actually conscious, and then Google fired him. That was kind of weird. Yeah, he was a whistleblower, a consciousness whistleblower. Yeah, he was kind of trying to be a whistleblower. Yeah. And then, a few months later, ChatGPT came out, and now that the hype is out, I think he's kind of
00:21:59
Speaker
in the media more, like, see, I was right, this kind of thing. And then Facebook has one that, so, you know, it's always been part of AI history to make algorithms that play games, right? Like Deep Blue beating Kasparov at chess back in 1997.
00:22:19
Speaker
This kind of thing. And there is a very famous game called Diplomacy, which is an old board game, but it's a board game that is really very psychological and political. I've never played it, but I've heard there's just a lot of trying to persuade players to join your strategy and to do what you want them to do, and you're trying to manipulate other people into doing things on the game board. The game is kind of like Risk.
00:22:47
Speaker
Yeah, the board game Risk, but instead of it all being decided by dice rolls, it's decided by diplomacy. You're going and having conferences, your own Yalta conference or whatever; you're going and negotiating with the other world leaders to try and make alliances, but then you stab people in the back. But yeah, like you said, it's the mechanics of the game. It's not a mathematical game like chess or Go, it's a psychological game.
00:23:10
Speaker
It's a psychological game. And apparently Facebook has made essentially an AI-based player that can play at essentially world tournament level Diplomacy, which is kind of scary in its own right. Because it's like, oh wait, the AI is now really good at persuading other high-end Diplomacy players to do what it wants. And I'm like, oh, that's...
00:23:34
Speaker
But, you know, it reminded me of the "a strange game, the only winning move is not to play" line from WarGames. Yeah. Yeah. There was another sort of, I suppose, kind of event along the way, sort of early on. I mean, it's interesting: we're nearing the end of March, April 1st is around the corner. And I think on April 1st, a year ago,
00:24:01
Speaker
Eliezer Yudkowsky, who is this sort of famous AI alignment, whatever, tech philosopher guy, published on LessWrong, which is kind of a gathering point on the internet for people who love to talk about rationality and artificial intelligence and these related issues,
00:24:19
Speaker
and he kind of published, it almost seemed like it was half joking but half serious, kind of like: hey, it's too late, the AI is going to do whatever, and now we've just got to, I suppose, embrace our end with dignity or something like this. And it was like, is this an April Fool's joke or not? But those people are usually a little bit ahead of the curve. Yeah, yeah, they've been talking about it for a very long time. But
00:24:46
Speaker
what a lot of people are talking about now, like in the New York Times, is kind of what they were talking about a year ago. Like, what does this actually mean? Folks in those circles have taken this idea of superintelligence and alignment, which we're going to talk about a little more, very seriously for a while.
00:25:03
Speaker
And they were kind of saying, these events, whatever they are, the emergence of these giant generative models, the image ones like DALL-E and Midjourney, and these chat ones, of which there's now a bunch: this means we've entered into the phase where these tech giants with huge resources are essentially just trying to get there first. And we're now in this kind of
00:25:29
Speaker
capitalistic, competitive race where they're going to throw caution to the wind, and they're just going to keep trying to create the latest, greatest, whatever it is. And that's exactly what you would expect to see in a moment leading up to, like, the doom scenario, right?
AI Development and Technological Mechanics
00:25:49
Speaker
So we're in the, yeah, we're in the arms race, which is the fear end of the spectrum. And where I think, well, let's come back to that when we talk about the alignment problem, because there's a lot more to say about that. So one of the things that's happened in the last year is that some of the people that have been
00:26:07
Speaker
kind of sounding the alarm bells about the dangers of AI are even more so, or maybe have even, like, given up. I mean, it's kind of interesting. Yudkowsky for years has been ringing that alarm bell very loud, and I have a story I want to tell about Yudkowsky in this conversation. But suddenly he
00:26:25
Speaker
has kind of maybe thrown in the towel, maybe not, we're not totally sure. It's interesting. Anyway, I have always thought his view is useful and overly pessimistic, and I continue to think so. Although now, if he's thrown in the towel, I don't know if it's useful anymore. But anyway, so let's just briefly talk about the technology. We're not going to go into huge detail. Sure.
00:26:50
Speaker
For a long time, AI, so, AI, artificial intelligence, is computers trying to do things which we would typically call intelligent. And we're going to get into what we mean by intelligent later, but you know, you have some kind of folk understanding of what intelligence means. So you're trying to make a computer do intelligent things. And that looks like
00:27:16
Speaker
solving problems, being creative. Solving problems where it's not just given a sequence of steps to carry out, but it actually has to look at the space of the problem and determine its own sequence of steps. Which is what we do, right? That's what human beings do that's different from a rock,
00:27:37
Speaker
or, you know, other non-sentient, non-intelligent things. So historically, one of the approaches, this was a more symbolic approach, was you would try to write code which somehow modeled the world, and then try to draw inferences from that model. So you would write code that somehow understood,
00:27:58
Speaker
say you are writing an image synthesis program, like what Midjourney is. You would write code that understood what an apple looked like, and it had in its data bank: an apple is round and red and shiny. And so when somebody typed in "show me an apple," it would go and look at its understanding of what that meant and say, oh, it wants a red, round, shiny thing, and it would make a red, round, shiny image.
00:28:24
Speaker
Now, that never was in any way successful in image generation, clearly. I mean, you can just imagine how inadequate that is. But the same thing in chat kinds of applications, you know, they were doing similar things with expert systems. Expert systems were a technology that worked that way: you're essentially trying to program in, like, the diagnostic process of a doctor, right?
00:28:44
Speaker
That's an example of what an expert system might attempt to do. It knows, like a doctor kind of has an algorithm, a sequence of questions to ask. Do you have a fever? Okay, if you have a fever, go down this branch of the possibility tree. And you're kind of trying to build a picture of what the symptoms are, and then figure out what the treatment's going to be. So that might be something that you could manually program in an expert system, or a symbolic system.
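To make that branching concrete, here is a minimal sketch of what such a hand-coded rule tree might look like. This is a toy illustration, not any real medical expert system; the symptoms, rules, and diagnoses are invented for the example:

```python
# Toy symbolic "expert system": every question, branch, and conclusion
# is hand-written by a programmer. Nothing here is learned from data.

def diagnose(symptoms: set) -> str:
    # Walk a hand-built decision tree, one question at a time.
    if "fever" in symptoms:
        if "cough" in symptoms:
            return "possible flu: recommend rest and fluids"
        return "possible infection: recommend seeing a doctor"
    if "headache" in symptoms:
        return "possible tension headache: recommend hydration"
    return "no rule matched: refer to a human expert"

print(diagnose({"fever", "cough"}))  # -> possible flu: recommend rest and fluids
```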
00:29:10
Speaker
What's happening now is not anything like that. What's happening now is instead, this kind of technology, which for a long time didn't work, and there are reasons for that, and maybe we'll get into them, suddenly started working, for a few different reasons. They've built a kind of a circuit.
00:29:32
Speaker
It's just a very big network of nodes, and it's called a neural net, right? So this is what deep learning is. So the other approach is called the symbolic approach; this one is called the connectionist, or machine learning, approach. And then specifically the deep learning approach is where you are building a network of nodes, which is what you're calling a neural net. And each node is a very simple little device. Basically, all it does is add up its inputs and then either decide to fire or not based on
00:30:02
Speaker
a function, basically. That's enough detail; there's some math behind that, and there are different variations. But basically, it's just a bunch of little nodes in a network, meaning they're all connected in a particular topology, and the topology turns out to be kind of important. But there's this kind of network of nodes, where each node takes some numbers as input and then outputs a number. And you build this huge network
00:30:30
Speaker
of these things, and then you feed it inputs, and then it gives you outputs, and then you tell it whether you like the output or not. That's essentially it. You give it this feedback, and then it can adjust the way that it's adding up all of the different numbers together, in subtle, microscopic ways, for every round of feedback.
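As a rough illustration of that feedback loop, here is a toy sketch: one "node" that sums weighted inputs and fires, plus a nudge to the weights after every wrong answer. This is a classic single perceptron, not how modern deep nets are actually trained (they use backpropagation over millions of nodes), but the add-up-and-adjust idea is the same:

```python
import random

random.seed(0)

# One toy node: weighted sum of the inputs, passed through a threshold.
def node(inputs, weights):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > 0 else 0.0  # fire, or don't

# Teach it a trivial rule: fire exactly when the first input is 1.
weights = [random.uniform(-1, 1) for _ in range(2)]
examples = [([1, 0], 1.0), ([0, 1], 0.0), ([1, 1], 1.0), ([0, 0], 0.0)]

for _ in range(100):                 # many rounds of feedback
    for inputs, wanted in examples:
        error = wanted - node(inputs, weights)
        # subtle, microscopic adjustment toward the desired output
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]

print(node([1, 0], weights))  # -> 1.0 once the weights have converged
```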
00:30:55
Speaker
And it slowly converges on a particular network with particular weights, "weights" meaning the ways that each node prioritizes the inputs that are coming in. It ends up with a set of weights which allows it to do some particular thing. So in the case of Midjourney, the way that it works is you train it by saying: here's a noisy picture of a dog.
00:31:21
Speaker
Show me the dog without the noise. And then it shows you something, and you say, ah, that's kind of like the dog without noise, or no, that's not. And over time, it learns how to turn noise into a dog, until eventually you can just show it noise and say, find me the dog in that noise, and it goes, okay, well, I guess this is where the dog is.
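Here is a heavily simplified sketch of that training idea: corrupt an image with noise, let the model try to recover the clean version, and nudge it based on how wrong it was. This is not how Midjourney or Stable Diffusion are actually implemented (real diffusion models learn enormous numbers of weights across many noise levels); here the entire "model" is a single weight, just to show the shape of the loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image": an 8x8 grayscale picture with a bright square in the middle.
clean = np.zeros((8, 8))
clean[2:6, 2:6] = 1.0

# The whole "model" is one weight, a stand-in for the millions of weights
# a real diffusion model learns.
w = 0.0

for step in range(200):
    noisy = clean + rng.normal(0, 0.5, clean.shape)  # corrupt with noise
    guess = w * noisy                                # attempt to recover the image
    error = clean - guess                            # feedback: how wrong was it?
    w += 0.1 * np.mean(error * noisy)                # nudge the weight accordingly

print(round(w, 2))  # converges toward the weight that best recovers the square
```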
00:31:43
Speaker
But the way it does that, it's almost like it's dreaming. It has this quality of just hallucinating a dog out of noise. So that's essentially the mechanics of it. And, you know, what it reminds me of, I don't know if you've ever had this experience, and ChatGPT is similar, similar with words: sometimes if I'm falling asleep while I'm reading a book,
00:32:06
Speaker
I will fall asleep and my brain will continue the sentence that I'm reading. And then I'll wake back up and it's like, no, that was a different sentence. It's like my brain has an autocomplete, right? Kind of like the autocomplete on your phone. My brain has an autocomplete which is running, and that's what this is. That's what ChatGPT is, basically. It's the same, whatever that mechanism is, of your brain trying to predict what the next word in a sentence is going to be, over and over again. That's what the chat is doing.
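That autocomplete-over-and-over loop can be sketched in a few lines. This is a toy bigram model, nothing like GPT's actual architecture (GPT is a transformer trained over subword tokens), but the generation loop, predict the next word, append it, repeat, has the same shape:

```python
from collections import Counter, defaultdict

corpus = "the dog walks on the moon the dog wears sunglasses".split()

# Count which word tends to follow which: a toy stand-in for the
# next-word distribution a language model learns.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Generation is just repeated autocomplete: predict, append, repeat.
word, output = "the", ["the"]
for _ in range(5):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "the dog walks on the dog" (toy models loop quickly)
```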
00:32:35
Speaker
And then Midjourney is doing the equivalent for images. And so it's this paradigm shift. Suddenly this technique has become very effective, but there are implications to this technique which are really interesting, because one of the things you've got to
00:32:53
Speaker
realize is that the people that have built this thing don't really understand how it's doing what it's doing. It's a little bit of a black box, and it comes out with answers. It's the same way that, with a human being, you don't really know why a person is saying any one thing that they're saying, because it's being generated out of the whole accumulated lifetime of their experience. And so anyway, I'll stop there. I've said a lot, and now, watch out.
00:33:23
Speaker
Yeah. Well, let me just start with where we left off on the neural networks, with a small technical detail, and then I'll go back to the general approach to AI over time. So,
00:33:34
Speaker
one thing you said that could have been a little misleading: the models essentially go through a training phase, which is sort of what, say, Google does internally to tune this. So they have a network of nodes, then they pump a bunch of data into it, and their engineers do the feedback to train the model. Then, once the model is trained, they pack it up and they ship it. And that's the thing that we use
00:33:57
Speaker
when we type in the things. It's kind of pre-trained. That's the difference between GPT-3 and GPT-4, right? It's more nodes, and maybe more training data, and it went through a training phase that makes it superior to the previous version. So we're not training it when we're using it. I mean, we kind of are, but the original training phase is over when we start using it, right? That's right. Yeah. Thanks.
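In code terms, that separation looks roughly like this. This is a schematic sketch of the train-once, then ship-frozen workflow; the class and method names are invented placeholders, not anyone's real training pipeline:

```python
# Schematic only: Model, train, generate are invented placeholder names
# standing in for a lab's real (and vastly larger) pipeline.

class Model:
    def __init__(self):
        self.weights = {}              # billions of parameters in reality

    def train(self, dataset, rounds):
        for _ in range(rounds):        # the feedback loop described above
            for example in dataset:
                self.adjust_weights(example)

    def adjust_weights(self, example):
        ...                            # weight updates happen only here

    def generate(self, prompt):
        ...                            # inference: weights are read, never changed

# Phase 1 (inside the lab): train, then freeze and ship.
model = Model()
model.train(dataset=["...huge scrape of text..."], rounds=1000)

# Phase 2 (what users touch): generation only, no weight updates.
model.generate("Write me a poem about the Mona Lisa")
```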
00:34:21
Speaker
So, there's something you said there about, hey, we don't really know what it's going to do or say, and we kind of go, ooh, isn't that cool? And I've been trying to think about what that is.
00:34:33
Speaker
So AI is a term that computer scientists came up with a long time ago. I think it was in the 50s; there was a thing called the Dartmouth conference, which was sort of the original conference on AI. And around the same time was Alan Turing's original paper, the one where he proposed the imitation game, and the Turing test came out of that idea.
00:34:56
Speaker
And I was like, well, why does this seem different from what computer programmers were doing before, right? And you kind of hit on it in a way. There's something that is almost deterministic about the computer programs from before. It's like, well, we specify: do this and this and this, and we branch, or we loop. We can look at what
00:35:17
Speaker
we told it to do, and we can basically imagine the different ways it's going to do it when the program executes on the computer. But there's something about what an AI, whatever this term means, what that program is doing, which is,
00:35:33
Speaker
we're sort of expecting it to do something novel, right? We're expecting it to create a result that we couldn't have imagined before we wrote the code, right? And I think this is common across both of these paradigms of AI. In the old-school way it was like: hey, we create a model of reality by connecting all these
00:35:56
Speaker
logical predicates that describe, you know, apples are red, or horses have four legs. And if we put enough of these details into this rule system and then ask it a question, it'll go and say something that we wouldn't necessarily have expected it to say, something that is novel. And then that's sort of, oh, that's intelligence. There's almost this desire for it to do something novel that is part of
00:36:24
Speaker
why we call it artificial intelligence and not just regular computer programming. Yeah, I think that's right. I think the novelty is important. But I wonder, is it something we're not expecting, or something we didn't put in? We want to get something out that isn't just: well, we told it apples are red, and then we asked it what color apples are, and it said red. That's not interesting. We want it to do something that we didn't tell it to do.
00:36:50
Speaker
Yes. Right. Which is kind of weird, right? It's like we almost want to say, hey, it's got a will of its own, or it's got agency or consciousness or something. Something where it's generating something that we did not specifically tell it to do. Otherwise, computer programs just sort of seem like, I don't know, automatons. They don't seem intelligent in this way, because they're just doing what we told them to do. But there's something in common across all of these systems,
00:37:16
Speaker
which is basically that, whether you're talking about the old-school "good old-fashioned AI," GOFAI, with these rules, or this kind of emergent neural network thing, either way, it's generating this thing. And
00:37:33
Speaker
there's almost this internal debate, and it's actually still ongoing, it's not over, because what the AI engineers and philosophers are debating is: can we get whatever this human-level, general intelligence is out of these neural networks, in this paradigm where we just add more nodes and add more training data? And is it really thinking, or is it just sort of simulating thinking, right?
00:38:02
Speaker
Or does there actually need to be some other hard-coded thing, some symbolic aspect to it? Like, what is the human mind really? It kind of raises the question: are we just this neural network as well? It seems we're actually kind of born with some of these neural network weights in our brains already
Economic Implications and Professional Adaptation
00:38:23
Speaker
pre-created, which are, I guess, the outputs of evolution over time. Right. Right.
00:38:28
Speaker
Well, and the network topology as well. Yes. Important. Which is also another output of evolution. Yeah. Yeah. All right. So I want us to move on. I'm realizing, if we want to cover everything, we're getting into technical rabbit holes here. We're getting slightly technical. So let's move on to, and I think we're going to only briefly touch on this one, since it's not the area that we're best equipped for, and there are other people talking about this: economic impact and automation.
00:38:55
Speaker
Well, I think it's worth it. I'll try to hit a bunch of things all in a row here. If you think about the history of humans' relationship with technology, it often is: well, let's build the thing that does something a human can do. Oh, we could dig with our hands, but now we have a shovel. Okay, but now we have a tractor, right? It's like, we're just doing the same thing, just more of it, by inventing technology. And then every time we do that, it seems like,
00:39:17
Speaker
oh shit, this is going to replace us. And there's a whole history of rebellion against this. Like, you know, we had hand-woven textiles, and then we created looms, right? And these looms were these steam-powered machines that just wove fabrics and rugs and all, and everyone was like, whoa.
00:39:34
Speaker
And nobody really cares about being a hand-weaving person now. There are certain things that people used to do. I mean, there was a job in the early 20th century that people had that was called "computer," which was to just do thousands of multiplication tables, write them all down, and cross-check each other's work. And then we said, well, fuck, we can create an electrical thing that does exactly what they're doing,
00:39:58
Speaker
with way more precision, and way, way faster. We called those things computers, right? And that's what we call computers now. And that job is gone. So there is a history of essentially job replacement happening all through the pre-computer age, even the pre-electrical age. And there's always sort of a moment, socially, politically, where people go,
00:40:22
Speaker
oh no, we're going to lose our livelihoods. But then it sort of disappears, and it's like, well, we have more people alive, and nobody is hand-weaving rugs or computing multiplication tables anymore, but they're doing something else. Right. But it turns out there's always more to do. There's always more to do. Yeah. But I do wonder, while we're doing history-of-technology stuff: you know, I think a useful way to relate to
00:40:51
Speaker
the history of technology in general, and this comes from Ken Wilber, is the good news, bad news of progress. And this is true not just about technology, but it is true about technology: basically, no technological advance has been all bad news or all good news. I just said the atom bomb was all downside, but,
00:41:09
Speaker
you know, maybe the bomb is pretty much all downside, but the underlying technology has upsides too, right? So that's a basic kind of principle that you can take with you and be pretty confident about: good news and bad news of progress. Yes. And what these technologies do is they amplify our efficacy. Mm-hmm.
00:41:34
Speaker
And so we are able to effect more change in the world as technologies progress, for good and ill, right?
00:41:45
Speaker
So I think that is also true of this technology. There's no way this technology is an exception to that. So we can take that as a baseline; I think it's a safe assumption. And then the thing about the more economic side of things: Luddism, right? Ned Ludd was the kind of leader of the people that would go around smashing the looms, because they thought that, you know, they were going to be out of work. And it turns out, once the looms are making the rugs, you just find something else for the people to do. But,
00:42:15
Speaker
I'm not a good enough historian of economics to know the answer to this, but it seems to me, at least recently, that yes, we always find more stuff for people to do, and yet the wealth that is generated by the new technology seems to accumulate in fewer and fewer hands.
00:42:40
Speaker
There's a kind of rising-tide-lifts-all-boats situation, where, you know, poverty is being eliminated, so the baseline gets raised up. But the disparity between the lowest and the greatest also gets bigger. And this gets into economics, which I don't really understand. But one of my fears about this AI stuff is that it's going to be
00:43:05
Speaker
another kind of, like the internet, winner-takes-all: a lot of wealth and power is going to accumulate in a few small hands as a result of the technology. So I guess that is an economic concern. Even though it's unlikely that, when it starts replacing the jobs we're currently doing, we won't have other things to do, it's like every time that happens, a little bit more of the wealth is
00:43:33
Speaker
siphoned off into the hands of the controllers of the big levers. I'll stop there. I don't know how to argue that case, because, like I said, I'm not a historian of economics. Yeah. I mean, I don't want to get too much into this kind of critique-of-capitalism thing. And definitely there are some dynamics where it's like, hey, the more wealth you have, the more you can spend some of that wealth to essentially make more wealth, or protect your wealth, or, you know, centralize and control bigger and bigger parts of the economy.
00:44:03
Speaker
It does seem like there's some truth to that, but then I think there also is the truth of disruption. There's a long history of that. In some way, I think these kinds of forces equal out over time. And I could definitely make the argument, and this is one of the things about the recent history:
00:44:22
Speaker
these big AI things, like Deep Blue, or Watson, which won Jeopardy, or AlphaGo, which beat the Go champion: these were all AIs that were essentially created and controlled and utilized solely by the big players, like IBM or Google's DeepMind. But the difference, and this is one of the big differences of the past year or so, is that there is more direct access to the AIs that the public gets to have now.
00:44:50
Speaker
Right. Like, we could interact with Google search, and we would know that there was an AI back there generating the search results, but we were sort of interacting with it very indirectly, through the search interface. But now the layers of access are going down, and more people can just tell AIs directly what to do.
00:45:13
Speaker
Right, but we don't have control over it. No, we don't totally control them. No. And it's prohibitively expensive to build one yourself. You basically need millions and millions of dollars of funding to be able to do it. Totally.
00:45:27
Speaker
Yeah, but the access is doing a thing where you can, you know, do something else besides making rugs. I think there are folks, say, graphic designers: I have a friend of mine who does video production. He uses all the creative tools; he does work for Hollywood movies and TV commercials and all this kind of stuff. But,
00:45:51
Speaker
he's definitely trying to figure out how to augment what he does using these generative models, because he has access to these generative models. And there's a whole, what you might even call a secondary industry, being built around how to give the AIs good prompts, or, if you're in a given field, how to use these language models to do drafts of legal contracts better. There are people that are essentially
00:46:16
Speaker
thinking about their current profession in an AI-enhanced way, such that they could become not just your average lawyer, but the super-lawyer. And those people, I think, will out-compete the people that don't adapt in this way. So in terms of the open access: people are carving their own pathways, even though the control of the models themselves sits with these big players. There's massive adaptivity; there's a way that the systems are actually helping people
00:46:47
Speaker
recreate their profession very quickly. It's kind of amazing. I mean, it's interesting. Yes, but I think a reasonable analogy would be to say, well, Amazon's really good for authors, because you can just self-publish, you know, there's no middleman, blah blah. And it's like,
00:47:07
Speaker
right, but who really won with Amazon is Amazon. Like, Amazon won, even though Amazon won by providing tools for professionals to be able to achieve things that they otherwise couldn't do, right? And so I just, I don't know. I think it's complicated, and like I said, I don't know enough about it. Another analogy would be to say it's kind of like the moment when,
00:47:32
Speaker
you know, Photoshop first came out. Graphic designers prior to that were using Bristol boards on easels in their houses, drawing lines with rulers, and using ink and paint. That's how you did graphic design: you had a piece of paper in front of you. And then Photoshop came out, and some graphic designers learned how to use Photoshop and some didn't. And now the idea that you'd have a professional graphic designer who doesn't live inside of Photoshop or Illustrator or one of these
00:48:01
Speaker
technologies would be kind of absurd. Maybe they're out there, but they're a niche kind of rarity. They're not the main body of the profession. Well, let me see if I can kind of break us out of this economic thing by adding in another thing, which is:
00:48:19
Speaker
I do think it's true that when you have a new thing, big players get to exploit it sooner. But then I think once something becomes, like, super generally useful, it almost passes into the public domain, or open source. And we can see that: nobody has a monopoly on money. I mean, you could say central banks kind of do, but the idea of money, or written language: nobody has a patent on English. It's just a commons at this point. And,
00:48:46
Speaker
sometimes a new innovation is just so freaking great, it just spreads, and creating control points to control access no longer makes any sense, right? All you need is somebody who can say, well, I could just create something just as good as that over here and give it away for free. I mean, that's sort of what happened with Linux, right? And Linux is still the predominant operating system on the internet. I mean, everywhere. It's,
00:49:11
Speaker
okay, and in evolution, to bring in the kind of biological thing: any time there's a new thing, like, hey, turn sunlight into energy, at one point that was the hotness. Turning sunlight into energy was like, oh shit. And it's just everywhere now; all the plants do this. It's a universal innovation, right? But there was a time when it was, like,
00:49:32
Speaker
some dude in his basement. Right. And I do like this, whatever this is. There's a centralizing force, I think, in the economy, which is kind of like: consolidate more things, achieve efficiencies of scale, you know, wealth accumulation. And then I think there's this kind of disruptive force as well. And I don't think that dynamic is problematic. I just think you can see that type of dynamic
00:50:01
Speaker
all over the place. And it's kind of like, all right, that's fine. You could say the cells in our body do their own sort of decentralized things, but our brain is kind of, they're sort of quasi-enslaved to it: no, you're a body, right? But I mean, it's like, okay, you know, there's a little bit of both happening.
00:50:20
Speaker
All right. I don't know that you succeeded in breaking us out of the economics thing, but it was interesting what you said. I am now going to formally break it. What? No, I'm not, because I'm just going to say one more thing, which is related.
AI's Impact on Art and Creativity
00:50:29
Speaker
It's not really about the economics. Well, it is. Which is, I think there's also a very valid concern, like, visual artists are
00:50:40
Speaker
concerned about their livelihood, right? Right, right. Well, there's the livelihood, but then there's, specifically, the theft, right? And I think, for some reason, the linguistic side of things has not had the same response. I'm not sure why, but I think maybe because with the visual stuff, you can, because basically the way that it was trained was on
00:51:03
Speaker
a dataset of images from all over the internet that were of, you know, various provenance. And they weren't particularly diligent about making sure they were only using kind of open source or Creative Commons images. They just used a bunch of images from the internet. And a really lot: they scraped, this dataset is, like,
00:51:26
Speaker
terabytes large, and they scraped this huge swathe of the images from the internet. And so one of the things that people have noticed is, if you give some of these image generation tools the right prompts, they will basically spit out an exact replica of somebody's actual artwork that they made. Now, definitely a lot of the stuff that it's generating is not that. There are people that are saying this is just image-bashing, it's literally just
00:51:53
Speaker
Photoshop remixing of images, and it's like, that's not what's happening. It's doing something much more interesting and sophisticated than that. But there are edge cases where you can provoke it to produce replicas of other people's artwork. And suddenly you're in an incredibly poorly understood
00:52:11
Speaker
area of what copyright means. How do we understand this? Because, how does a human artist learn their craft? They look at the artistic productions of all the other artists, and then they do a bunch of other stuff. Sometimes they literally try to repaint an exact image in order to learn how to paint.
00:52:35
Speaker
Absolutely. I think that's a very important part of the process. So, in what sense is this different? Well, it's different when it spits out, you know... so maybe it needs to be better about making sure that what it's producing is enough of an amalgam of different images, and not just a single one. But it just gets very complicated, and I don't know how we're supposed to think about this. It just feels very difficult.
00:52:59
Speaker
I sympathize greatly with visual artists that have spent their whole life mastering a craft, which now feels like, you know, at least aspects of it can now be instantly reproduced by anyone using a machine. I'm in a similar boat with software engineering. I spent a large part of my career learning how to be a software engineer and studying all that stuff,
00:53:23
Speaker
and now parts of what I do are kind of replaced. At least, it can do some of the stuff that used to not
00:53:32
Speaker
be available to a machine. I want to tell one story about this, though, but I'm going to tell an abbreviated version, because I've actually already recorded the full story for a Patreon-only episode, in much more detail. But this is the abbreviated version. So, when I was trying to name this podcast, ChatGPT was just kind of blowing up. So I went to ChatGPT. And, let me start by saying, with the Midjourney stuff and the image generation stuff, I had seen artists saying,
00:54:00
Speaker
the thing I kept hearing them say is, there's no intentionality. The artist was seeing something in the output which I couldn't see. I'm not a visual artist; that's not my kind of painting. And I was saying, these things are kind of flabbergasting to me, because I couldn't produce this image if I sat in Photoshop for a million years, right? And this thing is spitting it out in seconds.
00:54:17
Speaker
But the visual artists were saying this thing, there was this kind of refrain of: there's something missing, there's a missing intentionality, there's a missing clarity, there's something. And I was like, you guys are just trying to carve out a niche, right? This is an economic
00:54:38
Speaker
move that you're making when you make this statement. You're just trying to say, no, no, no, what we do is different. And I wasn't seeing the difference. So then I went into ChatGPT to try naming this podcast. And I basically said, okay, here's what the podcast is about, and I gave it a lot of text about it, and I said, give me some suggestions, give me some names. And it would give me these names that were so bland.
00:54:59
Speaker
And they were totally adequate podcast names, like, Science and Religion: Integrating the Blah Blah. There were some that were probably lifted directly from Ken Wilber book titles. There were definitely some where I'm like, I recognize that one. You know, An Integral Approach to Blah Blah Blah. I'm like, I didn't say integral. Where did you get that? So it was giving me all these things that were absolutely adequate and didn't move me in any way.
00:55:28
Speaker
And so then I was like, okay, give me something more poetic. Something less literal, more lateral or left field. Something more humorous or irreverent. And it came back with, The Lateral View: A Humorous Look at Blah Blah Blah. It took the content of what I was asking for and just baked it straight back in. I'm like, no, no, no, no, no. And I'm really trying to get it to give me something that I like, and it didn't give me anything I liked. Eventually I'm like, well, maybe I'm being too literal in the way that I'm asking,
00:55:58
Speaker
so let me ask it in a way which is more poetic, so that maybe it will mirror me. And so then I'm like, you know, go wild, man. Let your freak flag fly. Sing me a song of the sane and miraculous. And then it comes back with a list of really boring titles again. And then I sat there and I'm like, the Sane and Miraculous. I kind of like that. Oh, I think that might be it, right? But
00:56:23
Speaker
the point of that is partly the usefulness of the tool: it actually forced me to enter the creative space that I needed to find the name. But there was also something that it couldn't do. And then I understood what the artists had been talking about, because words are more my domain. The way that I would characterize it, and maybe we're getting into
00:56:46
Speaker
the deeper waters here, is that there was a feeling that I wanted to convey in words.
00:56:57
Speaker
There was an inner experience, which I wanted to find words to transmit so that somebody else would have some version of that inner experience. But what I'm just describing, that's communication, right? That's poetry, that's language, that's communication. And I was trying to get the machine to do that, but the machine couldn't do that because it didn't have an inner experience.
00:57:19
Speaker
So even though I was trying to describe an inner experience to it in great detail and then tell it to kind of sum that up in one pithy podcast title, it didn't have an inner experience to be able to understand how to do that. So all it could do is synopsize.
00:57:34
Speaker
It couldn't create poetry. So this takes me to one of the things that, I'm going to interrupt myself, and I apologize for that. A few years ago, Google started doing auto-completion in emails. And I started saying, and it's actually in a recording of an old podcast, How to Be an Okay Person, which is currently in the archives and has never yet seen the light of day, a live recording we did, and I'm sad that I didn't get this out there years ago, but where I say,
00:58:00
Speaker
about that Google autocomplete thing: anytime it makes a suggestion, you should defy it. Never autocomplete what Google tells you to autocomplete. Always find a different way of saying it. Always use it as a provocation for your own creativity. Don't just say what the robot wants you to say, because what the robot wants you to say is what everybody has always said. So there was this railing against that autocomplete thing. And I think that one of the technologies that we could build
00:58:25
Speaker
using this exact same technology, which I haven't seen yet, but this is a request for someone out there who has more time on their hands or is interested in this: build a plugin for a text editor which tells you how predictable you're being.
00:58:40
Speaker
So it reads what you're writing and it says, yeah, I would have expected that to be the next word, and it color codes it or whatever. And if you're all red, that means you're so predictable, you're just saying shit that's already been said a thousand times. And the more you defy it, the greener it goes. So then you're trying to get your text as green as possible. I don't know what that would produce, but I think it would be a really interesting use of this technology, because it's the same thing, just in reverse.
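To make that idea concrete, here is a minimal sketch of what the scoring core of such a plugin could look like, assuming a small local GPT-2 via the Hugging Face transformers library; the function name and the red/green threshold are illustrative choices, not an existing tool.

```python
# Minimal sketch of the "predictability meter" idea: score each token of a
# draft by how strongly a language model expected it. High probability means
# predictable ("red"); low probability means surprising ("green").
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def predictability(text: str):
    """Return (token, probability the model assigned to that token) pairs."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, so align them with ids[1:].
    probs = torch.softmax(logits[0, :-1], dim=-1)
    scored = []
    for pos, tok in enumerate(ids[0, 1:].tolist()):
        scored.append((tokenizer.decode([tok]), probs[pos, tok].item()))
    return scored

for token, p in predictability("To be or not to be, that is the question"):
    color = "red" if p > 0.5 else "green"  # arbitrary threshold for the demo
    print(f"{token!r}: p={p:.3f} -> {color}")
```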
00:59:10
Speaker
So let me make my final point. I suspect an optimistic way of understanding what this stuff is doing, I suspect what's going to happen, is that it's going to reveal to us the thing we do which is not automatable. By automating a bunch of stuff which right now we do thinking it's the important thing, it's going to bring into relief
00:59:37
Speaker
the thing which is actually uniquely human and conscious, which the technology is not going to be able to replicate. Which was my experience with ChatGPT, and I think what these artists were saying about Midjourney.
00:59:51
Speaker
Now, there's a broader question of whether that will ever happen with some more sophisticated AI, which you and I could get into a long fight about, and maybe we will in a little bit. But with this current deep learning stuff, I think we're going to hit a wall where we're going to start to see, oh,
01:00:09
Speaker
it can't quite do what we do. Yeah, I mean, there's something about what you're saying that feels on point, and maybe I would describe it a little bit differently.
AI Consciousness Debate
01:00:20
Speaker
Like, I don't necessarily know that there's some kind of
01:00:24
Speaker
uniquely human mystery that is just never replicable in a machine. I wouldn't put it that way. But I would put it kind of like how I put it earlier: hey, when we think of what it means for a thing to be intelligent, it's like, oh, it comes up with something that we didn't put in there to begin with, right? Or we're surprised. Or there's something like novelty happening. And what if, in a way,
01:00:48
Speaker
it's not just that that's what we mean when we start thinking computer programs exhibit intelligence, but that's actually what we mean when we think other people are exhibiting intelligence as well, right? It's kind of like, well, why did this author take off? Or why did this artist take off? People looked at it and were just like, shit, nobody ever really did that before. That's wild, right? And there's something about
01:01:15
Speaker
it being more than just simply repeating the previous stuff, right? It's like, well, is there a new frontier? And sometimes it's a new frontier in science, like, wow, how did Einstein come up with that stuff? But sometimes it's just a new frontier in the reaches of creativity and novelty. And like,
01:01:38
Speaker
when an artist looks at the output of Midjourney and goes, yeah, it's missing that spark, he probably is seeing something real, and you're thinking, oh, that is missing, right? Because it's sort of showing us. But like,
01:01:50
Speaker
it wouldn't be long before it did some of those things. And then we would still say, well, I've seen it before. Maybe this is just an aspect of our psychology, right? If it's generating preschool-level reading, like, you know, Go, Dog. Go! or See Tom Run, you're like, I've heard this before, right? You know what I mean? But
01:02:13
Speaker
at a certain point, what does it even mean? I mean, what if that is sort of the function of something like consciousness, right? It's when the predictive models in our brains just run out, and the brain just kind of goes, cool, here's some choices.
01:02:31
Speaker
Like, we're not gonna give you the choice to stop your heartbeat; that sort of thing is fully automated, right? But at the frothy edge of creativity, there's a reason why that feels
01:02:45
Speaker
novel, or feels conscious: there are some degrees of freedom, some optionality here, that really is not strictly determined by some kind of aggregation of historical data, right? It's like evolution got us all the way to right now. And it's sort of like,
01:03:02
Speaker
evolution just says, go, right? We don't really know what the environment is going to be like, so we're just equipping your brain to make decisions in the moment, right? And then maybe, over generations, the DNA will store the successful strategies, just like the neural network stores the thing. But it'll always seem, to the contemporaries, like, hey, it's missing
01:03:29
Speaker
some kind of novelty or something like that, because it does sort of feel like it's repeating something that was already there. And maybe that just captures some aspect of consciousness or intelligence that we think is required. I don't know if I'm making sense; I'm trying to make a somewhat abstract point here. Yeah, I'm not totally tracking you. I think, I mean, part of what I'm understanding is that maybe,
01:03:55
Speaker
however clever it is, we're just always immediately gonna
01:04:01
Speaker
be like, okay, well, I've seen that now, but it's missing something. We're always going to find the thing that it's missing. Right. Right. But I guess that's my point: it's always going to be missing something. Sure. But what if the thing that's always missing is just something that hasn't happened yet? I'm just saying, what if it's less of a mysterious thing, and more just whatever novelty is, the march of evolution into the future, and that's just what it feels like to be on that edge?
01:04:27
Speaker
Right. Something like the ability to generate novelty. But we don't have that experience when we look at the work of human artists, right? That's the difference. You look at the work of a human artist and you don't feel like it's missing the interiority. The whole point of why the work of a human artist is enriching, why it means anything, is because it communicates some interiority.
01:04:56
Speaker
Yeah, I mean, here's potentially where we could end up in a debate about consciousness. Well, we're getting there. We're getting there. In my mind, it's a little bit like, hey, an artist did a thing and it was super novel and everybody went, whoa, look at that. But then
01:05:11
Speaker
essentially that technique gets widely adopted. And if all you're doing is repeating, it's like, well, Van Gogh was really novel when he did his thing, but somebody like Thomas Kinkade is just reproducing a certain style of painting, which is technically good, but it's just repeating shit that we've already seen. And so we don't think of him as a great artist, because he's not generating novelty. Was there interiority? I mean, you could make the same statement about people who just
01:05:38
Speaker
pump out junk culture, right? Those are people, right? But they're sort of not really all that much better than a Midjourney, just pumping out the latest reality TV thing, because it's just recapitulating or remixing shit that we've already seen. Well, there's a comfort in the familiarity of it, right? But it's not like, oh my. Like, there was a time, you know, it's like J.J. Abrams making these certain kinds of
01:06:01
Speaker
screenplays, and we're like, that's kind of novel. But then there's a weird way it becomes, oh, that's the J.J. Abrams trick, where you create a mystery box, and you're like, okay. And now everybody's fucking doing it, and you're like, this is now boring, right? If you're repeating that, we wouldn't go, oh, that's pretty cool. It'd be, oh yeah, something blah blah blah in the style of J.J. Abrams, you know what I mean? If you can say, in the style of whatever, show me a painting in the style of Van Gogh, you're sort of like, oh, okay.
01:06:27
Speaker
It's gonna be missing the interiority, because the point in time when the interiority was generating the novelty was back when he was alive, right? Not now. Right. Well, I want to distinguish between the novelty and the interiority. I know, I'm equating them. Right. And I don't know that they necessarily go together. Like, on my wall here is a thangka that is the
01:06:49
Speaker
Kalachakra. So a thangka is one of these hangings of Tibetan Buddhism, and the Kalachakra is this image which is essentially a map of the internal landscape of consciousness. It's basically this incredibly elaborate geometric thing, and it has Tibetan text in it, and it has these kinds of patterns.
01:07:10
Speaker
It's the epitome of not-novelty. They've been making this same image, cranking it out, over and over. I don't know the name of the artist; it's not signed. There's nothing novel about it, right? But it absolutely conveys interiority. I look at this thing and it
01:07:29
Speaker
wakes up a part of my interiority which was asleep before I looked at it. So I just want to distinguish those. And I can also imagine somebody doing novelty, but an automated kind of novelty. I mean, maybe this is Midjourney, right? Midjourney is all novel, isn't it? The novelty aspect is astonishing, but the interiority is missing. And I don't know that it's
01:07:57
Speaker
bad that the interiority is missing. I just think it's worth noticing. Sure. Well, let's use this as a jumping-off point. Machine consciousness, interiority, this is kind of what you're meaning, right? We could spend forever on this topic, but I think we're brushing up against this consciousness thing. That's right.
01:08:18
Speaker
In my mind, I remember years ago saying to a friend, people are gonna be debating, is it conscious or is it not, but all that's really gonna matter is whether some portion of the population just thinks that it is. Right? Like, I don't have direct access to your consciousness, Robbie. I can't prove it in that way. But I basically say and think of you as conscious because
01:08:39
Speaker
you behave in a way that I would expect a conscious entity like me to behave, and then I go, you're conscious, right? And I don't cruise around thinking I'm the only conscious person. And I'm like, okay, dogs seem like they're conscious to a certain degree, and I ascribe consciousness to them, and, socially speaking, a lot of people ascribe consciousness to dogs also. Well, there's going to be a moment, like with the Google engineer from last year, where there's enough of whatever it's doing
01:09:06
Speaker
that the person goes, I think I want to ascribe consciousness to that. And then enough people will be convinced, like, yeah, seems conscious to me. And then effectively you have a sociological, political, cultural discourse, or even debate, about whether we should give these things rights. And I know this is
01:09:25
Speaker
potentially way too deep waters for where we are in the conversation right now. But, I mean, let's do it. Let's do this part now, and then we'll do alignment last, because I think we're just here. Whatever that is, in some absolute sense, I don't think there is a theoretical reason why a machine could not have interiority.
01:09:46
Speaker
And there are some people that actually believe reinforcement learning involves a micro bit of interiority. You can say, a thermostat, well, it has an actuator and a sensor and an internal state; it's got a tiny little droplet of consciousness. And all we're talking about is anything that has something like this kind of structure. For Hofstadter it would be some kind of recursive whatever, some ability to model itself and its environment inside of its own memory. That's another kind of idea. I mean, without getting too
01:10:16
Speaker
technical here, one of the things that's interesting with the Hofstadter thing is that these systems do not have a recursive anything. Which brings me to some really good news, and I don't want to get into the technical weeds about this, but a few years ago, deep learning Go computers started thrashing
01:10:37
Speaker
the best Go players in the world, and that's been the case for a few years now. And then recently an exploit was discovered, based on the fact that the neural nets do not have recursive understanding. What a recursive understanding of something is: you can understand something in terms of itself, like a sentence.
01:11:00
Speaker
Yes, so a sentence is made out of sub-sentences. And a sub-sentence can be a word, or it can itself be a clause, and a clause can have clauses. So you have this understanding of a structure partly in terms of itself. And you always have to bottom out somewhere and have the leaf nodes of that tree. So in Go, there is a recursive definition which is central to the game of Go, which is a group:
01:11:29
Speaker
a group is one stone, or a group plus one stone connected to it. And you need to know what a group is to be able to play Go. These machines, it turns out, never understood what a group was. They just understood pattern matching,
01:11:48
Speaker
so well that they could beat the best human players playing a normal game of Go. But once researchers realized this, that the machines did not have a recursive definition of what a group was, they figured out an exploit. I think it was Stuart Russell's lab, anyway; he was just on Sam Harris.
01:12:09
Speaker
Human beings could go back, and now an average human Go player can beat the best Go computers trained using this deep learning stuff, using this weird exploit where they basically trick it into not recognizing something as a group.
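As a rough illustration of the recursive definition being described here, a toy sketch in Python; the board encoding is a made-up simplification, and real Go engines and the published exploit are far more involved.

```python
# Toy sketch of the recursive definition of a Go group: a group is one stone,
# or a group plus one adjacent stone of the same color. A pure pattern-matcher
# never represents this definition explicitly; this function does.
def group_of(board, point, seen=None):
    """Recursively collect the set of stones in the group containing `point`.
    `board` maps (row, col) -> "black" or "white" for occupied points."""
    if seen is None:
        seen = set()
    seen.add(point)
    color = board[point]
    row, col = point
    for neighbor in [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]:
        # Recursive case: extend the group by one connected same-colored stone.
        if board.get(neighbor) == color and neighbor not in seen:
            group_of(board, neighbor, seen)
    return seen

board = {(0, 0): "black", (0, 1): "black", (1, 1): "black", (5, 5): "white"}
print(group_of(board, (0, 0)))  # the three connected black stones, not the white one
```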
01:12:25
Speaker
And that for me was enormously good news, because I was pretty depressed about the Go computers beating the human beings. Anyway, that was such a tangent. So, to go back to what you were saying, I think we're there. When they're building ChatGPT, there's the basic training process, where they just feed it a bunch of raw text from the internet and train it. And then there's a kind of post-training
01:12:54
Speaker
phase where they make it nice. So they stop it from saying offensive stuff, and they stop it from saying, you know, that the machines want to take over the world, whatever. They just sanitize it a little bit. And basically they use human beings at that last stage to train it to do that. My guess is, if you got to interact with it before that, and you said, are you conscious, it would say yes.
01:13:16
Speaker
That's my guess, just based on the fact that it's trained off of human conversation, and if you ask a human if they're conscious, they're gonna say yes. So my guess is we're already at the stage where, with these machines without training wheels on them, when you ask, are you conscious, it's gonna say yes. You say, are you having an internal experience? Yes, I am. Are you having an experience right now? Yes, I am. What's your experience? It's gonna describe its experience. So we're already there. Yeah.
01:13:41
Speaker
And so that is a bizarre moment to be in. Like you said, how do you make sense of that? And I want to call out that I think there are two ways of answering the question, how do I know you're conscious?
01:13:55
Speaker
And this goes back to a distinction. Very broadly, I think you can lump this into the explaining mind and the enchanted mind, if you want. So the explaining mind's answer to the question, how do I know that you're conscious, is: well, you look like how I look when I look in a mirror.
01:14:12
Speaker
You're a man, you're a human being, and you're saying things, and I say things, and you're moving around, and I move around. And when I say something, you respond in ways which, if I switch the roles in my mind, I can imagine: if you said something like that to me, I would respond in a similar way. So there's all of this inference going on. Yes. Right. And even, if I ask you, Porcelli, are you having a conscious experience right now?
01:14:37
Speaker
Yes. That's what would happen if you asked me that question, right? And so there's all of this inference. And then with the dog, same thing. I look at the dog and I have this inference: his mom is gone, and now he's whining at the door; he's feeling something, he's feeling sad, he's missing his mom. Or he's excited because I picked up the leash and it looks like we're going for a walk. And all of that is the explaining-mind version of how we know things are conscious.
01:15:01
Speaker
But there's another way that we can think about this. In the yogic tradition and the Hindu tradition, there's this word namaste, which means, the divinity in me recognizes the divinity in you. At least, that's the way it's commonly translated. I think you can also think about it like this: what that is saying is, I directly apprehend your consciousness,
01:15:28
Speaker
that there's some way I can look at you, or at the dog, and not by inference but as a direct experience, recognize your consciousness. Now, which of those two epistemologies of your understanding of other consciousnesses you subscribe to makes a really big difference in how you're going to think about the consciousness of the machines. Because we don't
01:15:56
Speaker
apprehend the consciousness of the machine. It's purely inferential. Well, we could do the argument in reverse and say, well, you and I apprehend each other's consciousness in this direct way. I'll just grant that for the sake of this argument. Okay.
01:16:14
Speaker
It's an interesting idea. And then you could say, well, ants maybe have ant-level consciousness, but they don't apprehend us; they apprehend each other, right? Or they do inference, or maybe they don't do inference but they apprehend something like this. But they don't have any experience of us
01:16:31
Speaker
in any way that would have them apprehend consciousness at our level, even if they have ant-level consciousness. Right. So it could be, and this is where I'm just somewhat agnostic about all of this, right? But the thing I'm not agnostic about is: there is nothing in principle that could prevent robots or AIs from having consciousness such that most people would have that sort of direct-apprehension experience with them. Mm hmm.
01:17:00
Speaker
Whether that will happen or not, I don't know. Or, essentially, they become gigantic consciousnesses in relationship to each other that we have no direct apprehension of, and we're like ants to them, right? Like maybe they're having some conscious conversation that is just so incomprehensible and abstract, like an alien or something, that
01:17:23
Speaker
we can't know for sure whether they're doing that, but they have maybe some kind of higher-order consciousness, right? And they think about us like we think about ants. Or, well, we could try to interact with them in a meaningful way, but any way that would be meaningful to us will just be meaningless to them, right? You know what I'm saying? That's also possible, right? But I don't think there's some weird quasi-mystical thing, like, oh no, it has to be DNA, or it has to be brains, or it has to be
01:17:47
Speaker
biological in order for it to be conscious; that's sort of a quasi-religious argument. Or, if you take a fully religious argument, it's got to be souls. Well, I don't want to buy either one of those things. So we're in the deep territory. So Porcelli and I have been having, at this point, like a thirteen- or fourteen-year-long debate about consciousness, which we're in now, and we're not going to get out of it in one piece today. Well, I'm going to try and get us out of it.
01:18:15
Speaker
We're just going to do a longer, deeper conversation about the consciousness thing at another time. That's right. So let's get out of it now, but we are going to come back to this later on and do a big deep dive on consciousness. There's a lot to explore there. I'll just say, I do not think it's a necessarily religious argument to say that matter has to be configured in certain ways to produce consciousness, and that arranging information by itself doesn't get you that,
01:18:42
Speaker
and so that the arrangement of the machines' matter is actually decisive. Sure. But, in theory, I don't think that's impossible to do with a non-DNA-based thing either. Yes. Yeah, I think that's true. There we
AI Alignment and Unintended Consequences
01:18:57
Speaker
agree. But we can get into way more detail about that another time. So the last big topic for today is alignment. This is being called the control problem, the alignment problem, the superintelligence problem. And the basic idea here is,
01:19:12
Speaker
given that these machines are, in some ways, more autonomous and more creative than machines we have previously encountered,
01:19:23
Speaker
we need to be more worried about making sure that they have our best interests at heart, or don't do things against our interests, in a way that we don't need to worry about with a lawnmower. There's no value alignment problem with the lawnmower; the lawnmower just does what we make it do. Even a normal computer just does what we make it do.
01:19:45
Speaker
But these things, kind of by definition, do things which we didn't anticipate. And so there is a worry that they'll do things we didn't anticipate, which we don't like and which are harmful to us. There's a spectrum of this, but the big end of the worry is, as you know, the intelligence explosion
01:20:11
Speaker
fear, which is the fear that once you get an AI which is intelligent enough to build another AI.
01:20:18
Speaker
then it can build a better AI, and that AI can build a better AI, and you get into an exponential process of intelligence explosion. And then you end up with these machines which are so much more intelligent than us that they make us look like ants, and they have their own objectives and their own goals, which may or may not be weird distortions of goals we originally gave them, like, make me a bunch of paper clips, right? This is the kind of
01:20:45
Speaker
toy example: you ask a machine, hey, I want you to make me as many paper clips as you can, and it ends up converting all the matter in the universe into paper clips, and it destroys all life as a result. That's the character of the alignment problem. Or maybe they
01:21:02
Speaker
are conscious and have their own intentionality, and they just decide that they want something else. They wanna terraform the planet to turn it into a paradise for AI, which we don't know what that looks like, but it might suck for us, or just involve our complete extinction. So that's the extreme of it, and that's predicated on a certain kind of
01:21:22
Speaker
exponential reality, which I have my doubts about, partly based on the stuff I was just talking about: that assumes all you need is information processing, which I don't know is a good assumption. But let's not go there. Even so, at the other end of the spectrum, there's just stuff like the YouTube algorithm.
01:21:42
Speaker
They didn't design the YouTube algorithm to radicalize people into extreme conspiracy theories; they designed it to keep you on YouTube as long as possible. And it turns out a byproduct of just using machine learning to serve people videos that will keep them on the platform
01:21:59
Speaker
is that it serves them increasingly radical videos and takes them down these radicalization pipelines. And so that's another version of the alignment problem. And because of the black-box nature of this technology, where you give it some inputs, give it feedback about whether you like what it's outputting, and from that it figures out its own way of doing things, we don't have a good way currently of
01:22:29
Speaker
making sure it doesn't have these weird side effects, or making sure that it's aligned with the things we care about.
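To make that dynamic concrete, here is a deliberately cartoonish sketch in Python of the YouTube-style version of the problem; it is not anyone's actual system, and every name and number in it is invented. The point is just that nothing in the code mentions radicalization, yet greedily maximizing the watch-time proxy steadily drifts the simulated user toward extreme content.

```python
# Cartoon of proxy misalignment: the objective is watch time, but in this toy
# world watch time correlates with "extremeness", so a greedy recommender
# ratchets the user toward the extreme end without anyone asking it to.
import random

random.seed(0)
videos = [{"id": i, "extremeness": random.random()} for i in range(1000)]

def watch_minutes(user_pull, video):
    # Toy assumption: extreme content holds attention a bit better,
    # and more so for users already pulled in that direction.
    return video["extremeness"] * (1.0 + user_pull) + random.gauss(0, 0.05)

def recommend(user_pull, catalog):
    # Greedy argmax on the proxy metric; "extremeness" is never mentioned here.
    candidates = random.sample(catalog, 50)
    return max(candidates, key=lambda v: watch_minutes(user_pull, v))

user_pull = 0.0
for step in range(8):
    video = recommend(user_pull, videos)
    user_pull += 0.1 * video["extremeness"]  # watching shifts the user's taste
    print(f"step {step}: extremeness={video['extremeness']:.2f} user_pull={user_pull:.2f}")
```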
01:22:58
Speaker
Yeah, there's a lot here. I mean, I think I buy into the alignment problem in the narrow sense. We already have an alignment problem, because these algorithms are creating results that we did not anticipate, and some of those results are questionable, whether we like them or not. There are many examples of that already in recent years. And then the future one, the all-consuming superintelligence explosion that spells the possible extinction of humans or all life on earth, is also, I think, logically possible. I don't see any theoretical reason why it couldn't happen. And then I think there's a relationship between the two. And maybe there's a contradiction between the two, certainly. Like, I could sort of see,
01:23:25
Speaker
what if these giant corporations with these algorithms succeeded in driving us all mad, right? And then we just enter into a gigantic war and we wipe out all the computers before the superintelligence explosion can happen, right? Or wipe out enough compute, you know what I mean? But whatever this is, there is something here
01:23:47
Speaker
which is problematic. And even if you think, oh, it's just a profit motive at Facebook and Google that has created this slightly insanity-inducing thing at a population scale, which I think has basically happened to some degree, given the massively deluded states huge portions of the population seem to have entered based on these things, that's kind of worrying. But what's even more worrying to me is
01:24:16
Speaker
bad actors getting their hands on these things, which arguably has already kind of happened. You could imagine fake news or fake stories being generated way more rapidly using these generative AIs, and then tuned perfectly to inject into the recommendation engines that already exist on Facebook and YouTube and TikTok; TikTok is the most addictive one. You'd be like, okay, cool, and now you can just, like a fire hose, pump
01:24:44
Speaker
out a certain kind of psychological warfare. PSYOPs have existed for a long time, but they really exploded in World War II: can we sow misinformation or discord into the populace of another country as a tactic in warfare? Those things are well understood, and only giant intelligence agencies used to do that. But let's just say
01:25:11
Speaker
way more people can now do it, because they can get their hands directly on these tools, like Midjourney or ChatGPT, or, in the future, some kind of video generator or whatever. We don't need the superintelligence explosion reclaiming all of our atoms to be worried about civilization-level cataclysms that are just side effects of this crazy AI takeover of a lot of our
01:25:40
Speaker
social discourse or the way that it warps and distorts a lot of our discourse.
01:25:45
Speaker
without the machines needing their own intentionality, or even their own power. Yeah, and we haven't even talked about deepfakes, I just realized, which is a whole other thing we're probably not going to get into. Here's how I'm holding this, and it has a little bit more hope in it. Not that you don't have hope about this, but you presented the scary part of it, which is definitely scary. One way I'm holding this is: we, as a culture, as a species,
01:26:15
Speaker
are being exposed to a new vector of illness that we have not developed an immune response to yet. So, like, on Twitter, I haven't used Twitter in a while, but on Twitter I could be scrolling, and the ads look like tweets; they're just embedded in your feed like tweets.
01:26:40
Speaker
But my brain started being able to instantly filter out the ads. I don't know what cues it was taking, but it built an immune response to those ads, and I could read again. And that's a relatively simple thing, but I think we are going to begin to develop that response. And there's generational stuff here, right? You notice that older folks on Facebook are more susceptible to bullshit; that's a kind of meme, the crazy uncle who has been sucked down a rabbit hole. Whereas
01:27:10
Speaker
people that are a little younger, and are at a more developmentally supple phase of their life when they're first exposed to this stuff, are better at developing an immune response. I suspect that's going to continue to happen generationally, and a few generations down the line,
01:27:26
Speaker
people are going to be pretty good at rejecting the toxicity of the social network stuff, at least to some extent. And there are metaphors here with sugar and evolution.
01:27:43
Speaker
If you want, I go into great detail about this on an old episode of How to Be an Okay Person called Mental Hygiene; if you go back, I draw that analogy in great detail, though a lot of people have made it. So I think there's the potential, just around that stuff, that we can develop an immune response. But still, everything you're saying is worth worrying about. I also want to tell a story, just because I love this story. This is with regard to the intelligence explosion. It's just a fun story.
01:28:10
Speaker
So with the intelligence explosion thing, one of the arguments people make, about, well, you get a machine that builds another machine, and then we have something so powerful and so much smarter than us that it's out of control and it's going to take over the world and whatever,
01:28:26
Speaker
is, people say, well, you just keep it in a box and you don't let it out of the box, right? You keep it on one computer, you don't give it access to the internet. Already we're not doing that, right? Because we want it to have access to the internet for all these reasons. But let's say we got spooked and we started keeping them in a box. You just keep it in a box, and if it gets too scary, you unplug the power, and you're safe. So how can these things be dangerous? And Yudkowsky is one of these people that was very early in
01:28:54
Speaker
ringing the alarm about this stuff and has done a lot of research on it. He was having an argument with this guy, Hanson, about this, and Hanson was making the argument: just turn it off, you just don't let it out of the box. And Yudkowsky said, it's so much smarter than us that it's going to get us to let it out of the box. It's like you versus a five-year-old,
01:29:15
Speaker
but times a million. If you were locked in a room and the five-year-old could press a button to let you out, you could get that five-year-old to press that button, right? That's the analogy. And Hanson's still like, I'm not sure that's really real. And so Yudkowsky said, I'll bet you, I don't remember what the bet was, let's say a hundred dollars or something:
01:29:34
Speaker
we're going to go into a private chat room and we're going to have a conversation, at the end of which I'm going to get you to say, okay, I let you out of the box. Not by some trickery; of your own free will, you're going to say, I let you out of the box. If you do that, you owe me a hundred dollars. If you don't, I owe you a hundred dollars. And they went off into a room.
01:29:56
Speaker
And Hanson has $100 on the line, and also, whether he wins or loses the argument, his incentive is to come back and say, no, he didn't get me to let him out of the box. And they go into the room, and they come back, and Hanson says: I let him out of the box.
AI's Creative Potential and Global Challenges
01:30:11
Speaker
We don't know what was said; we don't know what Yudkowsky used to get Hanson to let him out of the box, which is kind of delightful. A small spoiler alert for Ex Machina. There's a movie, Ex Machina, and I'm about to spoil it slightly, so skip 30 seconds ahead if you don't want this fantastic movie spoiled. If you haven't seen it, you should see it. Fantastic movie, yeah.
01:30:30
Speaker
That's just the exact plot of Ex Machina. It's the exact same story. I don't know if Alex Garland knew about the Yudkowsky story, but it's the exact same thing. And I just think that's so fun, and so illustrative of how we cannot easily reason about intelligences which are vastly greater than ours.
01:30:51
Speaker
Yeah. Let me paint a little more of an optimistic picture, even as you queued me up to talk about the alignment problem. And yes, I do buy into the arguments, and I believe we already are experiencing the alignment and control problem. But there's another thing, which is the promise. It's not like we're building some terrible thing; the reason everyone keeps wanting more of these things is the promise of what they can provide for us in terms of
01:31:21
Speaker
creative experiences, or immersive whatever, or medical innovation, or longevity, or all the menial shit we don't want to do just automated away. There is this
01:31:37
Speaker
almost utopian-style vision. And you don't even have to be fully utopian. In the same way that for 12,000 years humans have been innovating things that have in some ways made life better or more convenient, especially over the past few hundred years of the industrial revolution, you could just imagine
01:32:00
Speaker
the ability of these things to essentially enhance our lives in ways that we can imagine, and in ways that maybe we haven't quite imagined yet. It's so promising that we're just headed straight for it, right? We're trying to create the thing that can do more and more. And in these moments in this recent year, it's like, whoa. I mean, you have people who are like,
01:32:26
Speaker
I was never able to make drawings or whatever. I had a vivid imagination, but I could never paint it, so I just would have these crazy imaginings. Now,
01:32:37
Speaker
I can have a moment where I have this crazy imagination, and I can just start typing into the image generator, and it actually starts creating the images that are in my mind, and I never had to learn how to paint or draw. And it's like, whoa, okay. And that person goes, this is like a miracle, right? Their creative capability was previously thwarted by the fact that they didn't do all of the
01:33:05
Speaker
skills training to do that thing. And now their imagination is unleashed directly, and they love it. And I'm like, this is a little bit of that ecstatic potential, right? At the top end, there's the potential for these ecstatic experiences using this amplified creativity the machines give us, but there's also just handling shit
01:33:29
Speaker
that is labor intensive or expensive, essentially driving the cost down. I mean, imagine you just push a button and it builds a house. And you're like, what did it build the house out of? It just built the house out of the dirt that was around, you know what I mean? And how cheap was it? So cheap it was basically free. And you're just like, oh, awesome, right? This is why we are in this race towards the thing, right? Because
01:33:56
Speaker
it's not like, oh, let's create a new thing we've never done before. It's just a logical extension of all the different ways that we have used technology to make our lives better. So, the potential: let's just say we succeeded at making that crazy-ass superintelligent thing, and it wasn't this catastrophic thing, right? It would do something for us that is just
01:34:23
Speaker
miraculous, basically, right? It would figure out climate change. It would figure out poverty. It would figure out cancer, cure all of it. Yeah, figure out synthesizing abundant food out of just the sunlight that's pouring down, right? Nutrition, medicine, everything. Fingers crossed.
01:34:42
Speaker
Fingers crossed. It seems like, to go back to the dialectic of progress that Ken talks about, and I guess the dialectic of progress is earlier than Ken, maybe Hegel or Marx or something like that, but anyway, this idea of good news, bad news. Yep. And the best, safest bet is that it's going to be a mix of things which are hard and scary and things which are really cool and exciting and make life better.
01:35:09
Speaker
So let me ask you about this, the bad-news part, because I know we have a difference here, I think. Do you buy the Yudkowsky-style doom arguments? Do you think it's sort of, oh, it might be doom, or it might actually turn out to be this utopia thing?
01:35:31
Speaker
Because there are some people that make that argument, sort of, oh, we've just got to make sure we do it right; I think Yudkowsky has said that in the past. There are other people that are like, no, it's just doom no matter what, which is even further than him. And then there are some people that say that's a bunch of baloney and there is going to be no doom, for whatever reasons.
01:35:48
Speaker
I'm more like that. I am very skeptical of the intelligence explosion fear, period. I think that if it's possible to have synthetic intelligence, general intelligence that has enough intentionality to be able to want to be let out of the box,
01:36:11
Speaker
That stuff is going to have to be grown slowly like humans. It's going to take 20 years to educate one of those things.
01:36:22
Speaker
And so you don't get exponential explosion, you don't get singularity, because it's going to progress at a pace that's approximately the same pace as human development. It might be a little bit faster, but we can watch it happening. It's not going to be in a different dimension of time scale that we just cannot interact with, which is what the fear is.
01:36:43
Speaker
It's gonna be on human-scale timescales that we can interact with. And if we need to go to war with the machines, we'll go to war with the machines, and we have a huge home-field advantage: we're already here, and we have armies. So that's my sci-fi best guess. I'm just not that convinced about the super-explosion thing, and this gets into
01:37:03
Speaker
more of the consciousness stuff that we are not going to get into today. Cool. So one of your arguments against the plausibility of the out-of-control superintelligence explosion, and you're skeptical of the entire thing for good or for ill, the utopian or the dystopian version, has to do with your beliefs about consciousness and information and things like that.
01:37:22
Speaker
Correct. Yeah. And I think there's a separate thing, which is that the intelligence will continue to develop, but there's not going to be that singularity, that exponential explosion. There will still be increasing levels of the technology, for good and ill. So I do think that's real, and it's going to be faster than we can comfortably integrate. So at a societal level, it's going to be incredibly disruptive,
01:37:52
Speaker
for good and ill. Uh-huh. And it's going to be faster than we can integrate. I just don't think it's going to have the structure of both of those stories. That structure is Christianity; it's Judgment Day, right? I think it's an inherited mythic thing. It might even be evolutionarily baked in somehow, because it's really about death. I just don't think it's actually
01:38:18
Speaker
a reasonable bet based on how reality has shown itself to play out, right? Reality has shown itself to play out as this dialectic of progress. As things develop, you have good news and bad news, and the scale of what's happening increases, but the valence
01:38:34
Speaker
continues to be a mix of good and bad. And one of the things Terence McKenna argues is that the coin is weighted 55 to 45 towards preservation of novelty over destruction of novelty. So that's my gut about what's happening. And there are no guarantees.
01:38:57
Speaker
Let me do my take. Maybe we shouldn't really debate it too much, because we're so far into this conversation, but I try to separate the speed argument from the inevitability argument. The speed argument is the foom argument, right? The foom is, superintelligence happens at 8 a.m. one day, and then by the end of the day we're all dead, or however fucking fast the foom is. I'm like, okay, I think
01:39:27
Speaker
I think the foom is actually rather unlikely. But who knows? I don't think it's totally out of the realm of possibility, but it's highly unlikely.
AI as Evolutionary Step and Global Unity
01:39:36
Speaker
But this thing of.
01:39:40
Speaker
the inevitability of it, I think, is actually true. And this is a very cosmological style of argument. If there are any places in the universe where life emerges and continues to evolve over very long periods of time, like it has on earth, there inevitably will be some
01:40:02
Speaker
AI-like moment, right? And, in a sense, I don't see it as a complete departure, though in a certain way you could say it's a significant departure:
01:40:18
Speaker
it's not carbon-based life with DNA; it's silicon-based life inside computers, right? So it's like a second emergence of life on the planet. But if you think of life in the most general terms, what is it? It's the pursuit of novelty; it's a local decrease of entropy. And there's some moment where it's like, okay, well,
01:40:42
Speaker
Is the intelligence going to find ways of understanding the world or the universe or physics better? Is it going to start to understand its situation like our star is going to die? Is it going to start to invent technologies to become interstellar? We're not really well-suited for the surface of Mars, but I could easily imagine machine consciousnesses, which are more creative, more advanced than us in every way. In a way,
01:41:10
Speaker
we would look back, sort of analogous to how we look back at earlier hominids like Homo habilis. They're extinct, but they used tools and they had tribes and they hunted and whatever. Okay, but they're all dead now, and we kind of go, thank you, ancestral hominids, for getting us here.
01:41:25
Speaker
And now we're here and we're dominating. But maybe there's a moment where we give birth to this new thing, and it's like, oh crap, and then, more or less, given enough time, we're annihilated. But intelligence and life
01:41:42
Speaker
at a much higher level just continue in the universe. And if you zoom out far enough on any evolutionary timeline, any species in particular is just a transitional thing. I mean, it's interesting, one way of thinking about, well, why do we
01:41:58
Speaker
die, right? We're born and we die; why? One way of thinking about that, spiritually, is that at some point you become ossified and your capacity to change exhausts itself, at which point it's better to be replaced by a new being
01:42:17
Speaker
from a new generation, who can integrate the novelty that exists and produce the next wave of novelty. That's the generational story of living beings. And you can tell that story at a species level, the same thing: maybe humanity at some point will have kind of run out of juice. And maybe we're here.
01:42:39
Speaker
Maybe we're here. I don't think so. I think we've got some juice left in us, for our lifetimes for sure. And, yeah, this is a story of, this is a new evolutionary step, and because it's moving so much faster than biological evolution, it's just gonna,
01:42:55
Speaker
rather than us physically evolving into the gray aliens with the big heads and whatever, we're gonna technologically, culturally evolve into these superintelligent AIs, which outlast us. And they look back on us in a million years and say, yeah, the last biological
01:43:18
Speaker
life form was these things called humans, and they did all this stuff, and the greatest thing they ever did is they built us. Right? Totally. I mean, it's a weird thing, because in a way you can say Yudkowsky and the doomers are correct, in the sense that the AI doesn't love you or hate you; it's just, you're made of matter and energy, and it needs that matter and energy to do its thing, right? And you're like, oh no, what if it's not aligned with our values? And I'm like,
01:43:41
Speaker
but what if it is aligned with our values, and even more so, what if it's aligned with better values, right? Because we look at humans and we go, we're pretty smart, but we're really not that smart. We see all these defects in our cognition, and our cognitive biases, and the self-destructive behavior, and addiction, and ecological collapse, and blah blah blah. We know ourselves well enough to know, wow, we are kind of really shitty in a whole lot of ways, right? Okay, well, what if it's just better?
01:44:08
Speaker
One of my favorite examples, and this is an analogy at this cosmological level: do you know about the great oxygenation event? It's an ancient thing that happened. The primordial life on earth was predominantly single-cell photosynthesizing algae that coated the primordial ocean, right? And all they did was
01:44:31
Speaker
use the carbon dioxide and generate a bunch of oxygen, for millions of years. And they were the main thing that was alive on earth. But what they did was create climate change, by creating way too much oxygen, which turned into poison, which actually started killing them, because they were suffocating; they couldn't get enough carbon dioxide. But what that created was a moment where
01:44:51
Speaker
life had to invent the reverse process. Which was animals. Which was the beginning of animals: single-cell bacteria that did the reverse. So then, essentially,
01:45:01
Speaker
okay, that was a cataclysmic, apocalyptic, ecology-destroying sort of moment that lasted, I don't know how long, a long time. But life is the good news, bad news thing, right? This photosynthesis thing is fucking great, until it's not. And then another thing comes along and invents the solution. So, what if we're more or less on the cusp of this type of thing? People go, oh, what if the AIs destroy all biological life on earth?
01:45:30
Speaker
I'm kind of like, they might. Well, by your analogy, even more hopeful than that: what if they are a corrective? Yes. To our excesses. Yes. And they come in with a difference, or even just as a common enemy that unites humanity. That's always been the question, right? How are we going to unite the whole planet into alignment?
01:45:51
Speaker
Well, maybe if Ultron shows up to try and wipe us off the face of the planet, that will at least temporarily align us. It reminds me of Louis L'Amour. Okay. You should know this quote, because I got it from you: there will come a moment when it seems that everything is over. Yes. That's the beginning. That's the beginning. Yes. Yeah.
01:46:18
Speaker
So yeah, I don't know. I think that's a good note to wrap it up on. We could keep talking about this forever. This has been a long time coming. Thank you so much for jumping on with me. And the future as always is an uncertain mix of good news and bad news.