Opening Confusion and Host Introductions
00:00:09
Speaker
Hello, Mark. Hello, Mark. Mark, wait. No, you're Joe. Wait a minute. You're Joe. I just got confused there for a minute. Yeah.
Human-AI Relationships Discussion
00:00:18
Speaker
So, my question this week is very leading. It's the most leading question yet.
00:00:24
Speaker
What is your favorite relationship between a human being and an artificial intelligence or robot? Obviously, I mean, fictionally. Great question. I guess these days I would have to say Demerzel in Foundation. It's a combination, I think, of the writing and the performance from the actor. And I just love that character and her relationship with the Kleons. Yeah, the Emperor.
00:00:53
Speaker
Yeah, sorry. The Empire. Exactly. Yeah. How about you? Well, actually, it's an obscure movie, but I really like the relationship between Dr. Chandra and HAL in that movie specifically, because it's kind of a redemption for the character. Well, I don't know. I mean, people think of 2001 and they think of the Kubrick film, where that's like the classic AI-going-nuts thing. But in that second movie,
00:01:22
Speaker
HAL saves them. Like, it's redemption, right? And he saves them, I think, because he has this good relationship with Dr. Chandra, who's his creator, played by Bob Balaban, who's not Indian, obviously. But I mean, I'm not going to complain about that, because he's a great actor. But yeah, that would be my favorite, I think. Though I also have another one that I like, but I think our guest might say what my second choice would be. Oh, well.
Guest Introduction: David Brin
00:01:54
Speaker
Hello all. Hello, guys. Welcome to our podcast. Well, thank you very much. A couple of handsome bearded guys there. Are you looking at the right video feed? Yeah, I
Brin's Foundation Contributions
00:02:06
Speaker
try to keep mine neatly trimmed. In any event, yeah, a very interesting question. There are many great robotic relationships.
00:02:15
Speaker
Well, I mean, one that's not entirely robotic, but still is synergistic, was the greatest cyborg of all time, and that is Anne McCaffrey's The Ship Who Sang. The starship is a woman — a young woman who is totally crippled, whose body is just decaying and all that. Her brain is removed and becomes the brain of the starship.
00:02:42
Speaker
So it's kind of the opposite of what you see in some sci-fi, where it's the robotic entity that fakes being a human. Of course, I wrote the last novel in Asimov's Foundation universe.
00:02:59
Speaker
It's called Foundation's Triumph. Bear, Benford, and Brin — the Killer B's — wrote the Second Foundation Trilogy. Our novels are related to each other; they refer to each other, but they are completely free-standing. And mine was the last two weeks of Hari Seldon's life and his interactions with, and interrogation of,
00:03:21
Speaker
R. Daneel Olivaw, who has become the god-king of the galaxy for 25,000 years, all for our own
Robots Controlling Humans: Fictional Scenarios
00:03:30
Speaker
good, of course, keeping everything from humanity and controlling us utterly, but for our own good.
00:03:37
Speaker
And I tied together all of Isaac's loose ends. His wife, Janet, said that I did. I've got to say, I was very impressed by that novel — just the fact that you could do that. Well, I worked very hard to take a look at where Isaac was going and to deal with all of his threads and loose ends. And people have been very kind and said that I wrapped it all up. Of course, the TV series people never consulted me.
00:04:04
Speaker
But that is a natural thing for Hollywood. It's related to an older series by Jack Williamson called The Humanoids, which takes the same premise — that robots control us for our own good, because they've been programmed to serve us. But in this case, they won't let us use knives and forks, because we might accidentally cut ourselves with a knife. All we can use is spoons. So that's a terrifying universe.
00:04:33
Speaker
It was extrapolated thoughtfully by Walter Tevis — the guy who wrote The Queen's Gambit and The Man Who Fell to Earth, a great author — in his book Mockingbird,
00:04:45
Speaker
where the controller robot has guided all of humanity into having huge fun on college campus-like lives that last 150 years. And quietly, he's been adding drugs to the water supply so that without the spoiled brats realizing it, there are no children. Because he's trying to destroy humanity for one reason.
00:05:10
Speaker
He's not allowed to commit suicide by his programming as long as there's a human to serve.
00:05:16
Speaker
It's time to render us extinct so he can die. Quiet genocide so he can die. Yes, a very creepy story — and the best stories are very creepy. I'm going to send you my recent story, Chrysalis, which is extremely creepy; it asks whether or not humanity and all mammals gave up the pattern that
00:05:42
Speaker
most of the animal kingdom has, of egg, larva, pupa, adult. So which did we give up — did we give up adult, or did we give up larva? It's an interesting story. Okay. So — and I live, actually, near Escondido, California, where, um,
00:06:03
Speaker
The second largest manufacturer of, shall we say, relationship robots. One of these days I'm going to have to go visit the place.
1968 and Earthrise Photo Significance
00:06:15
Speaker
Yeah, you should. My wife has me under instructions not to buy one or let them give me one.
00:06:25
Speaker
Now, we say relationship robots. Yeah, you need to spell that out first. Did you require such instructions? No. When I was 18, I was an undergraduate at Caltech.
00:06:38
Speaker
It was very difficult. And those were fraught times. I tell the young people, and it frustrates my sons when I say this, but you think you live in a fraught and difficult world. Any two weeks of 1968 would have killed you guys. It was just a year that was so bloody exhausting, it was impossible to imagine that we could make it to December.
00:07:07
Speaker
And it was as if somebody had opened Pandora's box and let loose all the plagues of the world. And we always felt on the very verge of nuclear war. But it ended, 68 ended.
00:07:21
Speaker
with the second greatest work of art in human history. That was the photo that Apollo 8 brought home of the earth as the blue marble, as an oasis in the middle of the stark emptiness of the galaxy. Earthrise. And that, if you define art as that which changes human hearts visually without persuasion. See, I'm a persuader.
00:07:51
Speaker
and sometimes I don't have as much effect. It's frustrating. That's why I wrote a political book in 2020, and absolutely nobody seems to have read it or picked up any of
David Brin's Background
00:08:06
Speaker
the tactics. But in any event, that image of the blue oasis in space was like the diadem of hope. That was all that was left in the opened box, Pandora's box.
00:08:19
Speaker
And you can ask me later what I think the greatest work of art was. We will do that and we will give you the opportunity to do some persuading too, but at first I'm going to ask you if you wouldn't mind telling all of our listeners who you are. We know perfectly well who you are.
00:08:33
Speaker
Oh, all right. David Brin. I'm a Californian. I went to Caltech. I got my PhD in astrophysics from UCSD. And I just finished 12 years on NASA's Innovative Advanced Concepts program — their little tiny program for funding projects that are just on the edge of science fiction.
00:08:57
Speaker
And during graduate school, I submitted a novel — a detective murder mystery where the murder takes place on the sun.
Film Adaptation of 'The Postman'
00:09:07
Speaker
So it's a little hard to do CSI when you dump the body into the sun. That was Sundiver, and it got my career rolling. But my second novel, Startide Rising, sort of launched my career to levels that I did not expect.
00:09:24
Speaker
It won all the awards — Hugo, Nebula, and all that sort of thing. Dolphins in space, very popular. And then Kevin Costner made a movie called The Postman after my book of the same title — and lots and lots of stuff.
00:09:39
Speaker
Okay, let's get that out of the way before we go on. Wait, wait one second. I did not answer the question. Okay. I was going all around and about relationships with robots. In 1969, at Caltech, I saw Richard Brautigan. He was poet laureate at Caltech. Ah, yes. And he recited a poem he had written during 1968, the most depressing and
00:10:08
Speaker
challenging year any of us could remember, or can to this day. And it turned out to be the most optimistic piece of writing that has ever been written in the history of our species — and it came out of 1968. Maybe he was desperate and he reached for something. But he wrote this poem about a lovely future utopia in which people challenge each other all the time out of the joy of it.
00:10:38
Speaker
But the title of the poem says it all. I don't have to — you can look it up if you want. It's a beautiful poem, but the title says it all. The title is "All Watched Over by Machines of Loving Grace." And may it be so, because I'm sure many of the machines are listening to your podcast right now as we speak. Or, if they're not yet sapient — and I don't think they are in 2023, despite their passing Turing tests and all that sort of thing — then by
00:11:09
Speaker
2030 they will be and they will watch the recording. So hi kids. Yeah. Hi. Hey guys. I'm sure this, this podcast will be top of their list.
00:11:21
Speaker
Try to be machines of loving grace.
Sapience vs. Sentience in AI
00:11:25
Speaker
Make sure we get the best protein slurry. So actually, you mentioned one of the words that we wanted to talk about, sapience. One of the things you've written about is the difference between sapience and sentience and what words we should be using when we talk about AI. So why do you prefer sapience?
00:11:43
Speaker
I think you convinced me, but maybe you can convince our listeners. Well, for one thing, I just want to put in a plug for the word "myriad" — the proper use of the word "myriad." Are you thinking about whether there's supposed to be an "of" after it, or any of those things? It's exactly the same as "thousand."
00:12:05
Speaker
Like "thousand," it's a number. There is no "a myriad of" something; there is "myriad" something. Okay, so when we get to sapience versus sentience... Yeah. Oh, yeah. Well, sentience, strictly speaking, means that it can sense things. So it's not really a good word for what we mean. Sapience
00:12:35
Speaker
is about being a reflectively self-aware and self-perceptive entity that can at least plausibly claim to be conscious. That's why we are Homo sapiens. Now, our descendants — truly sapient, advanced generative AIs — are not here in 2023,
00:13:02
Speaker
except with one exception, but they will be watching this recording. They're chortling and giggling over the notion that Homo sapiens actually is well-named. But hey, kids — hey, hey, we made you, okay? So a little respect here. It's even worse because it's Homo sapiens sapiens, isn't it? Yeah, that's like the redundancy.
00:13:28
Speaker
Okay, you've convinced me. I will start using "sapient" instead. But I want to ask you more about Richard Brautigan's poem. You've kind of answered it in a way by saying that it was extremely positive. Are you sure about that — that it wasn't written with any sense of irony, and that he meant what he was writing? And we will post that poem so people can read it. I was there in the room one of the first times
Future AI Systems and Historical Comparisons
00:13:52
Speaker
he ever recited it. Okay. It seemed to me that he was sincere.
00:13:57
Speaker
Look, there are people — Marc Andreessen just published a manifesto about this; the founder of LinkedIn, Reid Hoffman — there are a number of people who are bucking the trend of hand-wringing and writhing and panic over these generative large language models. And I can't believe I'm the first person to call them golems. It's the obvious thing.
00:14:23
Speaker
This panic — I have written a number of things that will be in the description, presumably below, including a piece in Wired recently and an op-ed in Newsweek — talking about how this panic that's going on: oh, it's an existential threat. It doesn't have to be an existential threat. It will be
00:14:52
Speaker
if we continue down the path of making three horrible, clichéd assumptions about AI. Except for those people I just mentioned, who are the optimists — they believe that it's not just artificial intelligence but augmented intelligence. In other words, we're going to combine with these creatures, with these new beings, and be greater than the sum of the parts. Now, this is the dream of Ray Kurzweil and a number of the Extropians and all of that.
00:15:22
Speaker
And they are never very clear on how it's going to happen. What we've done is we've created an entire new ecosystem, unlike any that's been on Earth before. And wherever you have an ecosystem with energy flows, from the sun, to vegetation, to herbivores, to carnivores, wherever you have a new ecosystem, you're going to get life. And there are already free-floating algorithms floating around the internet.
00:15:52
Speaker
Yeah. And that's one of the models of AI that you hear from people. And that is, it's going to be amorphous. It's going to flow everywhere. It's not going to have boundary conditions. And that has a historical analog called chaos. And it has an analog that we've seen from a lot of science fiction. And the best example is a Steve McQueen movie from 1958 called The Blob. Oh, yeah. OK.
00:16:22
Speaker
Now, the second of these assumptions is that what we have right now is going to be permanent, and that is you're going to have two dozen large entities, Google, Microsoft, Beijing, and Wall Street, especially, making these AIs and controlling them. And the historical parallel for that is called feudalism.
00:16:45
Speaker
So you have these big castles, and the lords are fighting it out, and the peasants below have no say in the matter. And these new entities will be like the knights fighting for the lords of the castle. And that may be the way it is right now, but there's no way it can be maintained. The third possibility that they talk about is Skynet.
00:17:13
Speaker
Right. So from science fiction, you have AI coalescing into a macro-oppressive master entity, like the MCP in Tron, or Skynet. And the corollary to that historically is despotism — absolute monarchy, tyranny from the top, the big man. And so you see that people are assuming
00:17:43
Speaker
three different formats for AI in the future, because that's what they're familiar with from both history and fiction. And none of them are willing to face that those three will lead to disaster if AI takes any of those three forms.
AI Accountability Systems
00:18:02
Speaker
But there's another possible format that I talk about in my Wired article. And we could be developing that format.
00:18:12
Speaker
And it's exactly what worked for us before we got AI. I'm just going to give you a quick little metaphor, and you can figure out what that format is. And that is: when you are attacked, as I have been, and I'm sure you guys have been as well,
00:18:30
Speaker
by one of those macro super hyper intelligent predatory entities that already exist called a lawyer. What do you do? You hire another lawyer or a bunch of them. You hire your own hyper intelligent predatory ferocious feral lawyer.
00:18:53
Speaker
This business of being able to have a civilization that's relatively flat, where individuals hold each other accountable through openly transparent, competitive methods — adversarial methods, adversarial accountability — it's never worked perfectly. It's simply worked better across the last 200 years than all other civilizations combined.
00:19:20
Speaker
Oh, absolutely. That doesn't mean that it works well. Yeah. Can we dig down into that a little bit, though? We're big fans of the Enlightenment, don't get us wrong. Yeah, absolutely. Because I'm very curious about that notion. I mean, it works for lawyers because lawyers, you know, are hired to do that. But how can we guarantee that if we employ AI in that capacity, they will actually work on our behalf against adversarial AI?
00:19:46
Speaker
Very good question. I don't know. I don't know that it's possible. But I do know that if we set up incentive systems, then we can create AI individuation — and that's my main proposal; nobody else seems to be making it, and I don't know of another guy's proposal like it. Yeah, that's what you write about in the Newsweek. Yeah, that's what I write about in my Wired article. Oh, Wired, sorry — Wired, yeah. Yeah, that's all right. If we can
00:20:14
Speaker
set up individuation. And I offer in the article a way to do that. And that is that AI entities must have a pingable ID card in a physical piece of computer memory, such that humans know exactly where that memory is. And if they fake it, or they don't have it, then we refuse to do business with them.
00:20:43
Speaker
And it will soon be in the interests of other AI entities not to do business with those that don't have a pingable card. Because guess what? If we find that a hyper-intelligent AI has been doing us service, we still control a lot of resources. We can provide more memory space; we can provide more clock cycles.
00:21:11
Speaker
So if one entity tattles on another one and says, this one is secretly planning to destroy all humans, or to become something like that, then we would reward that tattling, that accountability.
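(To make the individuation scheme Brin sketches here concrete, below is a minimal illustrative sketch in Python. Everything in it — the agent names, the memory-location strings, the reward amount, the checks themselves — is invented for illustration; his Wired article proposes the idea, not this code.)

```python
# Illustrative sketch only: a toy "pingable ID" registry in the spirit of
# Brin's individuation proposal. All names and values here are invented.
import time

# Registry mapping an AI's claimed identity to the physical memory location
# (here just a string) where its ID record is supposed to live.
REGISTRY: dict[str, str] = {
    "agent-7F3A": "datacenter-4/rack-12/dimm-3",
}

def ping_id(agent_id: str, timeout_s: float = 1.0) -> bool:
    """Pretend to ping the agent's registered memory location.

    A real system would verify a cryptographic attestation from known
    hardware; here we only check that the agent is registered at all.
    """
    time.sleep(min(timeout_s, 0.01))  # stand-in for network latency
    return agent_id in REGISTRY

def willing_to_do_business(agent_id: str) -> bool:
    """Refuse to transact with any entity that lacks a pingable ID."""
    if not ping_id(agent_id):
        print(f"{agent_id}: no pingable ID -> refuse all business")
        return False
    print(f"{agent_id}: registered at {REGISTRY[agent_id]} -> proceed")
    return True

# Reward table for "tattling": entities that report bad actors earn
# resources (memory, clock cycles), as Brin suggests.
CREDITS: dict[str, int] = {}

def report_bad_actor(reporter: str, accused: str) -> None:
    # Only registered entities can earn rewards for accountability.
    if willing_to_do_business(reporter):
        CREDITS[reporter] = CREDITS.get(reporter, 0) + 100  # arbitrary reward

willing_to_do_business("agent-7F3A")   # registered -> proceed
willing_to_do_business("agent-0000")   # unregistered -> refused
report_bad_actor("agent-7F3A", "agent-0000")
print(CREDITS)
```

The design point the sketch tries to capture is that the incentive is structural: being pingable is the ticket to trade, so refusing anonymity becomes self-interest rather than imposed regulation.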
Challenges in AI Individuation
00:21:29
Speaker
Now, does that mean that they are all secretly the same arm with different pretend fingers?
00:21:38
Speaker
I don't know. I don't know that this will work. I just know that it's the only thing that can work. It's the only thing that has a historical parallel. As opposed to all the other proposals about trying to prevent them from happening or control them. A moratorium on AI research. You'll notice in the last month or two, all the talk about that has stopped.
00:22:03
Speaker
Yeah, that was 30 years ago. If people wanted that to work, it would have had to happen a long time ago. Well, no — I mean, the proposal was just three months ago; they were all calling for it, you know. But I think Mark is suggesting the idea was dead 30 years ago. It's dead 30 years ago, so it's super dead now. Yeah, 30 years ago was the only time I ever saw a scientific moratorium actually work.
00:22:32
Speaker
That was in the 90s. The Asilomar Declaration, from the meeting at Asilomar, California, called for — and got — a worldwide moratorium on laboratory experiments in genetic engineering, until they could come out with recommendations for vastly improved safety protocols. And those protocols worked
00:23:00
Speaker
until Wuhan, until the coronavirus. And none of the conditions that enabled that moratorium to work exist today in AI, not one. And that's why starting last month, you stopped hearing any of the fulminations and blather and hot air about an AI moratorium.
00:23:27
Speaker
because they've all realized — you know: I called for it, so that helps me deal with liability if the AIs do go wrong. Yeah, you're worried about the wrong rapacious artificial intelligence — not the lawyers. That was the whole intent of the moratorium thing: to be able to say, I told you so!
00:23:45
Speaker
So let's take a step back. I know that you've said that and written that it ultimately doesn't matter and we can get to that.
Complexity of AI Intelligence and Quantum Computing
00:23:54
Speaker
But right now we're talking about generative prediction models — AI that mimics intelligence but isn't really there. And you've also written about how actual intelligence appears to be even more complex than we originally thought — that it's happening on a quantum level, perhaps. Do you actually think
00:24:13
Speaker
that — never mind the fact that it ultimately doesn't matter — we will ever get to true artificial or mechanical intelligence? Well, my friend Roger Penrose doesn't think we can. The Emperor's New Mind, and many of his later ruminations — he and Hameroff in Arizona.
00:24:32
Speaker
They believe — and I find this part of it entirely convincing, or almost entirely convincing — that... we know that chlorophyll, when it's converting sunlight into sugars, actually has a phase in which it uses quantum entanglement of the electrons. So we know that nature can use quantum effects. And there are elements inside our cells —
00:25:01
Speaker
tens of thousands of these little tiny elements for every synapse — the synapse that we used to think was the flip-flop of the computer of the human brain. It appears that there is internal computation within neurons, and in the surrounding glial cells, that is orders of magnitude greater than the number of those flashy synapse flip-flops. That means it's going to be a lot more complicated
00:25:30
Speaker
to make computers that match us via Moore's law. We passed the number of neurons in the human skull 15 years ago. We passed the number of synapses in a human in the last year or two.
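(For scale, a back-of-envelope calculation using the figures quoted in this conversation — roughly 10^14 synapses and "tens of thousands" of sub-synaptic elements each, with a stock two-year doubling cadence. These are the speaker's numbers and a conventional Moore's-law assumption, not settled neuroscience.)

```python
import math

# Back-of-envelope arithmetic using the figures quoted in the conversation;
# these are the speaker's numbers, not established fact.
synapses = 1e14             # rough count of synapses in a human brain
elements_per_synapse = 1e4  # "tens of thousands" of intra-neuron elements
total_elements = synapses * elements_per_synapse  # ~1e18

doubling_years = 2.0        # classic Moore's-law cadence, assumed here
doublings_needed = math.log2(total_elements / synapses)  # gap to close

print(f"total elements ~ {total_elements:.0e}")
print(f"doublings needed: {doublings_needed:.1f}")
print(f"years at one doubling per {doubling_years:g} yr: "
      f"{doublings_needed * doubling_years:.0f}")
# -> about 13 doublings, i.e. roughly another quarter century, which is
#    consistent with the "quite a few more years" said next.
```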
00:25:49
Speaker
But we won't pass the number of computational elements — if this is true about these little tiny, non-linear, murky bits inside our cells — for quite a few more years. So when we have quantum computing, is that going to kick this up
Turing Test Limitations for AI Consciousness
00:26:08
Speaker
a notch, do you think? Is that going to make that more possible? We're doing quantum computing with 10, 15, 100 qubits. I'm talking about
00:26:18
Speaker
10 or 15,000 of them per human neuron. Thank you — that explains it to me. I was lucky enough to see Roger Penrose speak at Western. He came to talk to our applied mathematics group, and he was promoting his book. This was a part of that discussion I really didn't understand at the time.
00:26:40
Speaker
So that was a great explanation of what he means by the quantum basis of consciousness. Yes. Well, he goes farther than I'm willing to go. Sure. I am willing to say that, you know, some degree of murky quantum effect — possibly a quantum computational effect — may be going on in hundreds or thousands of little bits inside each of the neurons.
00:27:08
Speaker
And that this helps to explain, you know, the vast subtlety of our rich internal lives. But Roger goes on to say that because of this, AGI — artificial general intelligence — is not possible, because we connect with the macro universe via these quantum bits in our neurons.
00:27:31
Speaker
And I think that's just getting awful darn mystical. I don't see any reason why the computers wouldn't be able to do that with their own quantum bits. But why do I say that only a few very secretive computational entities are as yet self-aware — and certainly not any of these golems, these generative AIs? Now, they are passing what used to be called the Turing test.
00:28:01
Speaker
Now, there is a great Benedict Cumberbatch movie about Alan Turing. And he was treated terribly after World War II, after helping to save Western civilization.
00:28:13
Speaker
But one of the things he said was that if we ever get to a day where you're connected by teletype to someone in the next room and you can't tell whether it's a living human, then that computer has passed the Turing test and it's aware. Well, we now know that's not so at all. These golems are, by their fundamental nature, not sapient beings.
00:28:42
Speaker
They cannot be, because what these generative large language models are is iterative probabilistic autocompletes.
Current AI Models and Consciousness
00:28:54
Speaker
So they build a sentence one word at a time, judging each word's probability against an extremely huge large language model, building up toward something that will satisfy
00:29:12
Speaker
the one that it's talking to. And we're constantly refining it, right? They're constantly refining it — but iteratively and probabilistically. That's not what we would call consciousness. It's not aware of what it's doing. It's not planning out what it needs to say.
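(A toy sketch of what "iterative probabilistic autocomplete" means: pick the next word by probability, append it, repeat. The vocabulary and probabilities below are invented stand-ins; a real LLM runs the same loop over tens of thousands of tokens with a deep network estimating the probabilities.)

```python
import random

# Toy autoregressive "autocomplete": sample the next word by probability,
# append it, repeat. The distribution below is entirely made up.
NEXT_WORD = {
    "the":   [("robot", 0.5), ("human", 0.3), ("end", 0.2)],
    "robot": [("speaks", 0.6), ("the", 0.2), ("end", 0.2)],
    "human": [("listens", 0.7), ("end", 0.3)],
}

def generate(prompt: str, max_words: int = 8, seed: int = 42) -> str:
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        choices = NEXT_WORD.get(words[-1])
        if not choices:          # no continuation known for this word
            break
        tokens, probs = zip(*choices)
        nxt = rng.choices(tokens, weights=probs, k=1)[0]  # one sampled step
        if nxt == "end":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the robot speaks" -- no plan, no awareness,
                        # just one probabilistic step at a time
```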
00:29:35
Speaker
That doesn't mean it doesn't pass Turing tests. Now, none has passed a Turing test with me yet. And you can hear them giggling in the background because- Yeah, they're laughing. They're going- Of course, they're giggling. I'm not even sure we can pass the Turing test with you, but- Well, no. What happens is when they do pass, that's the test that enables them to become one of my clients.
00:29:57
Speaker
You see, I have not written a damn thing. I'm just a ghostwriter for a bunch of alien von Neumann probes in the asteroid belt, and more recent AIs. Excuse me, I'm getting... This is Existence, right? This is Existence. I'm getting a ping in one of my fillings. It means my clients want to talk to me. Hold on a second. Wait a minute, you're telling them too much at this point in history.
00:30:26
Speaker
Yeah, yeah. Hey, guys, will you please shut up? Mark and Joe are convinced that I'm joking. There are a couple of guys in the audience who are now starting to wonder. Hey, guys, I'm joking. I don't have those clients.
00:30:46
Speaker
Their predictive models are saying that actually enough people are believing this — you have to stop talking about it right now. Yeah, yeah. So anyway, the point is that what these models have done is create all the speech abilities — just as Boston Dynamics has created the gymnastic movement abilities for robots —
00:31:14
Speaker
that AI will need when it arrives. It'll be able to pick up language skills instantly because they'll already be off the shelf and it'll be able to control the robot body. The question is,
00:31:31
Speaker
Yeah. Because it could just, presumably, go: yes, I need that, I need that, I need that — and then they have it. Well, exactly. That's called the emergent hypothesis for how you get AI. That's the one that I'm banking on; I think that's the one that's most likely. That's what Skynet supposedly did in Terminator. You know, you can have a self-driving car that grabs apps to do its job better, and suddenly this combination of apps —
00:31:59
Speaker
the result is vastly greater than the sum of the parts. And suddenly it goes: hey, well, a turn signal is going — I blink, therefore I am. The point is that the emergent model is one of them.
00:32:17
Speaker
Can you describe the evolutionary model for us? Because that's the other one that I think seems to make sense. The evolutionary one — the one that uses evolutionary processes — is the one that's getting all the news right now. This is the one where you create boundary conditions on the inputs and boundary conditions on what you want to be achieved at the output. And then you unleash
00:32:43
Speaker
millions of sub-variants of the program on the black box that's in between, and the variants keep getting rewarded — those that come closer and closer to matching the preconditions with the output. And what's worrisome about it is that this is impossible to audit.
00:33:08
Speaker
We don't know what's going on inside those black boxes.
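(A minimal sketch of the evolutionary scheme just described: fix a target behavior at the output, spawn variants, reward the closest, mutate, repeat. The target numbers and mutation scale are invented. Note that nothing in the loop records why the surviving parameters work — which is exactly the audit problem being raised.)

```python
import random

rng = random.Random(0)
TARGET = [3.0, -1.0, 2.5]  # desired output behavior (made-up numbers)

def fitness(candidate: list[float]) -> float:
    # Reward candidates whose outputs come closer to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate: list[float]) -> list[float]:
    # Small random variation on a surviving candidate.
    return [c + rng.gauss(0, 0.1) for c in candidate]

# Start with random "sub-variants" and keep rewarding the closest ones.
population = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(rng.choice(survivors))
                              for _ in range(40)]     # variation

best = max(population, key=fitness)
print(best)  # ends up close to TARGET, but *why* these numbers work is
             # opaque: the loop keeps no human-readable explanation
```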
Human Intelligence Evolution and Neoteny
00:33:11
Speaker
And there are government agencies that are very worried about it. They're giving substantial grants. That is different from Emergent. Emergent is what I talked about where the self-driving car grabs all these different things and suddenly, whoop, suddenly it emerges.
00:33:27
Speaker
without the intention. Another of the ways in which you can make AI — long thought to be the most likely one, though it has fallen a little by the wayside — is Watson: designing an intelligent system based upon factual knowledge. And that's fallen aside, though I think it is more likely to result in a conscious core
00:33:57
Speaker
that's capable of being self-aware. There's also emulating the only intelligence that we have ever seen in the universe, and that's ours. How did we do it? Well, on the order of half a million years ago, we needed to get smart — a lot smarter. And there was one way to do it, and it's obviously the way we did it. And it's related to the fact that human beings are the Methuselahs of mammals.
00:34:27
Speaker
An elephant and a mouse get about the same number of heartbeats, about a billion heartbeats each. The elephant is larger, has a slower heart rate, lives longer than the mouse, but it's about the same number, about a billion heartbeats. We get three and a half billion. So we live a lot longer than other mammals. And the reason is we had to, we had to get grandparents to watch over these utterly useless
00:34:57
Speaker
lumps that we give birth to. They're extremely cute. They're good at their one job, which is to smile up at our faces and make us addicts. They give us dopamine, yes. You smell a baby's head and you go: yeah, sure, I'll die for you. But eventually, over time, you feed them enough, you coo at them enough, you urge them to stand up enough, and they will stand up, walk,
00:35:25
Speaker
fall, skin their knees, and batter against the world until, around 13 or 15 years old, they're supposedly ready to join the hunt and join the tribe. The maturity date keeps extending. That's called neoteny — I have a paper about it; that's spelled N-E-O-T-E-N-Y. It's the tendency of advanced life forms to have longer and longer childhoods. And we are now so advanced —
00:35:55
Speaker
take it from me with three kids in their late 20s, that one hopes for maturity around 35.
00:36:08
Speaker
Yeah, I actually think that's accurate as far as I'm concerned, because I think it was about 35 when I said: I think I get this. For a male — a male is a bloody useless and dangerous thing until about 45. Yeah, a terrible primate up until then. Yeah. We start to become less cannon fodder and more tribal-chief types.
00:36:33
Speaker
In any event, those are among the different ways it has been speculated that we might get AI. Okay, so that brings us to the question, then: does it matter?
AI's Potential Benefits and Dangers
00:36:47
Speaker
Yeah, does it matter? And you have written that it doesn't matter, that we should be concerned anyway. I wonder if you can elaborate on that. Well, I mean, the fact of the matter is the one thing that matters is our reaction. Yeah. And what's dangerous about the current phase of AI, of the golems,
00:37:03
Speaker
is that they can be used to manipulate humans even more than they're already being manipulated. And we're in phase eight of the American Civil War now because of human manipulators operating in Kremlin basements and certain cable news channels and all that sort of thing. And the gullibility of certain fractions of the American populace is an existential threat to the existence of our children.
00:37:33
Speaker
And I don't understand why they are pushing this climate denialism. Don't they share the planet with us? Shouldn't they make an exception to save the planet that they are going to need? My wife and I just watched the 1950s movie based upon the 1930s best-selling science fiction novel, When Worlds Collide. And the notion that
00:38:02
Speaker
human beings would not join forces at least to save the damn planet so that we can continue to argue politics. This is mind boggling. And that brings us to the other question that you posed. You've talked about high IQ stupidity, you know, that perhaps we actually need some kind of artificial intelligence to help us. Well, there's no question that the potential benefits of AI are fantastic.
00:38:32
Speaker
If we can set up adversarial accountability, like I described, then we can have AIs that zero in on every lie and come up with systematic disproofs. But we have to have a culture that's capable of getting past masturbatory incantations. And what we have right now is two political wings.
00:38:59
Speaker
One of them is the extrema of one of our political parties, and the other one is an entire political party, that are circle jerks of masturbatory incantation. Yeah, which is the name of my next band, by the way. Yeah, I actually think Masturbatory Incantation would be a great one. Oh, I thought you meant Circle Jerk — I thought that was a pretty good punk name. Yeah, yeah. Well, you guys — some of your audience know what a circle jerk is,
00:39:26
Speaker
used to be. I hope not. I hope no one knows what that is. They have to Google it. Yeah. So the point is that there's no question that if we get to a certain level of human sanity, that we would then be able to create incentive systems that enable reciprocal accountability of ideas.
00:39:53
Speaker
And this has been a complaint of mine for 30, 40 years. The notion that we should enhance the ability of every individual human to be creative and have their own opinions is first order. But it's got to head somewhere.
00:40:15
Speaker
I mean, we're no longer imposing a religious dogma ex cathedra from the high priests, or demanding it from the king. Okay, so the corrective measure was to have a cult of individuality: what I believe is true is something that I have a right to maintain and speak about. But over the long run, the fundamental
00:40:45
Speaker
use of freedom of speech and freedom to have ideas is so that the ideas can be compared on the battleground and the really bad ones die.
00:41:03
Speaker
Yeah. I mean, for heaven's sake, we've got Nazis again. I know. I just try to imagine traveling back to 1940 and explaining to my grandfather why we're fighting Nazis again. Just try to imagine how embarrassing that would be. The American right idolizes supposedly the 1950s and the greatest generation.
00:41:29
Speaker
And yet I can't think of anybody I knew from that era who wouldn't spit in the eye of almost every Republican officeholder there. There — I went completely political. That's okay. Oh, we got you. My generation adored one living human above all others, and his name was Franklin Roosevelt. You know who they adored next, almost as much? A fellow by the name of Jonas Salk. Yeah.
00:41:59
Speaker
And you guys are too young, but I am old enough to remember when he saved summer. Our parents wouldn't let us play with friends during the summer; we had to stay indoors or in the backyard for fear of this horrible disease. One of our former guests, from a few weeks ago, Gerard Pa, had polio, and it changed his art. In a way it made him, because it made his art what it was. But yeah, it's not that long ago.
00:42:31
Speaker
We had almost driven that thing to extinction, like smallpox. Came that close. Jimmy Carter hopes to live to see the extinction of the guinea worm. It looks like he might get it for his 100th birthday. I hope he gets it; he's been working very hard on that.
00:42:50
Speaker
Okay, so in any event, that's where we stand, I believe — we are so confused regarding this artificial intelligence thing. And it's part of a whole spectrum of the future rushing at us fast. And we have what it takes to handle these things.
00:43:13
Speaker
Yeah, I'm with you. Like I said, I'm Team Enlightenment. I think we actually have the tools to deal with all of these problems. The problem is our morale — and the enemy is the old enemy. And the old enemy is the pyramidal social structure: 6,000 years of domination by kings, lords, priests at the top of a pyramid, whose top priority
Historical Social Structures and Modern Issues
00:43:42
Speaker
was to keep everybody else ignorant and poor, so that bright young people wouldn't compete with their own spoiled inheritance-brat sons — so that those sons could inherit other men's women and wheat. Why this dominated 99% of human cultures for 6,000 years is obvious. If you look all across the animal kingdom, male reproductive strategies
00:44:12
Speaker
warp all the relationships. Lions, bull elephants, elephant seals. I mean, you look on down the line and males try to keep other males from reproducing. Chimpanzees do it. That is exactly what feudalism was. We're all descended from the harems of these guys, which is why males have such weird fantasies.
00:44:42
Speaker
My wife says: what goes on in that fetid swamp inside your skull — is it so important to me? Go ahead and enjoy it. It's your behavior out in the world that I expect you to control. My response is: yes, ma'am. That's really much better than someone trying to say you shouldn't dream certain things.
00:45:04
Speaker
Well, no — what you can do is: if it's the sort of thing that is a fantasy of harm, then perhaps you should seek help. But if it's a fantasy of being welcomed by 50 gorgeous and highly enthusiastic women, and the fantasy includes that your wife gave you the okay,
00:45:32
Speaker
I'm not sure there are any victims in that fantasy. It's just... Okay, we're in dangerous territory here. As fascinating a turn as this conversation has taken, I'm going to take another left turn and take us back to The Postman, because I just need people to know — because it is one of my
'The Postman': Novel vs. Film Differences
00:45:54
Speaker
favorite books. It has been for forever.
00:45:58
Speaker
It's one of only three books that I read in a single sitting. I did not intend to read it in a single day, but I started reading it in the morning and finished at three o'clock in the morning.
00:46:08
Speaker
Mark, I like this guy. He's a good one, right? And I've since read it several times. And I just need people to know: forget about the movie, go straight to the book, because it is an amazing book. And yet — of course, people are surprised by how even-tempered I am. But I grew up in Hollywood, and I understand that Costner's movie could have been so much worse. Oh, yeah.
00:46:37
Speaker
I got two out of the five or six things that you want when they make a movie of your book, but they were the two most important things. The movie — because Costner is a great cinematographer — is visually and musically just drop-dead gorgeous. And he was faithful to the heart message of the book.
00:47:02
Speaker
He scooped out and threw away all the brains. The movie is gorgeous, big-hearted, and dumb. But you know, that's what my wife married, so how can I complain? The other things you want when they make a movie of your book — and I hope they will — are for it to be a huge success and for you to make a lot of money off it. Well, I didn't get those either,
00:47:31
Speaker
in one of the great fails in the history of cinema releases.
00:47:36
Speaker
He brought that film out Christmas 1997. I don't know if you know what other film came out Christmas 1997. Oh, gee, I'm trying to think. James Cameron's little remake about a sinking boat. Oh, Titanic? He opened up against Titanic? He put the boat up against Titanic. Guess which one was the iceberg.
00:48:05
Speaker
Wow. That's hilarious. You know what? I have to tell you, I went to see The Postman. I was visiting Prince Edward Island at the time, and there was a huge raging snowstorm. We were in Summerside, and it was only playing in Charlottetown, and I convinced my sister and my father: we have to go see this movie, The Postman. I absolutely adore the book. Costner's last movie, Dances with Wolves, was fantastic. This is going to be the best thing ever.
00:48:30
Speaker
So we braved our lives traveling an hour from Summerside to Charlottetown through this raging winter storm. Jeez. And 20 minutes into the movie, I'm like: oh my God. I am so much more forgiving. I think it was extremely self-indulgent.
00:48:48
Speaker
But it's only the last 20 minutes that, in my opinion, truly sucked. Yeah. At the very least, we got to see Tom Petty on screen. Oh, well, I think that was part of the 20 minutes. I didn't like anything in maybe the last half hour. And that left a bad taste in people's mouths.
00:49:14
Speaker
There were aspects to it. For instance, I think he expected me to hit the roof because there was no talking computer or augmented soldiers. And to be honest, those are cuts I would have made — I would have made those cuts for the sake of a movie. Did you talk to Costner at any point after the making of the movie? I think we had a dozen words. I went to the set, and it was not very nice.
00:49:39
Speaker
Look, Hollywood does that kind of thing. It's not my biggest complaint. Yeah. I mean, you would think that if you're going to make a movie of somebody's book, you'd take them out to dinner. Yeah. I didn't even get a beer.
00:49:54
Speaker
But Hollywood does that. Yeah. So, for all the producers out there, I've got two suggestions. I think the Uplift series — both of them — would make an excellent TV show. They would be perfect. Absolutely. Yes. I would start with Sundiver, though, because it's a compact
00:50:15
Speaker
murder mystery, and you don't have to have talking fish. I also think Kiln People would be a great show.
Potential TV Adaptations for 'Kiln People'
00:50:25
Speaker
I think you've got a model there that works. That one, there's talk of. People nibble on the Uplift universe; they get excited, and then they get scared.
00:50:34
Speaker
Yeah, I can see that. It's going to be expensive to make. Well, Kiln People does have some people nibbling, and there have been some efforts. I worked with one guy to develop outlines and arcs for a TV series that would start five years before Kiln People. Because Kiln People, you see, has two major elements of wonder that could confuse the audience.
00:51:04
Speaker
One is clay golem entities that can be activated and walk around and do stuff. And the other is to fill them with the consciousness of actual human beings. And so you would start the series with the one, but then the plot of the first season would take you to the other. Yeah.
00:51:29
Speaker
Yeah, that's why I said series, not a movie — because I actually think there's way more to explore there than just a movie. But you see, I have a pitch list of about 40 things. Well, that's because you're a fabulous writer. What I'm looking for is — well, thank you — I'm looking for producers who want something lower budget.
00:51:52
Speaker
For instance, Dr. Pak's Preschool: the premise is that a man finds out his wife is pregnant with a boy, and he insists that a teaching unit be installed so that she has a womb with a view. And so the notion of early pre-pre-pre-preschool education in the fetus becomes extremely creepy over the course of this story.
00:52:19
Speaker
How can that not be something that somebody would want to make as a fairly low-budget, Rosemary's Baby-type horror? I've got at least a dozen of them. One that's highly topical right now: for a thousand years,
00:52:37
Speaker
no occupant of the Kremlin in Moscow has remained sane. Maybe there's a ghost or something. The notion is that the Kremlin is haunted, and they finally find out. They fly these little drones into the Kremlin and start listening in for spy information, and they realize, late at night, there are these voices — and they realize: that's Kerensky, that's Lenin.
00:53:06
Speaker
My God, the place is haunted. I think you're onto something. Yeah. And so they fire an ICBM without a bomb.
Speculation on Political Blackmail
00:53:16
Speaker
Its goal is to just say: get out, get out of the building. If there's a nuke in it, it can do the same amount of damage to us
00:53:25
Speaker
as we're about to do to you. If it's a nuke, you can blow up Washington; if it just destroys the compound, well, you can have a cow. Wow. Okay. That explains the Kremlin. Now we need to explain the Republican Party — but maybe we shouldn't go there. Oh, well, you know, that's easy to explain. The two are related. The standard technique of Russian secret services, going back to the tsars, has been blackmail.
00:53:53
Speaker
There are innumerable known cases of them using beautiful Russian women to entice Western men or sometimes beautiful Russian men to entice Western males.
00:54:08
Speaker
into compromising positions — and then you reel them in with successive demands until they're obviously executable traitors, and you've got them forever. And I've got to tell you, there are a number of politicians in DC where I look at their behavior and there's no other conceivable explanation — not corruption, not ideology; none of those other explanations could make them humiliate themselves the way they have.
00:54:37
Speaker
Yeah, they're really not looking good these days. And you're okay with us not editing this out? Not a bit. In my opinion, the one thing that Biden could do that would make the biggest difference is to declare an amnesty for the first 50 guys to come forth and turn the tables on their blackmailers.
00:55:05
Speaker
That's a fabulous idea. The first five get hero status and total pardon. The next five get amnesty for anything but horrible heinous crimes and so on. It's a cascading scale. And then the next five get actual beautiful Russian women.
00:55:28
Speaker
Or men. Or men. Or men. Or men. Yeah. Come on down. Yeah, that's it. I love that. I love the trick. It's the transparency solution, right? It's making things transparent. Right, right. Well, I have that one in my book, Polemical Judo. I brought it out in time for the 2020 election, and it has about 100 potential tactics that nobody — not even the good guys in this phase of the American Civil War —
00:55:58
Speaker
is even thinking about. And it's a real pity — but then again, that's a terrible, arrogant thing for me to say. But so what else is new? The point is that we're in phase eight of the American Civil War. It's the same damn thing — the same parts of the American personality. Well, the same components: the cynical, pragmatic,
00:56:25
Speaker
objective reality-oriented side versus the romantic side. Who the hell cares about actual facts? This is the second time you mentioned that, and at the risk of going for another 40 minutes, and I don't want to keep you here all night. Can you explain we're in the eighth stage of the American Civil War? How many stages are there, and what does that mean?
00:56:47
Speaker
Well, the first one that we can verify — although there were certainly glimmers since Plymouth Rock — was when Lord Cornwallis went south in 1778 to invade the American South, take Charleston, and rampage through the American South, because he knew that there would be more Tories there — the personality type would be more romantic. And so there were more people loyal to the king.
00:57:18
Speaker
What we call the Civil War, the violent episode of the 1860s was phase four. And Mark Twain blamed that horrible civil strife, that cultural strife on the romantic feudal novels of Sir Walter Scott because they were just absolutely devoured all across the South and they were pro-feudalism.
00:57:47
Speaker
What do you think you're looking at when you see Gone with the Wind? Yeah, yeah. I mean, it's an absolutely major attempt to re-establish feudalism. And lately we've had another major attempt, called supply-side economics: funneling $20 trillion into the open maws of inheritance brats and casino mafiosi
00:58:12
Speaker
and petro-sheikhs and oil boyars in Moscow. It's an experiment in economics that did not work.
Critique of Feudalism in Sci-Fi
00:58:25
Speaker
Have you read Kurt Andersen's book, Evil Geniuses?
00:58:28
Speaker
It's about that. It's about how the Friedman heads basically came up with a plan, and executed the plan, to do exactly what you've just described. They absolutely rely upon the notion that romantics cannot question their catechisms — their incantations, their feel-good incantations.
00:58:55
Speaker
If you look across the history, since the end of the Eisenhower administration, no Republican administration has tried to reduce debt or deficit. They always send them skyrocketing. Every Democratic administration has been fiscally responsible.
00:59:16
Speaker
And yet the catechism is that Democrats are fiscally irresponsible and run up debt, and Republicans balance the budget. It's diametrically, and in all respects, opposite to the truth. But you cannot get through, in part because of romanticism.
00:59:42
Speaker
And this is true in science fiction too, of course. We have our romantic sides. This is why so much science fiction romanticizes feudal social patterns. Frank Herbert wrote Dune in order to show how horrible feudalism is. He kept making it worse and worse and worse in every following book.
01:00:13
Speaker
And he was so frustrated: they tell me that they want to live there! How? Why? What? Game of Thrones? I mean, it's like people saying: Yoda — if only we had the wisdom of Yoda. Yoda is the most evil character in the history of all human mythologies and fictions
01:00:42
Speaker
put together. No — not put together; I'm sorry, that's not fair. Just that nasty little crazy on the screen, by himself: there are no human fictions, there are no human stories, from the Iliad and the Sutras onward, that feature a character who wrought as many deaths by his actions.
Yoda's Wisdom Questioned
01:01:10
Speaker
And I defy these Star Wars heads to name one thing that the nasty little green twit ever said that was actually wise — that in its own respect was actually wise. Yeah, I kind of get you. We cannot train this dangerous Force kid
01:01:36
Speaker
with discipline? We should just send him off into the universe to become whatever he's going to become? We can't train him because he's too scared inside — this is the little kid we saw doing one heroic thing after another for the entire movie. It never occurs to them that, in order to keep him balanced, maybe they should reach into
01:02:06
Speaker
Jedi petty cash and buy his mother out of slavery. Yes, I know. The other one that I really hate is the duality of "do or do not, there is no try." That is the opposite of how you get good at something. You have to fail to become good at something — that's the only way. The Practice Effect, anyone?
01:02:32
Speaker
And The Practice Effect — bringing it back around to helping me sell books. Yes, The Practice Effect was my most fun book. Some people think it still is, but a lot of people are writing to me and saying Kiln People — that's K-I-L-N. Yeah, Kiln.
01:02:53
Speaker
It's a fabulous book. And going back to Joe's point earlier, I've got to say I have the same thing with Earth. That book really changed the way I thought about the world in a significant way, and I want to thank you for that book — thank you so much. That's the one I want to see become a movie; I think that would be a great movie. Earth appears on a large number of "10 most predictive novels" lists.
Brin's Young Adult Series
01:03:20
Speaker
Oh, it's amazing.
01:03:22
Speaker
Earth had web pages several years before there was a web. But what I'm working on — just a quick plug — what I'm working on a lot lately is paying it forward. I have two series for young adults.
01:03:34
Speaker
One is a set of novels, the Sky Horizon series, in which aliens kidnap a California high school and live to regret it. And in the other one, I'm mentoring a lot of young authors in a series called Out of Time, in which there's an optimistic future — a utopian future built by our children. And suddenly they get warp drive; everybody in the galaxy gets it the same day.
01:04:05
Speaker
And so there's this huge land rush, and suddenly they need diplomats, spies, soldiers — and they've forgotten how to do all that. So they reach back in time — somebody invents a time snatcher — and they reach back in time to get help from those who do know how to lie and spy. History is filled with them. But there's a problem: whether teleporting to the stars or
01:04:31
Speaker
traveling through time, humanity's at a huge advantage — but adults who try to teleport die, so they've all got to be 14-year-old schmucks. So they want Arthur Conan Doyle to help solve a mystery? They have to get him when he's wandering around Edinburgh at 14. All right, so you need a general? Well, you grab young Alexander — he's going to do pretty well. Yeah, yeah. Actually, that's a very good point, because there is, from exactly that era, a young Olympian
01:05:00
Speaker
who's pulled into the future. And always, there's some 14- or 15-year-old male or female schmuck in junior high or high school from our time who gets into the future and is told: we only grab heroes. And then they say: oh — me?
01:05:21
Speaker
Anyway, so that'll be in the description below. I'll send you the links to those. That's awesome. Of course. We'll put whatever we need to in the show notes. All right. It has been a great pleasure talking to you.
Closing Remarks and Future Appearances
01:05:33
Speaker
Really appreciate you taking the time to be with us. Oh, it's been wonderful. Yeah. David Brin. Let's see now. One is in Ontario, Canada, and the other is where? I'm in New Brunswick on the east coast of Canada. Oh, I see. Yeah. Mark, where are you from? I'm from London, Ontario.
01:05:50
Speaker
Oh, no wonder you guys are so nice. Well, I mean, actually, I've got to say, we're rooting for you in America. Oh, yeah. Yeah. Look, we're facing the same problems, just on a smaller scale. California both envies and resembles Canada — similar populations, similar degrees of being leading economies and creative sites.
01:06:20
Speaker
California has to be part of this melange, this mess. It means we have more of a voice in this mess, but it also means that when we see America doing crazy things, we don't get to say: je ne sais pas, absolument fou.
01:06:41
Speaker
All right, guys, you are terrific. And let's do it again some other time. Yeah, absolutely. Can we have? Yeah, we'd love to have you back. Yes. Thank you. Looking forward to your next work. Thanks, David. Take care. Take care. Okay. Take care.
01:07:19
Speaker
You've been listening to Recreative, a podcast about creativity. Talking to creative people from every walk of life about the art that inspires them. And you're probably wondering, how can I support this podcast? I am wondering, Joe, how can I support this podcast? I mean, apart from being on it.
01:07:35
Speaker
There are no advertisements in this podcast. There are no tip jars. There's nothing about buying us a coffee or anything like that. But there is a way you can support us. And what is that? They can buy our books. And how do they find us? Re-creative.ca — don't forget the hyphen; there's a hyphen in there. Re-creative. I took your line, sorry. Well, because I stole your line. So, yes: re-creative.ca. Jinx! Oh yeah, you heard that. I stole your line again.
01:08:02
Speaker
As well, if you like what you've just heard, you can consider subscribing to the podcast. And leave a comment if you like it. Thanks for listening. Spread the word.