
Brain-Computer Interfaces, part B

S1 · CogNation

In this second part on brain-computer interfaces, discussion goes toward the more speculative. Joe and Rolf talk about Elon Musk's Neuralink project, which aims to fully connect brains with computers. Would this be possible? Could any system really read our thoughts in the way portrayed in science fiction? Should we even want this to happen? And most importantly, how does this affect the coming Robopocalypse?

Transcript

Introduction to Neuralink

00:00:03
Speaker
We wanted to talk a bit about Neuralink, which is Elon Musk's effort to develop the next generation of brain computer interfaces. It's a pretty interesting concept and there's really a lot to talk about.

Elon Musk's Vision and Ambitions

00:00:23
Speaker
Elon Musk goes big for sure. He does not play.
00:00:29
Speaker
He thinks of some fundamental problems, or some fundamental places we want to be in the future, and tries to take the big steps to move towards them. Yeah, I think it makes sense with this one, after we got so deep into the details of one specific approach, to maybe start at the end, think about where this could go, and then work backwards to how they're going to try to get there. Honestly, how they're trying to get there is not actually that interesting right now

Feasibility and Scientific Approach

00:00:59
Speaker
compared to where they want to go, because it is so far away, really much farther away than any of the other things they're thinking about, I believe, and I think I can justify that. I kind of love the approach of going for the end goal: really thinking about what things could look like in the end, getting the smartest people together, starting to think about it,
00:01:24
Speaker
and trying to figure out, at least for Elon Musk, what the engineering challenges are to getting some of this stuff done, and seeing how you can move it forward. Yes, it's a cool approach. And it's in some sense what basic science was supposed to be about, until we lost the thread entirely on our funding strategy for science research in this country. So now you have to have billionaires doing it, but it's cool that at least somebody's doing it.
00:01:49
Speaker
So how do we want to approach this? Because I guess we could describe what the end goal would be. And I think that's been articulated in different places and talk about how feasible that is and what sorts of things it might or might not lead to. So maybe you can talk about your understanding of what the end goal for something like Neuralink would be.

The Concept of Brain Communication

00:02:13
Speaker
My understanding of the end goal is essentially seamless communication between human beings across a completely interconnected network where all human beings are connected through technology directly. So your brain is connected to my brain, is connected to everybody else's brain, and we interface and communicate directly without even having to think
00:02:44
Speaker
in the way that we think now. So my raw, I don't even know what you'd even call it at that point, but it's not even a thought, but let's use the word thought for the moment. My thoughts would be directly communicated to you and your thoughts would be directly communicated to me at the speed of light over these technology networks. And what's the difference between this and The Matrix, or is this almost exactly The Matrix?

Fiction vs. Reality: Neuralink and The Matrix

00:03:11
Speaker
Well, no, it's different. It's different, I suppose. It's definitely different because it could be different. The Matrix is one potential embodiment of this, because it depends on what your interface is, right? If you're directly connected, if I have electrodes implanted in my brain that completely represent my neural activity, covering, for example, my cortex's inputs and outputs, so that I can receive information
00:03:38
Speaker
through stimulation of electrodes, and I can transmit information through recording from electrodes, and it's really representative of how my brain is actually working. Okay, so in this version, would you envision it to be the case that whatever system is doing the recording has
00:04:00
Speaker
a full representation of everything that's going on in your brain. So in other words, it can map the state of 86 billion neurons and has all of that data in the cable coming out of your head. And when it inputs, it can cause any neuron to fire, or make whatever change to your brain state it could possibly make.
00:04:29
Speaker
I think so. A full physical picture of everything. Let's go all the way. Let's go all the way. Why stop halfway? Go for it, right?

Imagination and Collective Consciousness

00:04:39
Speaker
Yeah. I mean, this technology, of course, does not exist today. We have no idea what it would be. But yeah, that's the idea. And then in that world, we're communicating directly. We could imagine ourselves in a holodeck-type situation where we're playing
00:04:58
Speaker
tennis together because we like to play tennis. I don't actually, but imagine we did. We could imagine ourselves playing a tennis game. We could do that or we could imagine ourselves conquering the Roman Empire or going to Mars or whatever it is. Unlimited imagination. Anything you can imagine, you can realize.
00:05:19
Speaker
The key thing that Musk pointed out is that it's probably the case at that level that it's nothing like what you're thinking about now. So it's not like imagination now. You wouldn't need to encode these thoughts in images or words or narratives. It could be even more direct in some level of free representation that we can't even describe. That's how direct that interaction would be. And at that level,
00:05:48
Speaker
The network itself has new emergent properties. There's a new emergent consciousness from those interconnected consciousnesses that is different than what we have today and cannot be imagined by our present minds and we have no idea what it could do. I think at that point it becomes really important to think about what would you want it

Challenges in Neural Communication

00:06:13
Speaker
to do? What would be good for it to do and how could it all go horribly wrong?
00:06:18
Speaker
When I'm thinking about this as a fully realized possibility, I think that the way that it's described is impossible. That even if you had the engineering capability to fully understand or fully record every single atom that's going on in your brain and you had, you know,
00:06:46
Speaker
enormously powerful signal analyzers, I would make the claim that you cannot use this representation to directly communicate with another person; that the signal coming out is not translatable to another human being; that you can't just link up two people. And here I'm going to use a quote.
00:07:15
Speaker
I think this is applicable, even though it probably wasn't intended for this: "Only Muggles talk of mind reading. The mind is not a book, to be opened at will and examined at leisure. Thoughts are not etched on the inside of skulls, to be perused by any invader." The reason I feel like this is a realistic issue is that the tricky part is not
00:07:41
Speaker
I mean, it is tricky; it's really difficult, the engineering part of understanding where every atom is. But the real tricky part here is understanding what experience that relates to, what the subjective experience for that person is. And this is almost where technology meets philosophy. In philosophical circles, this is referred to as the problem of intentionality: how do you understand what a
00:08:10
Speaker
particular idea, a particular thought, or a particular perception refers to in the external world. And there's something fundamental here: you've got a huge pile of neural signals, and without understanding the personal history, the sensory experiences that led a particular individual to attach meaning to these images and thoughts,
00:08:38
Speaker
it becomes impossible to interpret. We have a natural way of condensing our messy and subjective thoughts and communicating them with other people and that's language. We're not able to always express precisely what we mean. So I may have a pretty nuanced thought that comes into my head and I'm limited in bandwidth to how I can express it.
00:09:07
Speaker
through language. There are only certain words that I can use that I know that are going to be understood in a way. They'll be misinterpreted in some sort of way too. But that's the common tool that we have to understand each other.

Ethical Concerns and Free Will

00:09:22
Speaker
Now what Musk is talking about or what Musk would consider is that we're essentially talking about an entirely different language, that we're talking about the language of mental thought. And I don't think
00:09:36
Speaker
that this is something that can be directly translated from one person to another person. In all of these brain systems, one of the things that they depend on, and we talked about this before, is that you need some sort of input that lets you know what this is referring to. If you're gonna try and read someone's thoughts by recording from electrodes,
00:10:06
Speaker
you want to see what face that person is seeing at a particular time. So you feed it lots and lots of faces, the person tells you what they're perceiving, and then later on you can show them an image and, from the brain recording, guess what image they're looking at. You can even guess what image they're thinking of when you don't show them an image at all. That's how most of these decoding programs work (a rough sketch of that kind of pipeline follows below). What Elon Musk is eventually thinking of, far into the future,
00:10:35
Speaker
is directly reading neural signals and trying to understand the thoughts being conveyed, without any language structure surrounding them.
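For reference, here is a minimal sketch of the supervised decoding setup described a moment ago, the kind of pipeline that Musk's vision would have to go far beyond. It assumes you already have per-trial neural feature vectors paired with labels for the face the person was viewing; the nearest-centroid classifier is a stand-in for whatever model a real study would use, and all of the data here is made up.

```python
import numpy as np

# Hypothetical data: one row of neural features per trial (electrode/voxel activity),
# and the label of the face image the subject was viewing on that trial.
rng = np.random.default_rng(0)
n_trials, n_features, n_faces = 200, 64, 4
X_train = rng.normal(size=(n_trials, n_features))
y_train = rng.integers(0, n_faces, size=n_trials)

# "Training": learn the average neural pattern evoked by each face.
centroids = np.vstack([X_train[y_train == k].mean(axis=0) for k in range(n_faces)])

def decode(brain_activity):
    """Guess which face the subject is seeing (or imagining) from activity alone."""
    dists = np.linalg.norm(centroids - brain_activity, axis=1)
    return int(np.argmin(dists))  # index of the closest learned pattern

# Later, with no image on the screen, decode from a new recording:
new_trial = rng.normal(size=n_features)
print("decoded face:", decode(new_trial))
```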
00:10:58
Speaker
I think that, in defense of that way of thinking about the world, in this networked neural-lace type environment, where everyone is connected into an enormous network of brains, you're also experiencing everybody else's sensations. If you made a system where you were born into this, with all of your past experiences fed into it, so that
00:11:25
Speaker
your experience included the experience of other people as you were connected to this network. That makes some sort of sense. Yeah, I could see that. But then you're talking about biological ethics, where you're moving in the territory of needing to mesh computers and brains together from birth. I think that
00:11:50
Speaker
The issues of ethics come into play right away. You don't have to get very deep into this at all. As soon as you get two brains connected to each other, immediately huge ethical questions come up because your individual free will is immediately compromised in a way that is at least quantitatively different.
00:12:19
Speaker
In the sense that you'd be taking off any sort of filter that you've got, right? You'd just be flooding the other person with your experiences and thoughts with no stop, right? And one of the things that occurred to me as we were talking at the beginning of this section was: what would and wouldn't work about this in the medium term? So.
00:12:46
Speaker
One of the things that's really hard right now is to stimulate a particular thought corresponding to, say, a word. So if I wanted to make you think the word "computer" with an electrical stimulation, there's no way I can do that now. I don't even have a theoretical sense of how to do that. There's no
00:13:11
Speaker
"computer" neuron that I can stimulate in your brain that will reliably get you to think "computer" or visualize a computer. I don't even know what that would be, because you get into the idea of a prototype versus specific exemplars, et cetera, et cetera. It gets super complicated, super fast. We have no idea even how to tractably conceptualize the problem, much less attack it.
00:13:35
Speaker
I mean, you have some sense that there's a speech-receptive area; maybe put an electrode in there and see what you can get. But it's way harder than moving a cursor on a screen. So the input problem is already way harder than the output problem, just to get started. Because you have the output, which is the classic brain-computer interface part of basically moving a cursor (sketched below).
00:13:59
Speaker
Then you've got the input part.
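Before the conversation turns to the input side, here is a minimal sketch of what that output side amounts to: fitting a linear map from recorded firing rates to intended cursor velocity. The dimensions and data are invented for illustration, and real systems use more elaborate decoders (Kalman filters and the like).

```python
import numpy as np

# Hypothetical calibration data: firing rates of recorded neurons at each time step,
# paired with the cursor velocity the user was intending (or being cued) to produce.
rng = np.random.default_rng(1)
n_samples, n_neurons = 500, 96
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
true_map = rng.normal(size=(n_neurons, 2))            # unknown "tuning" we try to recover
velocity = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit a least-squares linear decoder: velocity ~= rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At run time, turn a fresh vector of firing rates into a cursor movement.
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
vx, vy = new_rates @ W
print(f"move cursor by ({vx:.2f}, {vy:.2f})")
```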

Individual Brain Complexity

00:14:00
Speaker
As far as we've gotten so far is these visual cortical implants that we talked about in the previous episode, where you have an array of light-sensitive elements, like a camera, and that feeds into stimulating different parts of visual cortex, which has that nice spatial arrangement. And you can get vague shadows or outlines of images projected onto V1.
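A toy sketch of that camera-to-V1 idea: downsample a frame onto a small electrode grid laid out retinotopically and turn patch brightness into stimulation amplitude. The grid size, threshold, and current range are invented; real implants face far messier mappings than this.

```python
import numpy as np

def frame_to_stimulation(frame, grid=(10, 10), max_current_uA=80.0, threshold=0.35):
    """Map a grayscale camera frame (values 0..1) onto a retinotopic electrode grid.

    Each electrode gets a stimulation amplitude proportional to the average
    brightness of the image patch it covers; dim patches are left off.
    """
    h, w = frame.shape
    gh, gw = grid
    amplitudes = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            patch = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            brightness = patch.mean()
            if brightness > threshold:
                amplitudes[i, j] = brightness * max_current_uA
    return amplitudes  # one value per electrode, what would be sent to the stimulator

# Hypothetical frame: a bright vertical bar on a dark background.
frame = np.zeros((120, 160))
frame[:, 70:90] = 1.0
print(frame_to_stimulation(frame).round(1))
```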
00:14:26
Speaker
Once you get beyond V1, anything more complicated than that, the input problem starts to be pretty complex. Yeah, and I think a lot of that is just because individual brains are so idiosyncratic that at the level of the eye when we're seeing something directly on our retina, it's fairly clear what the pattern of light is that's going to activate a certain set of neurons. But once you move into the brain a little bit, we all have
00:14:53
Speaker
different ways of this being implemented. It just gets so mushy and so complicated a couple of layers back that what causes a given response for one person may be totally different than for another person. That said, the thought that I had was: all right, what might work pretty well? Think about stimulating a deeper brain area.
00:15:21
Speaker
We already know, for example, that there are treatments for very, very severe depression where stimulating certain parts of the brain can help. Imagine if you hooked someone up to your brain so that you're now emotionally connected to that person. And just imagine taking one axis of emotion, like activated versus not activated:
00:15:48
Speaker
sympathetic versus parasympathetic activation, this really gross up-and-down. You could get hormones activating together, or neurotransmitter systems activating together; say, for example, adrenaline (epinephrine),
00:16:12
Speaker
stimulating the release of massive amounts of adrenaline. If you got really scared all of a sudden because you saw a tiger, I'm connected to you. I'm over here in San Francisco, Bay Area, El Cerrito. We're linked. I get scared by this tiger and there's no tiger here. But you can imagine that would work. That would totally work. I think there's something crucial. I think you're onto something here. I think there's something crucial about
00:16:41
Speaker
constructing shared experiences that you could then reactivate again. So if I was thinking of the kinds of things that might work with a setup like this, one of the sci-fi ideas that might work as a kind of mind melding, I guess here's the example that I thought about. If we share no experiences at all, so in 10,000 AD, I
00:17:10
Speaker
grew up on Mars and you grew up on Earth, we didn't really share a whole lot of experiences. It's going to be difficult to have this neural link between us. We don't even have a common language. We don't have very much that's going to be translatable between us. You know, at the other extreme, if we have the exact identical experience, so from birth being plugged into a virtual reality machine where the same exact things happen to us,
00:17:38
Speaker
then our experiences are going to be fairly translatable. If you've seen everything that I have, then the same kind of input is going to cause the same experience in your brain. You could think of that on just a single experience level. So imagine just for example, you know, we've got our
00:18:00
Speaker
brain recorder recording; you and I are walking through the woods, and we see a deer in the forest, and it comes right up next to us, and it's sort of this emotional moment. Now we've got a reading from your brain and a reading from my brain about what the subjective state of that experience is like. We could now translate between us by sharing that kind of experience. And you have something where, you know,
00:18:30
Speaker
when that sort of pattern gets activated in my mind again, it gets translated and then causes the kind of activation that you had for that same kind of experience. So in that sense, I think a lot of this kind of technology depends on how translatable an experience is between one person and another. Just having
00:18:56
Speaker
all these electrodes isn't going to create a language that allows us to communicate. We still wouldn't have that language unless we knew what was going on in the outside world while these experiences were happening. I think that's right.

Practical Applications and Privacy

00:19:14
Speaker
There's some question, I think this is what you're driving at, as to what the limit of connectedness in these systems could be.
00:19:26
Speaker
That begs the question in my mind right away: what would you want the limit to be? Because I believe we would never actually reach the limits of what's even possible, because it would never be desirable to do so, and therefore we would not expend the energy; there would be no economic incentive to do so. Because at the end of the day, I mean,
00:19:53
Speaker
One of the things that comes up in this is the ethical questions of connectedness, where in the environment that you're talking about, if you wanted to hurt me, all you would have to do is hurt yourself, and I would feel the same pain that you're feeling. So it's very direct in the way that you lose any kind of individual freedom or control. It's literally just non-existent. Each one of us would be
00:20:24
Speaker
co-creating the other's experiences and we would entirely lose any sense of what we may imagine is our own individual freedom and control. Now, there's a whole other series of pods about whether we actually have any to begin with, but we certainly wouldn't have any in this environment. I think then the question becomes, what do you want to do with this thing? It gets back to the inputs and outputs. You can hook your brain up to a computer,
00:20:53
Speaker
Right now we can do this already: if you implant some electrodes into your brain, they can read out your neural activity, and you can attach that to an effector in the world and it can do something. You can have inputs, for example the visual cortex inputs we talked about, and you can imagine having inputs into the motor cortex as well that could move your body parts. Ultimately, it seems possible, though it's going to be difficult, that you could essentially stimulate
00:21:24
Speaker
more complex visual patterns, and also semantic patterns. So in other words, you could talk to each other through this thing. You could imagine any kind of visual, auditory, and semantic communication potentially being possible, and then we could debate what the limits are. I imagine there'd be the question of whether it would be a complex language of thought, or whether it's something that needs to be mediated by an existing language.
00:21:53
Speaker
Correct, correct. So then the question becomes, what about that is in any way useful and or desirable? What would you want to do with this? We know that if you're paralyzed, for example, being able to control your mech with your brain is awesome. Your Iron Man suit. Your Iron Man suit. Exactly. That's totally there. You need that right away. So we're totally doing that. That seems like an easy
00:22:24
Speaker
An easy ethical question, yeah. The Iron Man suit is totally there. Everybody wants that; even if you're not paralyzed, you want that. If you're not paralyzed, there are probably easier ways to control it in the early stages, but ultimately the brain interface is the way to go. So that's totally there. But on the communications side, what do we want there? I'm trying to think of a use case where I actually want that. Yeah, I think
00:22:53
Speaker
it's so nice to be able to have a filter as it is: to be able to control the stream of information that's going on in your brain and how it gets presented to other people. To have it totally non-private, to just have anything that pops into your head become public property, seems undesirable in every way.
00:23:22
Speaker
It really would destroy the sense of the individual as we think of it now, and to the extent that we value the individual, that would be completely destroyed. I noticed there was a quote by Elon Musk that addressed this a little bit; maybe he has updated thoughts on it now. The question was: would it be possible to read somebody else's thoughts with this if they didn't want you to? And Elon Musk said,
00:23:51
Speaker
people won't be able to read your thoughts; you'd have to will it. If you don't will it, it doesn't happen, just like if you don't will your mouth to talk, it doesn't. Right, so you would build the filters in as part of the system. Someone could hack it. I mean, right, as soon as you start to have things like artificial filters, then you start thinking about workarounds and hacks and all that kind of stuff. Yeah. And
00:24:16
Speaker
I just feel like that's certainly a real issue, and something that wouldn't automatically be addressed. I don't think just saying that you would have to will it is anything more than a cop-out; we don't even know what that means. Yeah, we don't even know what that means. That doesn't make any sense right now. So, you know, in the sort of framework of Elon Musk's approach of having this really big problem that you're trying to solve,

Feasibility of Short-term Gains

00:24:45
Speaker
that does something good in the long run, and then has intermediate, sort of artifactual, consequences that are positive for business in the short run, his long-run goal here doesn't really make sense to me. There are some interesting short-run potential big wins, like the mech suit, the Iron Man suit.
00:25:10
Speaker
I'm interested to think a little bit more about what some of these input wins might be; those were the output wins, the controlling wins. One of the input wins would obviously be for deaf people and blind people: help them hear, help them see. Those seem like good wins.
00:25:27
Speaker
Also the treatment of different mental disorders; all of this stuff is already in the works, with varying degrees of success. If you can treat chronic depression, if you can treat schizophrenia, if you can treat Alzheimer's disease, whatever it may be, through neural stimulation, those seem like promising directions as well. So it would seem like a lot of the wins on the input side
00:25:56
Speaker
are for sensory systems, things that you could simulate through virtual reality. So you're talking about a visual input: well, yeah, maybe you could input an image into the brain, but the easiest and best way to input an actual image is to just show an actual image, because our perceptual systems are built around the idea that what we're trying to do is understand the external world.
00:26:26
Speaker
That's where the short-term win is: for people whose eyes are broken. Right, and you can bypass that, and that seems like a realistic, conceivable short-term win. Yeah, that feels like a win. I mean, cochlear implants already exist; this exists for deafness. The cochlea is basically the entry point to the auditory nervous system. And so you have inputs that have a receiver in the outside world that
00:26:56
Speaker
is taking in sound signals and turning them into electrical signals, transmitting them through electrodes into tissue, where they become electrochemical signals transmitted on through the brain. People are hearing and using that sound information in their everyday lives today; that exists today. Which is cool.
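A rough sketch of the kind of processing a cochlear implant's external processor does, under the standard strategy of splitting sound into frequency bands and driving one electrode per band in proportion to that band's energy. The channel count and band edges here are made up, and real devices add compression, pulse timing, and much more.

```python
import numpy as np

def sound_to_channel_levels(audio, fs, n_channels=12, f_lo=200.0, f_hi=8000.0):
    """Split a short audio frame into log-spaced frequency bands and return one
    energy value per band, roughly what gets mapped onto the electrode array."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    levels = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return levels / (levels.max() + 1e-12)  # normalized per-electrode drive levels

# Hypothetical input: 20 ms of a 1 kHz tone at a 16 kHz sample rate.
fs = 16000
t = np.arange(int(0.02 * fs)) / fs
frame = np.sin(2 * np.pi * 1000 * t)
print(sound_to_channel_levels(frame, fs).round(2))
```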
00:27:26
Speaker
One thing there's maybe some misconception about, and that I think would not be realistic, is the idea of quick learning. So if we had an interface between computers and our brains, the idea is: oh, I'm going to France next month, so I'm just going to quickly download French. Right. And now I can speak another language. Or why go to school for technical training? We could just program it into our minds.
00:27:51
Speaker
And this is where I think the answer would be no. It's a misunderstanding to think we could do something like this, because in order to learn a language, what you would really need to do is fundamentally reorganize neural pathways at a very individual level. So it's not, you know,
00:28:18
Speaker
It's not the same as just changing the input or feeding something in. We're not talking about an interface between a computer and a computer where you put some memory from one computer into another computer. The memory systems of the brain are fundamentally different than the memory systems of a computer. You're talking about connections between neurons whose strength is being increased or decreased in a very distributed way. So we've got
00:28:46
Speaker
billions of neurons, and any particular memory is going to be reflected in the strengths of a huge network of different neural connections, which is fundamentally different from how we think of memory in computers, where a particular memory is stored at a particular addressable location.
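To make that contrast concrete, here is a tiny Hopfield-style sketch of distributed storage: every memory is superimposed on one shared weight matrix rather than sitting at its own address, so adding memories nudges every connection, and past capacity it degrades recall of the old ones. The sizes and counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units = 100

def store(patterns):
    """Superimpose all memories into one weight matrix (Hebbian outer products)."""
    W = np.zeros((n_units, n_units))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, steps=10):
    """Iteratively settle from a noisy cue toward the nearest stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

memories = [rng.choice([-1, 1], size=n_units) for _ in range(3)]
W = store(memories)

noisy = memories[0].copy()
noisy[:20] *= -1                      # corrupt 20% of the cue
print("overlap with original:", recall(W, noisy) @ memories[0] / n_units)

# Adding more memories rewrites the same shared weights; nothing lives at its own address.
W_more = store(memories + [rng.choice([-1, 1], size=n_units) for _ in range(30)])
print("overlap after overloading:", recall(W_more, noisy) @ memories[0] / n_units)
```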
00:29:08
Speaker
One confusion maybe is from cognitive psychology where we tend to think of memory as existing somewhat like this. We have a simplified model of the mind that talks about memory as being stored and then retrieved. But on a neural level, it just doesn't work like that. In order to implant a memory in the brain, you need to change the strength of neural connections. Now you're talking about not only being able to read from every
00:29:38
Speaker
neuron in the brain, but being able to alter the connections between every single neuron in the brain, which is a totally different thing. And that gets into some really scary ethical issues, because if you even could conceivably do something like that, you would be changing the structure of your brain without any sort of interaction with it. So as you're learning French, you may inadvertently change a lot of other associations that you make.

Self-identity and Information Overload

00:30:05
Speaker
You can't just pick up French without changing all kinds of other parts of your brain inadvertently and fundamentally changing the way that you think. That's true. Well, that's true in one sense and another sense. It's not so different from what we're doing today. So I agree with you that learning French is not just like downloading the French dictionary into like a piece of your hard drive in your brain and then being able to suddenly utilize it.
00:30:35
Speaker
What we have today is, for example: Starkey, a hearing aid manufacturer, recently released a product they call the Livio AI, which they describe as the first artificial intelligence hearing aid. And basically, it's like the Babel fish. The Babel fish, yeah. What a prescient technology. Yeah, right? Exactly. Stick this thing in your ear
00:31:02
Speaker
and you're hearing it in your own language when people are talking in another language: the person is speaking French and you're hearing it in English. And that's cool, but it's fundamentally different than speaking French. Yeah, that seems like the way you want to implement these sorts of things: have it done externally, in the same way that you can just talk into your phone in English and have French come out.
00:31:28
Speaker
I think it's a great example of exactly what you're talking about, which is it is not the same as having direct experience of someone's French thought. It's not even as direct as speaking to them in French. It's not the same. It's a translation. And no matter how good the translation is, it's always going to be imperfect. It's not exactly right.
00:31:53
Speaker
that's always happening whenever we talk, but it's just more obvious when we're talking about speaking in different languages, because you can pinpoint some of the discrepancies or errors. But that's always happening whenever we talk. We're not really communicating exactly what we mean when we talk to each other. So you're hearing the words that I'm saying, and they mean something to you based on your experience, and they mean something to me based on my experience, and they're not exactly the same. I mean, this ties into all kinds of philosophical
00:32:22
Speaker
issues too. I mean, the concept of how we can communicate one idea that we have this idea of having a thought, it being translated into a language and then being transmitted and then being absorbed by the other person and being translated into another thought. This is something that philosophers of language have struggled with pretty heavily
00:32:50
Speaker
For sure, all through the 20th century. I mean, that's a big question, how these things can be translated; Wittgenstein in particular. I think it's a good example of where the technology is actually going to teach us something that we didn't know before about ourselves, as we approach the limits. Yeah, there's nothing like technology actually getting into philosophical territory like that, when you can see something actually happening that seemed like it was only theoretical. Yeah, you can test it.
00:33:19
Speaker
I said that I thought you were partly on the right track. Then I went off. When you talk about changing all the connections globally and having side effects, it's just like any kind of drug. For example, pharmaceutical drugs that you take for depression, for example, Prozac. It has these serotonin
00:33:48
Speaker
enhancing aspects; serotonin does a lot of stuff. It can help make you less depressed, but it also has lots of other side effects, like sleep impacts. Side effects are something that we're already dealing with whenever we try to do something to our brains. Well, and you can consciously choose to be on Prozac and be aware of some of the kinds of changes that it makes to your brain.
00:34:17
Speaker
But if you're talking about limiting cases where you get to the point where you say, okay, who has time to learn French or who has time to learn all the interesting stuff in the world, just build that into me. And all of those neural connections get changed. It's the sort of swiftness of it and the extent of it. In those cases, that would just be
00:34:44
Speaker
absolutely the biggest problem. We're totally running this experiment on ourselves right now as a global society. I mean, we're producing so much access to information and so much stimulation at all times, 24/7. So you think it's happening? It's absolutely happening. Yeah. We have no idea what it's going to do to us. We have no idea what it has done to us. You know,
00:35:11
Speaker
We're certainly not the same species that we were 20 years ago; we've become globally connected. The whole world: societies that had no idea about other parts of the world, and no reason to care, 50 years ago. It's really almost not possible to live that way anymore. I would be hesitant to call that the same thing, though, because it's not the same thing.
00:35:39
Speaker
It's definitely quantitatively different. But in order to absorb all the information out in the world, you still have to rely on, you know, the traditional ways your brain works to process information, and it has to happen in real time. You're working on a greater amount of sensory input, but that's different than tinkering with actual connections in the brain without any corresponding input.
00:36:11
Speaker
When it comes to the question of free will though, that's where I start to wonder how different it really is. How much control do you really have over these stimuli? So you can say, well, I can turn on the TV or I don't have to turn on the TV. I can Google this thing or I don't have to Google it. But really, do you have that? I mean, if you want your body to exist in the future, you sort of don't have that choice.
00:36:39
Speaker
There are only so many ways to get by in the society that we live in. You have to take in these inputs to have a job, to make money, to get by, to get from A to B. And so in some ways, we're building this connected, super information rich, super information dense network system into our everyday lives. And we don't have a choice. I feel like I've already
00:37:09
Speaker
made peace, I think, with the idea that I don't have free will in the sense that maybe most people come to appreciate the term. I would be more concerned with the rapid changing of self-identity. I'd be comfortable saying that strings are being pulled, but they've always been pulled; we don't always directly
00:37:39
Speaker
know the causes of our actions. But the thing that disturbs me is thinking about a really rapid change in my subjective experience, so fast that you could rapidly change into a totally different person, such that your experience is no longer continuous. Yep. Yeah. I want to think about the question of
00:38:09
Speaker
What would be good, right?

Elon Musk's Motivations

00:38:12
Speaker
It's always such a struggle. And I never really feel super comfortable that I'm making progress when I go down this road. But it seems ever more and more important every time we engage one of these types of conversations about the future of technologies. What would be good? What would we want it to do? And can we push things in that direction? Yeah, I think that I feel like that
00:38:39
Speaker
That's what I appreciate about Elon Musk, actually: he seems to be thinking in this way, considering how much we can influence larger issues of how the future is going to work. Do you think he's really doing it because he cares, or do you think he's doing it because he knows it's good PR?
00:39:04
Speaker
I mean, in the Silicon Valley sense, right? Silicon Valley always likes to talk about how this is helping people. And at the end of the day, it's building a better world through cloud-based solutions to information network gathering. Right. Building a better world through more rapid advertising. Through IT protocols, right. Yeah.
00:39:30
Speaker
I don't want to read too much into the psychology of Elon Musk, I guess. But I do feel as though he's got billions of dollars already. He probably spent a good amount of time just trying to get lots of money. Maybe getting more money isn't as meaningful a goal as thinking about some of these bigger issues.
00:39:59
Speaker
putting his money to work in the same way that Bill Gates has so many philanthropic pursuits. I think Elon Musk's philanthropic pursuit is trying to nudge the future in a way that seems positive. I'm not sure. I don't think that. There's something a little sad that
00:40:27
Speaker
the people who get to decide which direction our future goes are the mega-billionaires, who are the only ones with the ability to engage in that as kind of a leisure pursuit. Great. Do you think, and this is a question of free will, let's get back to the free will thing, do you think he really has control of that? I mean, in other words, I guess my question was, is he doing it
00:40:56
Speaker
for altruistic reasons or not. And then the question is, is that even the right question? Is that even a thing? In other words, he's doing it because he wants to do it. Now, why does he want to do it? What does it even mean to ask the question? Yeah, it's ultimately hard to say. It's the paradox of altruism, I guess: you can never
00:41:23
Speaker
necessarily attribute motivation to certain acts. And certainly his explicit goal is, in the meantime, to support all these activities with profitable businesses that have artifacts along the way that are themselves profitable. And of course, that's the part of the journey that he's going to be experiencing and benefiting from. Assuming we don't get to the singularity before he dies. Oh, yeah. Well, maybe he gets some benefit of the doubt for
00:41:52
Speaker
trying to aim things towards reduced carbon emissions and things that seem generally good, where it doesn't seem like it would be the way to the quickest profit. Right. I give him points for making a really cool car. I mean, that's something you can say for sure right away: he made a cool car. That's there for sure. And he's making rockets. Yeah. And digging holes in the ground.
00:42:22
Speaker
Yeah, I believe in holes in the ground also. I think underground trains, or hyperloops, or however you want to get around; moving around underground makes sense for sure. I feel like the waters are a little muddier around the Neuralink stuff, though. Yeah, I mean, the long-term goal seems wrong to me. The short-term wins: I feel like there are some pretty clear short-term wins, like
00:42:51
Speaker
helping people who are paralyzed move, helping people who are blind see, helping people who are severely mentally distressed in different ways. All of that makes sense to me, and they're all good, rich areas of research and development. Well, there's no technology that can't be used for some kind of evil, though. And this one seems like a particularly dystopia-laden
00:43:22
Speaker
technology, right? Well, yeah, let's get into it. What could be worse than having brains controlled?

Control, Access, and Security Risks

00:43:29
Speaker
Well, as soon as this is put in place, as soon as you get the input-output loop hooked up to the brain, so you've got electrodes sending impulses in and electrodes reading impulses out, now you can control that person. You can do stuff to them. Even the most rudimentary stuff could be pretty damaging,
00:43:50
Speaker
and total control. There's no possible greater control that you could have. Yeah. And who's controlling that network? Who's the network engineer? Right now, Neuralink is hiring IT team leads, IT support specialists, you know, process engineers. These could be our overlords of the future. The IT team lead at Neuralink, whose posting reads:
00:44:17
Speaker
"previous supervisor or lead experience required, while still being extremely comfortable with tactical day-to-day." "Manage administrator access to all systems while improving efficiencies and redundancies." Manage administrator access to all systems! When those systems are in your brain, this jackass IT lead, I mean, these are people we've worked with. Think about the IT people you've worked with. Do you want them
00:44:48
Speaker
managing administrator access to all your systems? This is where the dystopian future goes: towards admin access, who gets admin access to your brain. It seems like there's an episode of Black Mirror in there somewhere. Right, right, right. Who's controlling who gets in and out? And then who's that weird, weird dude in the truck, you know, controlling the stalker dude, right?
00:45:17
Speaker
It's not even a bloodthirsty dictator. It's just some random IT guy who happens to know Okta, Jamf, AD, AWS, and Juniper gear, and is comfortable using Slack and Zoom. It's just some random dude.
00:45:42
Speaker
Yeah, that's where it gets scary. I mean, when Elon Musk says, oh, it only happens when you will it, that's all dependent on system-level access controls, right? So it's like, all right, Rolf gets will-level access to my brain; some other person only gets, you know, knock-first access or whatever. Yeah, I don't see that working out well at all.
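Purely to make that worry concrete, here is a toy sketch of what "will-level access controls" might reduce to: an ordinary permission check, with an admin override that ignores the owner entirely. Every name and level here is invented.

```python
from enum import Enum

class Access(Enum):
    NONE = 0
    KNOCK_FIRST = 1   # each request must be explicitly approved
    WILLED = 2        # flows only when the owner actively "wills" it
    ADMIN = 3         # the scary one: bypasses the owner entirely

# Hypothetical ACL for one person's brain interface.
acl = {"rolf": Access.WILLED, "random_stranger": Access.NONE, "it_lead": Access.ADMIN}

def may_read(requester: str, owner_wills_it: bool, owner_approved_knock: bool = False) -> bool:
    """Decide whether a thought-read request goes through."""
    level = acl.get(requester, Access.NONE)
    if level is Access.ADMIN:
        return True                      # admin override ignores the owner's will
    if level is Access.WILLED:
        return owner_wills_it
    if level is Access.KNOCK_FIRST:
        return owner_wills_it and owner_approved_knock
    return False

print(may_read("rolf", owner_wills_it=True))        # True
print(may_read("it_lead", owner_wills_it=False))    # True, which is the dystopian part
```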
00:46:09
Speaker
But I think this one so obviously goes wrong, so fast, that it's almost not even as fun to think about as the robots taking over. It does seem like, Elon Musk, how do you do electric cars right, and do rockets right, and then just also do brains right? I feel like you can't just happen to get all of the complex issues around that right at the same time. I
00:46:40
Speaker
have that sense that when engineers start getting interested in neuroscience, things generally go badly. Remember the Redwood Institute thing? What was the Redwood Institute? Remember the guy Hawkins who wrote On Intelligence? I don't know if I do. You should read the book. Jeff Hawkins, some rich dude, engineering type,
00:47:10
Speaker
wrote this book, On Intelligence, about how understanding the brain will lead to the creation of truly intelligent machines, back in 2004. And he's, oh, okay. He created the

Technological Advances: Expectations vs. Reality

00:47:20
Speaker
Palm Pilot. He's the man who created the Palm Pilot, the Treo smartphone, and other handheld devices. I had a Palm Pilot; it got absolutely destroyed by the iPhone. He now sounds ready to revolutionize both neuroscience and computing with one stroke: a new understanding of intelligence itself. And he wrote this book in 2004.
00:47:39
Speaker
I feel like it's maybe not a one-stroke thing. Yeah, that's what I'm saying. He wrote this book in 2004 and started the Redwood Institute; I mean, some really smart people. But anyway, the point was using this new understanding of how the brain works to basically build smarter machines. It kind of gets into this brain-computer interface thing and all that stuff. At the end of the day, it's an engineering-type guy who wrote a cool book.
00:48:08
Speaker
It's a little more accessible when you read their thoughts on it. If you're coming at it as someone who doesn't know much about neuroscience, it's a little more accessible, so when they explain it, it makes more sense. And then, because they're engineers, everybody believes that they can do anything. And it usually just doesn't go anywhere at all, because it's just way more complicated than any of the problems they've been thinking about.
00:48:30
Speaker
It's way more fucking complicated than shooting a rocket and going to Mars. Way more complicated. So much more complicated. I feel like one of the pathways that people might go down is to say things like, well, look at how much faster computers have gotten in the past 50 years or so. We can't even imagine how
00:48:57
Speaker
how much the bandwidth of our brain computer interfaces are going to increase in the next 50 years. We couldn't imagine the iPhone 20 years ago. We couldn't imagine all of the technological marvels that we have now. Therefore, in 20 years, our brain computer interfaces are going to be so advanced we couldn't
00:49:26
Speaker
even begin to think about how. And I think the problem with that is that there are plenty of examples that you can use about technology advancing in unexpected ways, but there are also plenty of examples of technology that never appeared. We don't have anti-gravity boots right now. Maybe we understand a little bit more about why we don't have them, but we don't have faster than light engines right now. There are lots of technologies that we don't have, and just because
00:49:56
Speaker
we've seen technological advances in the past doesn't mean that this is inevitable, that it's an inevitable sweep, right? No, absolutely, absolutely. That was a great conversation, Rolf. I really enjoyed it. We got into some different topics related to how our brains can be hooked up to machines, what the potential future of that is, why we're skeptical about

Conclusion on Brain-Machine Interfaces

00:50:23
Speaker
The ultimate vision of Neuralink,
00:50:26
Speaker
but are excited about some of the potential short-term and intermediate-term wins we can get by building brain-computer interfaces that help people. Obviously there are plenty more things to discuss, and things that people might be thinking of. But thanks for listening.