
Brain-Computer Interfaces, part A

S1 · CogNation

In part A of the episode, Joe and Rolf base their discussion around "Rapid calibration of an intracortical brain–computer interface for people with tetraplegia" by Brandman et al., thinking beyond the hype to get a realistic picture of how things work in the field. It's not all like The Matrix (yet).

Transcript

Introduction to Cognation Podcast

00:00:06
Speaker
This is Cognation, the podcast about cognitive psychology, neuroscience, philosophy, technology, the future of the human experience and other stuff we like. It's hosted by me, Rolf Nelson. And me, Joe Hardy. Welcome to the show.

Motivations Behind the Podcast

00:00:25
Speaker
So Joe, why did we start this podcast? Rolf, I mean, I think we both like to listen to podcasts and we think that we have
00:00:35
Speaker
some interesting things to say about psychology and cognitive science and thought maybe some other people might want to listen to it as well. So we figured, hey, let's get together and record a podcast. And if other people think it's cool and want to listen, that's great. And if they don't, we'll have fun talking about interesting topics and the future of brain science and interesting things that are happening in neuroscience and psychology today and how they might relate to some of the big questions
00:01:04
Speaker
that we've contemplated over the years. Yeah, so what kind of things are we interested in? Maybe that's worth talking about a little bit. Sure, yeah.

Hosts' Background in Cognitive Psychology

00:01:15
Speaker
So Rolf and I, for those of you in the audience, were graduate students together at the University of California at Berkeley in the late 90s and early 2000s and met there in the cognitive psychology program. And at that time, it was
00:01:34
Speaker
a group of us there who were interested in a lot of different things, related not just to the sort of nitty-gritty details of the specific research projects we were working on, but also to what the implications of understanding the brain might be for society and for the future of humanity. And I think that we've continued that conversation over the years, on topics where,

Interest in AI and Technology's Societal Impact

00:02:02
Speaker
you know, like artificial intelligence, for example, how that relates to how we understand the brain and how that would impact the development of technology, how that technology development might impact the future of the brain and society. Obviously, we're always interested in the robo-apocalypse. Of course. Yeah, and I mean, we have lots of different interests, and I think we can assemble all of these interests together in a
00:02:32
Speaker
maybe a novel kind of a way. We both certainly have an interest in technology and how technology can affect what it's like to be human. One of the courses that I love teaching the most is a course on human consciousness. And in order to understand consciousness, you need to understand all different sorts of fields, the way that
00:02:59
Speaker
human consciousness is affected by technology is certainly one of them. Yeah, absolutely. And we also like to play video games. We like video games. So it's always a good thing to think about how video games and kind of interactions that you have with computers that are video game like might change the brain or how that might be a model for virtual worlds that we might exist in in the future.
00:03:26
Speaker
how our interactions with computers and multimedia environments like video games might change us, how that might affect society moving forward.

Exploring Visual Illusions

00:03:36
Speaker
Of course, we always like to talk about visual illusions, although I wonder how successful we'll be at bringing visual illusions into a podcast. But it's a shared interest and I think it's cool. I mean, the reason why I like visual illusions, or illusions of any type,
00:03:54
Speaker
is that it really shows us something important about our experience of the world, which is that we're not experiencing the world directly, but rather our experience of the world is filtered through our senses. So our biochemistry and electrophysiology are impacting the way that we experience the world. And that's where a lot of our expertise lies, I think: in that interface between the world
00:04:23
Speaker
and the perceptual processes of the brain, so how information gets in.

Psychology's Influence on Economics and Politics

00:04:29
Speaker
And crucially, like you say, how things are interpreted and sometimes misinterpreted. So we often like visual illusions because they show us how the visual system works. And what is constantly coming out
00:04:44
Speaker
are different ways in which the brain processes information incorrectly, or it makes all kinds of mistakes, and these kinds of mistakes are illuminating. We might think of them as limitations that our brains aren't perfect, but it also highlights the unique way that our brain interfaces with the world. And it tells us a lot about how things work every day around us.
00:05:12
Speaker
The application of cognitive biases in economics is a huge field. So much of economic theory now has to take into account the way that people actually process information and the mistakes that people make. So I think psychology has had impacts in a lot of domains and areas that may not be obvious to everyone, but are, I think, fun to talk about.
00:05:35
Speaker
And we could mention, we don't have to mention, I guess, because I don't really want to get into political things too much. But of course, there are some political implications to the way that human thinking and decision making works, that maybe our decisions at the polls aren't necessarily based on the kind of logical thinking that we believe they are. And the kind of impact that technology has, such as Facebook,
00:06:03
Speaker
presenting news to us, doesn't really operate in the way that we think it does. We think we're taking in information in an unbiased sort of way. And there's certainly been a lot recently that indicates that technology can manipulate us. No question about that. And, you know, the old
00:06:25
Speaker
economic thinking of, like, rational actors, and also political science sort of thinking where people are behaving in a rational manner and that's how you end up with the results that you end up with, is obviously wrong. And at this point it's so obviously wrong that it's almost not even worth talking about. But I think we should keep our discussions of that to either something that is illuminating about the brain and psychology or somehow leads to the robo-apocalypse,
00:06:55
Speaker
because I can't in good conscience bring another podcast into the world that talks about Trump. There's just so many of them. I'm raged out. There's so many of them. I listen to a lot of them and it's amazing. There's a million of them. I'm not even going to comment on my comments on that. I was going to say something about tribalism and that, but I'm just going to stop myself.
00:07:26
Speaker
Yeah. Well, I think we can start thinking about today's episode.

Introduction to Brain-Computer Interfaces (BCIs)

00:07:32
Speaker
Yes. On today's episode, we're going to talk about brain computer interfaces. The inspiration for this was, I think you brought up the company that Elon Musk has that's interested in far future kind of technology. Neuralink. Neuralink. Yes, so Neuralink is
00:07:53
Speaker
this idea that Musk has about connecting brains directly to machines and using that connection, in this far-future world, for some pretty crazy concepts where people can communicate directly over the internet, I guess, though it's probably not going to be the internet at that point, whatever it is, directly through computers. People can have essentially direct communication without any
00:08:20
Speaker
need to do arcane things like speak or type, boring stuff like that. But yeah, I mean, there are some really cool concepts that that brings into the discussion, from the actual interfaces themselves, the brain-machine interfaces themselves, which are
00:08:42
Speaker
interesting to talk about even in today's technology. And then some really, really interesting philosophical topics. What this means for the future, and the present actually, of how we think and communicate. What I think might be a good way to go about this is to first consider the realistic state of the art in the field, to give a little bit of grounding about how
00:09:12
Speaker
the best kinds of brain-computer interfaces operate now. Then think about how that might apply to Elon Musk's vision and what some implications could be if it were successful, or what some limitations, some absolute limitations, might be that would stop this from progressing. The paper that we were interested in talking about is one that's done by a large collaboration of

BrainGate Study Overview

00:09:42
Speaker
scientists who are working with a system called BrainGate, which is a neural implant system that implants electrodes directly onto the surface of the brain and records brain activity as it's happening. And the immediate goal for this kind of thing is for impaired patients.
00:10:11
Speaker
It's aimed at people who have some kind of paralysis that doesn't allow them to move their limbs, so that they can interface with a computer cursor or some sort of technology. So let's see. So the name of the paper, and it's a pretty technically oriented paper, but the ideas behind it are pretty cool. It represents a pretty substantial collaboration. There are, I don't know how many. 29.
00:10:39
Speaker
29 authors. Yeah, so it's David Brandman and 28 of his closest friends. 28 of his very closest friends, a bunch of whom are from Brown University still, and then some from Mass General in Boston, some from Stanford, Case Western Reserve University, all over the place. Yeah, the title is Rapid Calibration of an Intracortical
00:11:02
Speaker
brain-computer interface for people with tetraplegia. Yeah, it's from just earlier this year in the Journal of Neural Engineering. Yep, so it's cutting-edge, latest-and-greatest stuff, and it's something that has been around since we were in graduate school. People were talking about this exact topic, doing this exact work, in 1997, and here we are in 2018 and they're still
00:11:31
Speaker
making progress, but I have to say, honestly, rather slowly, relative to other things, if you think about what can practically be done with these interfaces at the moment. So I think it's worth just diving into the paper a bit and getting at what we can actually do today with systems that directly connect the brain to machines. That's what this brain-computer interface is all about. Yeah, so let's just dive right into it.

Challenges in Recording Brain Activity

00:12:00
Speaker
What's the major finding of this paper? What are they really trying to do here? So what they're really trying to do is take these brain computer interfaces. In this case, these are intracortical brain computer interfaces. These are electrode arrays that they're implanting directly in the brain and recording activity from the brain.
00:12:23
Speaker
and using that recorded activity to control a cursor on a screen. So like a computer cursor on a screen in patients who are paralyzed. And what they're trying to do in this particular paper is they're trying to come up with a system that more rapidly gets that cursor control into closed loop control by the patient. So essentially what they're doing is testing out their way of calibration
00:12:53
Speaker
So a better way of calibrating so that these patients can more quickly interface with the computer. That's right. It takes them a little less time. And that's a substantial thing. Here we have where they're putting these arrays. So both arrays were placed on the dominant precentral gyrus; in
00:13:16
Speaker
T10, one array was placed in the dominant precentral gyrus, and a second was placed in the dominant caudal middle frontal gyrus. Now, this just means that they were placed on the motor cortex, the motor cortex being the strip down the center-front of the brain that's active just preceding a motor movement. So if you were to take an electrode and zap it,
00:13:41
Speaker
you could get a motor movement somewhere on your body and it's mapped in a particular sort of way such that things that are next to each other on the motor cortex are next to each other on the body. So this is the homunculus. We can't skip over this part of the conversation without talking about the homunculus because it's just such a great word. The homunculus is basically, yes, say it many times.
00:14:10
Speaker
This is the representation in the brain spatially of the body. If you think about it, there's a relationship, as Rolf said, between a place in the brain and a place on the body. If there's a neuron that's representing a movement in a particular part of the body, say your thumb, then a neuron close by might represent
00:14:37
Speaker
a movement in your index finger, for example, on the same hand. This is a homunculus in the sensory cortex and there's also a homunculus in the motor cortex. Yes. The sensory cortex is just nerves from all over your body. If you were to, say, pinprick the end of your thumb, you'd get some action in your sensory cortex at a particular place. If you were to
00:15:06
Speaker
pinprick the end of your index finger, you'd get some action just nearby. And it's, again, laid out in that homunculus. And places with more sensory input have more representation on the sensory cortex. So in other words, your tongue, pretty sensitive part of your body, lots of discrimination there, has a pretty big representation.
00:15:35
Speaker
Your back, on the other hand, has a pretty sparse representation; even though your back is a fairly large area of skin, it doesn't take up much room in your sensory cortex. And the fact that we know that the brain is laid out kind of like this helps us a little bit as we think about building interfaces to control machines, because we have a sense of where to put devices that record electrical activity and how those devices may be stimulated by things that you think about,
00:16:05
Speaker
which is how these interfaces work basically. And it's worth knowing this too about these interfaces because it's different than just recording from just anywhere in the brain because again, you're recording from the area just prior to where the motor movement is being sent out to the body. So if there are connections from the motor area that go out through your
00:16:35
Speaker
through your neck and through your arm and all the way down to your finger where that motor movement might be happening, you can stimulate anywhere along this pathway. And if you stimulate it right, you can cause that kind of motor movement. So we're selecting the first place in the brain that we can get to, which is the sort of the simplest representation that we could have. Absolutely. And it's helpful from a surgical perspective that this motor strip
00:17:05
Speaker
is located in a place that you can access, which is, you know, this outer cortical layer where, if you cut open the skull and you just lay down this electrode array, you can kind of place it almost right on top of the brain and be able to record some pretty good activity related to motor movements. Let's see. In this paper, they say that the electrodes are 1.5 millimeters in length. So that's how much is going
00:17:35
Speaker
directly into the cortex. You're just kind of popping this chip on top of your motor cortex, and it's recording from it. Exactly. And 1.5 millimeters into the brain with silicon. And the idea is that you don't want to destroy too much brain tissue. And this would be something that I don't want to skip ahead to the Elon Musk vision of this. But if you can imagine covering up more cortex, you're really going to
00:18:05
Speaker
Using something like the current technology, you're just going to destroy a lot of the cortex by plugging this stuff in. And in addition, you're really only reaching that outside layer. Biomaterials are a big part of this. The development of biomaterials that are biocompatible is huge. This particular paper doesn't get into that, but maybe we can talk a bit more about biocompatibility when we talk about the Musk topics.

Decoding Neural Signals for BCI

00:18:32
Speaker
Yeah, and I think the point of view that he's coming from is that if you can engineer some really cool technology to play around with, then it'll get used in interesting ways. And so the limitations here around these electrodes, if you can fix that, then maybe all of the scientists working on brain-computer interfaces like this have something better to play around with. Okay, so
00:18:57
Speaker
Then the next big step, and this is an area where there's a lot of progress, and, well, I guess this is where a lot of progress comes from, is the mathematical bit, so signal processing. So I'll read the raw description so you can see how this actually sounds. So further signal processing and neural decoding were performed using the xPC Target real-time operating system. So they're using something from, I think, MATLAB, and then they
00:19:27
Speaker
do a whole bunch of signal analysis. Here, raw signals were downsampled to 15 kilohertz for decoding and denoised by subtracting an instantaneous common average reference using 40 of the 96 channels on each array with the lowest root mean square, et cetera, et cetera. Then they'd bandpass and do all this stuff. So there's a lot of work here in just filtering out the irrelevant stuff that's going through these 93, or sorry, 96 electrodes
00:19:57
Speaker
into something that could be useful. Here's a great line, I like this. The denoised signal was bandpass filtered between 250 hertz and 5,000 hertz using an eighth-order non-causal Butterworth filter, which is what I usually use. That's my favorite type of filter. Basically, that's just saying that they're trying to select the bits that matter.
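To make that quoted pipeline concrete, here's a minimal Python sketch of the kind of preprocessing being described, not the authors' actual MATLAB/xPC code. The 96 channels, the 40 lowest-RMS reference channels, the 15 kHz rate, and the 250-5,000 Hz eighth-order Butterworth bandpass come from the quote above; everything else, including reading "non-causal" as a zero-phase filtfilt pass, is an assumption:

```python
# Hedged sketch of the quoted preprocessing, not the paper's actual code.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 15_000  # Hz, sampling rate after the downsampling mentioned in the quote

def preprocess(raw: np.ndarray) -> np.ndarray:
    """raw: (96, n_samples) voltages from one electrode array."""
    # Instantaneous common average reference: at each sample, subtract the
    # mean of the 40 quietest (lowest root-mean-square) channels.
    rms = np.sqrt(np.mean(raw ** 2, axis=1))
    quietest = np.argsort(rms)[:40]
    denoised = raw - raw[quietest].mean(axis=0)

    # Eighth-order Butterworth bandpass, 250-5,000 Hz, run forward and
    # backward (filtfilt) so the net filter is zero-phase, i.e. non-causal.
    sos = butter(8, [250, 5_000], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, denoised, axis=1)
```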
00:20:27
Speaker
The signal that they're trying to work with is essentially these neural signals, which is electricity. These neurons in the motor cortex, we always talk about firing, the idea of a neuron firing. As I'm trying to explain this, I'm going down this mental rabbit hole of, like... I have a mental image of how this stuff is happening too, and it's hard to, it's maybe a little hard to describe
00:20:57
Speaker
for those who may be less familiar with it. So the way that most neuroscientists think of how the brain works is there's a functional unit, the neuron, and neurons essentially take in a signal from other neurons. And then if they get activated enough, send the signal on to the next neuron.
00:21:20
Speaker
Now neurons are constantly firing in your brain. So every neuron in your brain is basically firing just about all the time. It's just increasing or decreasing the number of times that it's firing. And this is sort of the language of thought. This is what constitutes activity in the brain. So the neuron is reaching a certain level. I'm missing the word right now.
00:21:48
Speaker
Depolarization. Depolarization, that was the word I was looking for. Thank you. Yeah. So the idea is that when the neuron, quote unquote, fires an action potential, this occurs when the electrical potential of the inside of the cell relative to the outside of the cell reaches a certain threshold. And at that point, this triggers a chemical cascade that results in
00:22:13
Speaker
an essentially electrochemical signal being sent from one part of the neuron to another part of the neuron. And this is crucial. It's crucial that you say electrochemical too, because... I remember as an undergraduate student trying to wrap my head around the idea that it's electrochemical. So it is not, crucially, it's not just an electrical signal that's traveling from one point to another. One of the
00:22:43
Speaker
I think one of the best ways of explaining it that I've had is that if you took a blue whale, largest animal on Earth, it would take about a second for a neural signal to go from the brain to the tail, and then another second for that signal to go back to the brain. So it is not an essentially instantaneous process like electricity traveling along a wire.
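The arithmetic behind that example, with ballpark numbers assumed here rather than taken from the episode:

```python
# Rough sanity check on the blue-whale example; both numbers are assumed.
length_m = 30.0            # approximate brain-to-tail distance, blue whale
conduction_m_per_s = 30.0  # a plausible axonal conduction velocity
print(length_m / conduction_m_per_s, "seconds, brain to tail")  # ~1 second
```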
00:23:08
Speaker
It's not traveling at the speed of light. It's not electrons traveling. It's ions traveling down this biomechanical tube. So somewhat limited, and the brain can take advantage of this in certain sort of ways. But it's different than an electronic circuit. And I think that might be a critical point that I think some engineers are less appreciative of. It's not an idealized signal that just
00:23:38
Speaker
zips from one neuron to the next. The timing element is tricky. Right, and not all neural communication is done with action potentials either. Probably when they're talking about these very high frequency changes, I mean, even their bandpass is at pretty high frequencies. In other words, activity that's happening many times a second, changes in electrical activity that are happening many times a second.
00:24:04
Speaker
Yeah, so this is a question for you. This is something I wasn't... maybe I'm not interpreting this in exactly the right way. So, OK, this is that complex sentence that I read: the denoised signal was bandpass filtered between 250 hertz and 5,000 hertz. So a neuron fires something on the order of one hertz to 1,000 hertz. In other words, between one time a second,
00:24:34
Speaker
and 1,000 times a second. Now what they're doing here is they're taking out, from what I'm understanding, if it's bandpass filtered, they're taking out everything below 250 hertz and above 5,000 hertz. So what kind of signal are they really working with here? It doesn't seem as though it's the same as actual neurons firing. Well, they're not looking at individual neurons firing. It's definitely an ensemble kind of approach.
00:25:03
Speaker
and they're looking at, even so if they have the, what's it, 96 electrodes? Yeah, 96 electrodes. Each of those electrodes is recording from essentially the activity of many, many, many neurons, and they're overlapping the output or the input into these arrays, overlapping
00:25:26
Speaker
sets of input from many, many, many, many neurons across this area. There's lots and lots of cells in this small space. I guess what I was surprised at is that I know that there's a lot of neural synchronization at lower frequencies than 250 hertz. So certainly there's lots of synchronization at 40 hertz. And if you look at EEG signals that people are looking at, they certainly look at a lot below 250 hertz. But here they're just kind of chucking all of that
00:25:55
Speaker
Yeah, I mean, it must just be the case that they're able to find interesting signals there. I think that might make sense then to jump right into how the system is set up.
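As one concrete illustration of what "ensemble" features pulled from that 250-5,000 Hz band can look like, here's a hedged sketch of per-channel threshold-crossing counts, a common feature choice in the intracortical BCI literature. The paper's exact features may differ; the -4.5 × RMS threshold and 20 ms bins here are assumptions:

```python
import numpy as np

def threshold_crossing_counts(filtered, fs, bin_s=0.02, k=-4.5):
    """filtered: (n_channels, n_samples) of bandpassed voltage.
    Counts downward crossings of k * RMS per channel, in short time bins."""
    rms = np.sqrt(np.mean(filtered ** 2, axis=1, keepdims=True))
    thresh = k * rms  # per-channel threshold, e.g. -4.5 x RMS (assumed)
    # A crossing: the signal was at/above threshold, then dips below it.
    crossed = (filtered[:, 1:] < thresh) & (filtered[:, :-1] >= thresh)
    samples_per_bin = int(fs * bin_s)
    n_bins = crossed.shape[1] // samples_per_bin
    trimmed = crossed[:, : n_bins * samples_per_bin]
    # Result: (n_channels, n_bins), one coarse "activity" number per bin.
    return trimmed.reshape(len(filtered), n_bins, samples_per_bin).sum(axis=2)
```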

Training BCIs: Imagining Movements

00:26:10
Speaker
In other words, what people are doing that is leading to the building up of the features that they're using to control the device. So you're talking Butterworth filters.
00:26:24
Speaker
No, I'm talking about, like, I don't know. I just like saying Butterworth. I know, Butterworth filters. Yeah, I mean, I think it's kind of cool to think about what people are actually doing. So the patients, who are all paralyzed essentially from the neck down. Two of them essentially completely paralyzed from the neck down and one of them severely impaired. Presumably they can still...
00:26:50
Speaker
So, OK, if they're all paralyzed from the neck down, they can still use their eyes, so there might be other ways that they're communicating; they could talk, too. That's important. Because that is important. What they're saying here in the paper is that they keep talking about locked-in patients and how this could really help locked-in patients. Why are they doing that? Why are they talking about helping locked-in patients? Well, I mean, the patients that they're quote unquote helping with this research don't really need it, because
00:27:18
Speaker
there are way better ways to control devices than what they're doing, right? Because speech recognition, they could just use speech recognition. You're trying to spell out a word with a cursor that has eight input points. We could just say the word. One thing I noticed is, maybe it was in this paper, maybe it was in a separate paper, but the top speed that we're talking about here,
00:27:47
Speaker
in terms of characters is somewhere between three and six characters per minute. So imagine typing at that speed. Yeah, exactly. The whole point here is that you're controlling some sort of cursor that can point to things on a computer monitor with your mind, which is amazing. And then you can basically spell out words or control a robotic arm or whatever it is that you can do with a cursor.
00:28:16
Speaker
but you can do it very coarsely and very slowly, because the representation that we can effectively capture with our current technology is gross and not that awesome. So yeah, but I mean, the point I was trying to make there was just that these patients are paralyzed from the waist down. So it's cool to be able to control devices. Or from the neck down, right? Sorry. Yeah. These patients are paralyzed. These patients are paralyzed from the neck down.
00:28:46
Speaker
So it's cool for them to be able to control devices that can help them in the world. However, this is not the most efficient way to do that. They could control them with their neck, you know, with their chin, for example, moving their chin and controlling a joystick like that. Sometimes people do that. Or even their tongue, eye movements, speech.
00:29:10
Speaker
The point of using these types of patients, however, is that they don't need their motor cortex anymore. So that's how you can get human subjects approval for doing this research on these patients, because they do not need this part of their brain anymore. It's not doing them any good. That's a really interesting point. I'm guessing that's how they got this done.
00:29:38
Speaker
And then, but then they have to justify the research by saying, this is really something that would help locked-in patients. So patients who can't speak, can't move their eyes really effectively. So it's still on the order of basic research, and it's still not reaching that useful applied stage. This is the state of the art, and there's no practical application for it.
00:30:04
Speaker
And this is, again, more than 20 years on from when this exact field of research and line of thinking started to take very serious steps forward. So it's fair to say it's moving very slowly. Although I think that when we saw this stuff, they were only using third-order non-causal Butterworth filters and now they're up to eighth. So that's like one order every four years.
00:30:33
Speaker
OK, so let's move on in the paper now, just to get some more details of what it's like for the actual patients and what they're actually doing here. Yeah, maybe describe... you know, what they're... Yeah, what the patient is doing. OK, so here's a description from the paper, so the calibration tasks. Task cueing was performed using custom-built software
00:31:00
Speaker
running in the lab; the participants used standard LCD monitors placed about 60 centimeters from them. Participants engaged in the radial-8 task as previously described. Now, if I'm thinking of this correctly, they have a cursor in the center of the screen and their task is to move it to one of eight locations that's directly out from them. Correct.
00:31:30
Speaker
Briefly, targets were presented sequentially, alternating between one of eight radially distributed targets and a center target. Successful target acquisition required the user to place the cursor within the target's diameter for 300 milliseconds, about a third of a second, before a predetermined timeout. And they did this calibration task for a few minutes.
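In code terms, the dwell rule being quoted is simple; here's a toy sketch. The 300 ms hold comes from the quote, while the 10-second timeout and the read_cursor interface are invented for illustration:

```python
import math
import time

DWELL_S = 0.3     # hold time from the quote: 300 ms inside the target
TIMEOUT_S = 10.0  # assumed; the paper only says "a predetermined timeout"

def acquire(target_xy, target_radius, read_cursor):
    """read_cursor() -> decoded (x, y); returns True on a successful hit."""
    start = time.monotonic()
    dwell_start = None
    while time.monotonic() - start < TIMEOUT_S:
        x, y = read_cursor()
        inside = math.hypot(x - target_xy[0], y - target_xy[1]) < target_radius
        if not inside:
            dwell_start = None              # left the target: reset the clock
        elif dwell_start is None:
            dwell_start = time.monotonic()  # just entered the target
        elif time.monotonic() - dwell_start >= DWELL_S:
            return True                     # held inside long enough: a hit
    return False                            # timed out: a miss
```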
00:32:00
Speaker
Yeah, and the idea is that the patient imagines some motion that they would make that would correspond in an intuitive way, intuitive for them. It's only important that it's intuitive for them. It doesn't matter that anybody else would understand it. In some intuitive way, a motion that would correspond to somehow moving in this space. So for example, they could imagine, I think one of the examples they used was
00:32:29
Speaker
moving your whole arm from left to right, outstretched, pointing. With your finger pointing straight, so taking your arm, sticking it out, pointing straight, and then moving it left or right. Or controlling a joystick. So imagining, yeah, imagining, imagining moving that hand.
00:32:51
Speaker
Yeah, imagine some motor movement, whether it be moving your whole arm, moving a joystick, or moving a mouse, or... there were other approaches to these. I think they used five different approaches. And basically what they try to do is figure out, when a person thinks about moving their hand or their arm in a particular direction, what signals can we extract
00:33:18
Speaker
from this electrode array that corresponds to that thought. Then how do we use those features, those signal features to then direct the cursor in that specific direction that we're indicating? They're indicating that you should move the cursor to the left. You think about moving your hand to the left or your arm to the left.
00:33:46
Speaker
And then, based on what we're extracting from the neural signals associated with that thought, we're training an algorithm to tell the cursor to move in that direction. Now it's interesting to think about the limited amount of motion that you're really talking about, given the huge amount of signal that you're processing.
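To make "training an algorithm" concrete, here's a deliberately oversimplified stand-in: a ridge regression from binned neural features to intended cursor velocity. The paper's actual decoder is more sophisticated than this, so treat it as the shape of the idea, not their method:

```python
import numpy as np

def fit_linear_decoder(features, velocities, ridge=1e-3):
    """features: (n_bins, n_features); velocities: (n_bins, 2) intended (vx, vy)
    collected while the participant imagines moving toward cued targets."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add a bias column
    # Ridge-regularized least squares: W = (X'X + lambda*I)^-1 X'V
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ velocities)
    return W

def decode(W, feature_vec):
    """One feature vector in, one decoded (vx, vy) cursor velocity out."""
    return np.append(feature_vec, 1.0) @ W
```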
00:34:11
Speaker
Yeah, you know, I thought it was super interesting that, like, for example, one patient chose to use the idea of a joystick. And they talked about some other research where patients were using the thought of moving a mouse and then actually clicking a mouse; a mouse click was actually something that they could extract features from, like an imagined mouse click. They could feel themselves going through that motion, right? Try to simulate it by thinking about it in as much detail as possible.
00:34:42
Speaker
Yeah, and then somehow at the end of all that, the researchers were able to extract a neural signal that they could process to then cause a reliable action on the computer screen. Now, obviously this is a league apart from
00:35:02
Speaker
decoding a complex thought that's recruiting a substantial portion of your brain. I mean, this is condensing all of the neural activity that's going on in those 85 billion neurons in your brain down to a signal of a couple of bits. Yes. And also it's interesting that there's no sense in which
00:35:30
Speaker
this approach is trying to make any meaning out of any of these neural signals, which is very different than what you would need to do to do what you're talking about, to decode a thought. Right, there's no way that you can look at that signal and say, oh, huh, funny, now he's thinking about a joystick, whereas before he was thinking about moving his arm over that way. At all. It's purely algorithmic in the sense that when that person is thinking that thought,
00:36:00
Speaker
they're extracting some signal and they're trying to figure out what is in that signal that we can use to reliably do something every time that person has that thought. So it could be that when they're looking up, or when they're making the cursor move up, they could be thinking about folding paper airplanes. When they're looking down, they could be thinking about watching The Big Bang Theory. Could be anything.
00:36:28
Speaker
be anything. It doesn't matter for the purposes of this approach. There's literally no sense of even trying to use, for example, the way that this information is laid out over space in a meaningful way. It's all just purely abstracted mathematical representations. Whatever captures the most information is what's being used. Now, I guess the most interesting part of this paper, at least to me, and maybe
00:36:56
Speaker
This is something you know better than I do, but maybe I can get you started talking about it. So the distinction between open loop and closed loop systems, and as I'm understanding it, most interfaces work in an open loop system.

Practical Performance of BCIs

00:37:16
Speaker
So basically, the person, you know, you ask the person to have this thought.
00:37:24
Speaker
Think up, think up, think up, just think up for a while. We'll record everything that's going on as you're thinking up. And then we'll start tying that to a particular movement of the cursor. That's right. And then... So you'd go through an open-loop session where you'd imagine moving your hand up. Or moving up, or joystick up. And then, yeah, exactly, you'd encode that
00:37:54
Speaker
and then apply that encoding to a test where you would have the person say, OK, now try to move the cursor to the up location. And they'd think the up thought. And then if everything was working well, you'd see that it was working or not working. And based on how well it was working or not working, you'd try to take the successful instances from that test and
00:38:18
Speaker
refine the algorithm based on differentiating the successful versus not successful instances and improve the algorithm. The idea of the closed loop system is simply that they are doing training and refinement at the same time. So they just hook them up to the recorder. They're looking at that screen. No trials where they're just thinking up and nothing happens. They're just jumping into it.
00:38:47
Speaker
The part that I didn't exactly understand was they were talking about how they start with some computer assistance. Somehow, there is some successive approximation happening where, in the earliest trials, the cursor is being guided in the correct direction. Then over time, they release that assistance. It sounds a little bit like biofeedback.
00:39:16
Speaker
Right, closed loop. Yeah, they're just trying, in a Bayesian kind of way, to dynamically update the system in response to ongoing feedback. Build the algorithm as the train is running down the tracks, if you will. And all towards the goal of just being faster.
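A hedged sketch of what that decaying assistance could look like: the displayed cursor velocity is a blend of the decoded velocity and a vector aimed straight at the target, with the blend weight ramped toward zero as calibration proceeds. The blending scheme here is an assumption for illustration, not the paper's stated method:

```python
import numpy as np

def assisted_velocity(v_decoded, cursor_xy, target_xy, assist):
    """assist in [0, 1]: 1.0 = fully guided, 0.0 = fully brain-controlled."""
    v_decoded = np.asarray(v_decoded, dtype=float)
    to_target = np.asarray(target_xy, dtype=float) - np.asarray(cursor_xy, dtype=float)
    dist = np.linalg.norm(to_target)
    # Assist vector: same speed as the decoded velocity, aimed at the target.
    v_assist = to_target / dist * np.linalg.norm(v_decoded) if dist > 0 else np.zeros(2)
    return assist * v_assist + (1.0 - assist) * v_decoded
```

The idea would be to ramp assist from 1.0 toward 0.0 over the first minutes of the session while the decoder is refit from the accumulating data.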
00:39:44
Speaker
Maybe an intuitive way to think about it is imagining Luke Skywalker trying to lift that X-wing up. So when he's training with Yoda, he's internally thinking about how it's done. That's an open-loop system, where he just is thinking about it. Then he goes out there in the swamp on Dagobah. And this is a nerdy podcast, by the way. No, this is, I mean, you hit it absolutely.
00:40:14
Speaker
That is the killer app for this technology: the Force. The Force. 100%. 100%. And that's been known for a long time. And there's a slight digression here. I played around with... there's the Force Trainer EEG system. So it's a... That's a NeuroFocus, right? NeuroSky, I think, is the maker of the chip. So the basic idea is the same
00:40:41
Speaker
as this complicated BrainGate system. It's just, you're looking at two electrodes that are much less invasive, that you just put basically on your forehead and your temple, and look for a signal, and you've got a display where, you know, a ball can rise or fall depending on how much you're concentrating.
00:41:08
Speaker
Everything I can figure out from those kinds of systems is that it's almost entirely responsive to how tense your temples are. And it doesn't have much to do with your actual brainwaves. It has to do with just the electrical impulses coming off of the muscles on your forehead. So as you're concentrating, you're tensing up a bit. And it seems as though that's the signal that you're getting, but it's not.
00:41:37
Speaker
Yeah, I remember when I met with the NeuroSky guys way, way, way, way, way, way back. I don't remember when this was. I mean, it must have been 2004 or something like that in San Francisco. And they were showing me the Star Wars trainer thing. I mean, it's like the first video game thing that they ever tried.
00:42:04
Speaker
Makes perfect sense. Yeah, of course. Yeah, of course. Of course that's what you do. And it didn't work... like, it worked like shit then. And it works like shit now. So it doesn't work. It just doesn't. Doesn't work. You can't extract that sort of signal from something as non-invasive as a toy would have to be. Right now. Yeah, I mean, EEG doesn't work that well. I mean, it works. It works. But it doesn't work well enough to do what we're talking about here.
00:42:34
Speaker
And the reason is the skull is really thick. So the electromagnetic signal coming from a neuron changing its potential, the amount that it does to fire an action potential, is very small, very tiny. It's just not... it's not like you're going to stick your finger in someone's brain and get electrocuted. I mean, it's nothing like that. You would never feel any electricity at all.
00:43:01
Speaker
And by the time you're getting to the outside of the skull, you're really looking only at those signals that are summed responses from huge numbers of neurons that happen to be producing a signal at the same time, right? So anything that's in any way emergent from that has to be such an incredibly powerful signal. There's absolutely no way of knowing that those signals are in fact coming from neurons. It's philosophically... I can't even think of a way that you would
00:43:31
Speaker
unequivocally ever know that, and in practice you certainly don't know that. And to your point exactly, so much of what you're probably recording is the electrical activity of your muscles contracting, because you have neurons in your muscles; muscles also produce their own electrical signals. There's almost no way that
00:43:56
Speaker
what you're doing when you're using the Force with the NeuroSky band has anything to do with what's happening in the brain, except for your brain telling your forehead to tense up. Well, in that sense it is. Yeah, yeah, yeah. Okay, so let's see. Okay, so where are we in the paper right now? So we got to talking about how they encode
00:44:24
Speaker
the features to control the cursor. I think the last thing to talk about is how well it works. Yes, OK, so they've got this open... So the new thing here is they've got this closed-loop system where someone just gets popped in and interacts with it using these complicated statistical models, and eventually they can get some signal that's good enough so that they can
00:44:54
Speaker
move that cursor around. OK, so how well does it work? Right, so one of the things that they mentioned at the very beginning is that the calibration, to be able to successfully move the cursor to one of these eight locations purely with brain control, is faster than it was with other methods. So for example,
00:45:20
Speaker
one patient was able to successfully do this on a first-ever day of closed-loop BCI use, acquiring a target 37 seconds after initiating calibration. So within under a minute, they're able to successfully calibrate this machine to be able to move the cursor in a desired direction. And that is not nothing. That's impressive.
00:45:44
Speaker
Pretty cool. And I mean, if you look at the traces, if you're looking at figure 2D, they basically have these traces of how the cursor was moved around on the screen as the patients were trying to go to certain targets. And they kind of meander around a little bit, but the cursor is generally moving in the right direction most of the time. And it almost always ends up at the target eventually. And then the idea is that you keep the cursor over the target. And after it's been there for 300 milliseconds,
00:46:14
Speaker
it registers a hit, essentially. I didn't go into, like, a really elaborate signal detection analysis, like how many hits, misses, false alarms, whatever were happening here, but you definitely get the sense, the take-home, that it basically works. Yeah, so it seems as though it happens fairly rapidly and then asymptotes.
00:46:44
Speaker
So calibration, if you're looking at how long it takes to calibrate these systems, after about three minutes, it's as good as it's going to be. And as good as it's going to be is three seconds to move that cursor to its target. Right. So every three seconds or so, you can select a new target.
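As a back-of-envelope information rate, assuming all eight targets are equally likely, that works out to roughly one bit per second:

```python
import math

bits_per_selection = math.log2(8)   # 8 equally likely targets = 3 bits
seconds_per_selection = 3.0         # ~3 s per acquisition, per the discussion
print(bits_per_selection / seconds_per_selection)  # 1.0 bit per second
```

The three to six characters per minute quoted earlier is on the same order of magnitude.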
00:47:10
Speaker
So if you imagine each of those as a selection, you could imagine all different interfaces that you could create. You could move a robotic arm. You could type out a word. You could ask for one of eight different pre-programmed scripts to run off. Anything that you can imagine. It's a useful interface, potentially. But now also, three seconds is a bit of a lag, too. It is. I mean, if you just made yourself an Iron Man suit and
00:47:40
Speaker
it takes you three seconds to move your arm up, that's kind of slow. It is. It is. It is. And it's certainly slower than saying, hey, you know, bring me a sandwich, please. Yeah. And I mean, given that the eventual hope for something like this is that, you know, it happens incredibly rapidly and that you're essentially increasing that,
00:48:04
Speaker
or you're decreasing the time that it takes for you to communicate something, that a signal should be as fast as thought. And that's the goal. Yeah, it's quite slow. And again, this current implementation is not in any way useful, because there are better ways to do all of these things. These patients that are being tested do not need this technology. They will not use this technology. They do not use this technology in their daily life.
00:48:34
Speaker
they have to get literally plugged into the machine. There's a port on the top of their head that needs to get plugged in for this to work in any capacity. So these patients who can speak and move their face have no need for this and will never use it. And presumably they are participating in the research because they are suffering and they understand that other people are suffering even more and that their participation could help those people and people like them in the future.
00:49:05
Speaker
And so they're doing it to help people. And because they don't need that part of their brain, it's not doing them any good, they're willing to endure some pretty considerable discomfort and a really rather risky procedure. Having your brain operated on is no joke. So yeah. Well, at least this provides a clear next step, which is seeing what happens with a patient with locked-in syndrome, to see if you can get this up and running.
00:49:35
Speaker
Right, I don't know what the current state of the art is there, but it would be interesting to look at that. I guess the interface issues are more dramatic in those situations. It's so helpful to get the feedback of being able to say, do you understand what I'm saying? Or, is that what you intended to do, for example? Yeah, yeah, yeah. I mean, there are systems that are designed to get around this,
00:50:03
Speaker
they're cumbersome and difficult to use. And yeah, with that, if you didn't know going into the experience ahead of time how you were supposed to do this, it represents challenges. Let's see, are there any others? I feel like we've covered the paper fairly well in terms of what its goals are and how it's actually working. I think that basically covers it. I think it makes sense to step back. We went
00:50:34
Speaker
really, really deep down, way, way, way into how this stuff works today in brain-computer interfaces.

Future Possibilities with BCIs

00:50:41
Speaker
We alluded a bit to how the extracranial devices work or don't work. I think it's worth getting a little bit future-oriented and talking about what Elon Musk and his ilk are trying to do. So are we thinking now about ninth or tenth order non-causal Butterworth filters?
00:51:03
Speaker
I think you probably might even need to go to twelfth. Whoa, okay. You're thinking far future. Yeah, far future. Well, why don't we take a little break here and then get into some of the cool futuristic robo-apocalypse kind of implications of this stuff.