Episode 5: Reading the Mind with EEG: Adrian Nestor

S1 E5 · CogNation

We talk with Adrian Nestor, a professor and researcher at the University of Toronto Scarborough, about his recent research, the state of current brain imaging technology, and some speculations about where the field is headed. Can mental images and thoughts be captured, decoded, and understood by a combination of electroencephalography and machine learning techniques? What is the hype and what is the reality?

Transcript

Introduction to the Podcast and Guest

00:00:06
Speaker
This is CogNation, the podcast about cognitive psychology, neuroscience, philosophy, technology, the future of the human experience, and other stuff we like. It's hosted by me, Rolf Nelson. And me, Joe Hardy. Welcome to the show. Our guest today is Adrian Nestor, who is a research scientist at the University of Toronto Scarborough.
00:00:34
Speaker
He studies EEG and face perception. He got his PhD at Brown University and did his post-doc with Marlene Behrmann at Carnegie Mellon University. We are happy to have Adrian on. He's going to tell us a little bit about some of his research in face recognition and EEG, which relates to our interest in brain-computer interfaces. Welcome to the show, Adrian.
00:01:01
Speaker
Thank you for inviting me. It's a pleasure to be able to join you guys. Yeah, thanks for being on the show.

Face Recognition Research: EEG vs fMRI

00:01:07
Speaker
Great, so maybe we can start things off by just giving you a chance to talk about some of your recent research and the things that you're excited about, Adrian. Yes, you covered some of the background, which is relevant for the things that I do now.
00:01:21
Speaker
I've been doing research into face recognition for a long time: psychological aspects, neural aspects, computer vision aspects related to face recognition. From a neuroimaging perspective, I've been trying to develop novel methods for the analysis of fMRI and EEG data. And at some point, I decided to pit these two methods against each other and see how they can
00:01:50
Speaker
illuminate some core questions into how we represent faces, how we identify faces, how we represent that information, and what sort of practical applications we can base on that research and on those insights.
00:02:05
Speaker
So you started out doing fMRI analysis, and this can certainly give you better spatial resolution, so you can see a little bit more of where things are going on. But there are obviously some difficulties with using this as a general-purpose kind of technique.
00:02:21
Speaker
Yes, that's correct. There's a great deal of interesting research carried out with fMRI. That's how I started my training in neuroimaging. I love fMRI. It's a very powerful tool and it's a very useful tool. But at the same time, it has its own limitations. For starters, it's a big machine. It's relatively expensive. It's not as widely available as some other neuroimaging technologies.
00:02:50
Speaker
So from a practical standpoint, it's not necessarily something that you want to focus on exclusively. In contrast, EEG has advantages on all of those fronts. It's small, it can be made portable, it's much more widely available, and there's a lot of hype around BCIs that are based on novel types of analysis that target EEG data in particular.
00:03:18
Speaker
Well, that might be a great place to jump off in terms of talking about BCI.

EEG and Brain-Computer Interfaces

00:03:23
Speaker
You mentioned the word hype. I think that's an interesting topic that we have been talking about: what's real, what's hype, what are the possibilities for using EEG for controlling computers and other interfaces. So maybe we can talk a bit about that based on your experience and knowledge. It might make sense, before we jump too deeply into that, to give a little background
00:03:45
Speaker
into what we really mean when we say EEG and BCI. Sure. EEG is a neuroimaging technology that's been around for a long, long time, since well before the advent of fMRI. It's relatively well understood, at least in terms of general principles.
00:04:05
Speaker
It involves placing electrodes on somebody's scalp, and then you record correlates of neural activity in the form of electrical signals. It's got pluses and minuses. Neuroscientists are not particularly enthusiastic about EEG nowadays because of its poor spatial resolution. The neural generator
00:04:30
Speaker
of an EEG signal can span centimeters, as opposed to millimeters in the case of fMRI. Also, the skull very much acts like a diffuser: it scrambles the information to such an extent that it's relatively difficult to pinpoint where exactly the neural activity responsible for a given signal, or a component of the signal, is coming from. If you're interested in finding out exactly
00:04:54
Speaker
where things happen in the brain, then EEG is a very poor choice. But what it lacks in spatial resolution, it makes up for in temporal resolution, because instead of settling for a temporal resolution of seconds, like fMRI, we can collect information at millisecond-by-millisecond resolution. And because of that, one hope is that
00:05:17
Speaker
there's enough information, enough structure in the data when you're examining temporal patterns, that you might be able to achieve the same feats that fMRI can support using spatial information. I'm not sure whether that puts it fairly.
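To picture what millisecond-by-millisecond collection means in practice, here is a minimal sketch of cutting EEG epochs around stimulus onsets. It uses plain NumPy with invented shapes and random data; nothing here is from Nestor's actual pipeline.

```python
import numpy as np

# Toy continuous recording: 64 channels sampled at 1000 Hz, i.e. one
# sample per millisecond (versus one fMRI volume every ~2 seconds).
sfreq = 1000
eeg = np.random.randn(64, 600 * sfreq)         # ten minutes of noise
onsets = np.arange(sfreq, 590 * sfreq, sfreq)  # one stimulus per second

# Cut an epoch from -100 ms to +400 ms around each stimulus onset.
pre, post = int(0.1 * sfreq), int(0.4 * sfreq)
epochs = np.stack([eeg[:, t - pre:t + post] for t in onsets])
print(epochs.shape)  # (589, 64, 500): trials x channels x 1-ms samples
```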
00:05:36
Speaker
Yeah, no, that's super helpful. I think that expresses well the advantages and limitations that EEG has, and I think it's difficult to.
00:05:48
Speaker
create a good picture of exactly what it is that an EEG setup is actually recording. Because like you say, your resolution is on the order of centimeters or so. And in that kind of space, you've got millions and millions, if not billions of neurons that could be contributing to that signal. And it's harder to know exactly what's going on at a local network level if you're summing over that large area.
00:06:16
Speaker
Yeah, Adrian, maybe you can talk a bit about some of the research that you've done, I guess, specifically related to maybe talking about face recognition. I guess the idea would be that using these neuroimaging techniques, you can try to get a sense of what's the possibility for actually seeing what faces people are looking at based on the neural data that you're recording from. Is that part of the general approach? Yes, indeed. So just to link those two lines of discussion.
00:06:44
Speaker
As I mentioned earlier, EEG has been around for a long time, but what's new and exciting about it is not necessarily the methodology per se, though there are exciting directions of research in terms of manufacturing, for instance when it comes to dry electrodes and things of that nature.
00:07:01
Speaker
Part of the excitement, and the hype, is the marriage between this technology and new machine learning algorithms that allow you to make use of the data in ways that have not been possible before. So how is this relevant for face recognition? Well, initially I was particularly interested in identifying the neural representations that support face identification,
00:07:28
Speaker
gender recognition, emotion recognition, various aspects of face recognition. Back then, fMRI was the neuroimaging technology of choice, just because so much work had been conducted using fMRI when it comes to face perception. Several years back, I started to apply a variety of classification and decoding algorithms to fMRI data.
00:07:54
Speaker
We managed to make some considerable progress in being able to illuminate not only where information is being processed, which is one of the core questions that fMRI can readily address, but also what type of visual representation supports those kinds of processes. How can I distinguish Rolf and Joe? What sort of patterns underlie those percepts?
00:08:17
Speaker
We also wondered, more recently, whether whatever we managed to achieve with fMRI can be done with EEG, because again, it's much more affordable, it's much more convenient, and it opens the possibility of practical applications.

Experimental Setups and Methodology

00:08:38
Speaker
As late as this year, we managed to perform decoding and reconstruction of facial percepts from EEG data. And that's something we managed to do in the past with fMRI. But the challenge was to be able to do this with a signal derived from EEG equipment.
00:08:57
Speaker
And we've been very pleasantly surprised by that feat, which also prompted a comparison between the sort of decoding and the sort of results that we can achieve with EEG and fMRI. And we were even more surprised to find that there's a great deal of information within temporal patterns that EEG records. And because of that, we're able to obtain decoding levels comparable to those that fMRI supports.
00:09:27
Speaker
So to phrase that a bit more generally, whatever fMRI can inform on when it comes to face recognition, EEG can do as well, but from a slightly different perspective. So you're extracting a lot more temporal information than you would be able to get with an fMRI scanner.
00:09:48
Speaker
That's correct. To be precise about it, in the past we've also been using spatiotemporal information from fMRI. It's just that there, the primary source of information is spatial, not temporal, while with EEG the opposite happens. You still have spatial information, right? You can, at the very least, collect information from multiple electrodes that sample overlapping but distinct cortical areas, presumably.
00:10:13
Speaker
But the core information comes from time. So again, spatiotemporal patterns are at the core of this enterprise, both for fMRI and for EEG, but the weight is different in terms of where spatial versus temporal information comes into play.
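The "spatiotemporal pattern" idea can be pictured as flattening each trial's channels-by-time array into one long feature vector and training an off-the-shelf classifier to tell two face identities apart. This is a generic, hedged sketch with scikit-learn and made-up data, not the lab's actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy epoched data: 200 trials x 64 channels x 100 time samples, with a
# label per trial for which of two face identities was on screen.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64, 100))
y = rng.integers(0, 2, size=200)

# Spatiotemporal features: every (channel, time-point) pair becomes one
# dimension, so the classifier can weight space and time jointly.
X_flat = X.reshape(len(X), -1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance is 0.50)")
```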
00:10:29
Speaker
How good is fMRI in terms of distinguishing different faces? So say you're thinking of an eyewitness on the stand. As you're imagining a face, what sort of resolution is possible with EEG? And what do you think the state of the art is in terms of how good fMRI is, and then maybe how good EEG is or could be?
00:10:53
Speaker
I think it might be helpful here also to get a little bit specific in terms of painting the picture for listeners about what these setups actually look like, what participants are seeing, what we're recording, and then how those things relate to the underlying neural activity, so we can paint a picture of what this looks like for people. So it goes like this: if you're a subject in the experiment, what exactly are you doing?
00:11:21
Speaker
Yeah, exactly. And then from the researcher perspective, then how are we using that information to get at what Rolf is asking about, which is the performance of these techniques? Well, the experimental setup is relatively standard. It's not particularly difficult. What we do is expose participants to lots and lots of images, let's say dozens, if not hundreds of images of
00:11:47
Speaker
different individuals. Then we collect neuroimaging data, either with the aid of fMRI or EEG, and using that data, we try to decode in a first stage. In other words, what we attempt to do is decide whether participants at a given moment are looking at Rolf or looking at Joe. This is something that's been done for a while, and that's where we started. More recently, we took a step forward, and instead of just decoding the information,
00:12:17
Speaker
so instead of just looking at a pattern and being able to decide, oh, this is a pattern corresponding to Rolf, to Joe, or to Adrian, we do something quite a bit more intensive, which is trying to reconstruct, to build an approximation of, the person associated with what the participants are seeing. So we take a neural pattern, either fMRI or EEG, and we reconstruct an image
00:12:45
Speaker
of what the participants perceive when they look at the face of Rolf or the face of Joe. So in terms of a task, there's not much on that front. Participants just view images. Sometimes we do our best to make sure that they don't fall asleep, so we can give them a completely different task, let's say: press a button if you see a female face as opposed to a male face. But that's really just so we know that they're paying attention, that they're doing some kind of task. But you're measuring brain responses to different
00:13:14
Speaker
faces and using that to help reconstruct? That's right. Yeah, so we try to keep those things as simple as possible, because they're not essential for our goals. What's not ideal from a practical perspective is the fact that we have to collect lots and lots of data. So initially,
00:13:34
Speaker
before, with fMRI, we collected five hours of data to be able to achieve this. More recently, we cut down the number of hours because we can use EEG. And because of the high temporal resolution of EEG, we don't need multiple-second trials; we can just bombard the participants
00:13:55
Speaker
with images of faces every hundred milliseconds or so. So we can collect a lot more data over a shorter period of time. But even so, these are quite intensive ways of collecting data and they can induce fatigue.
00:14:11
Speaker
I remember when I started that line of research. I was just finishing up my PhD at Brown, and I asked one of my friends to go through 10 hours of data collection, and I lost a very good friend. But, you know, there are all those pros and cons. So let me make sure that I'm following this correctly. When you say percept, how are you distinguishing that from simply recognition?
00:14:41
Speaker
What is the distinction that you're making there when you say you're trying to reconstruct the percept? Well, what I mean by that is the fact that what we're trying to reconstruct, what we're trying to visualize, is not necessarily what participants look at, but rather the way in which the visual system processes that. When you look at a single face with good lighting and in typically good visibility conditions, there's not much of a distinction. What you reconstruct
00:15:10
Speaker
can be relatively close to what's in front of them. But in other situations, what the brain does is construct a fiction. And I'm just going to give you one example. Whenever you look at a group of faces, at a crowd, the brain often constructs a fiction, an artifact, that summarizes the average mood and sometimes even the average gender or identity of that group of faces.
00:15:38
Speaker
And why would the brain do that? Because it's essential, once you're in front of a crowd, to figure out whether that's an angry mob or a friendly group that you should stick around with. But there was no real understanding from a neural perspective of how the brain does that. So then what we try to do is reconstruct the appearance of that percept when people look at groups of faces without particularly focusing on any single one of them.
00:16:08
Speaker
And the question was, if we try to reconstruct something, what will we reconstruct? Images of Joe and Rolf and Adrian, every single one of them, or rather some kind of weird mixture of all three of them that the brain produces? And yeah, recently we managed to not only ascertain that there's a neural basis for that fiction, but actually to visualize it.
00:16:34
Speaker
We show people six faces, and instead of being able to reconstruct the individual identities of that ensemble, of that crowd of faces, what we managed to pull out is an average of those images, one that the participant doesn't actually ever see in any of those experimental sessions. So that's just one way to convey the distinction that what we try to reconstruct is not what's in front of you,
00:17:02
Speaker
but rather your perception, your understanding, your interpretation of the visual world. So you're recording this brain activity when people are looking at these pictures and then what you're actually creating is another picture that is your interpretation of that neural data.

Visual Reconstructions from Neural Data

00:17:21
Speaker
And that picture is somehow an average of those faces. That's correct. So that's just an example of a fiction, of a construct. And I wouldn't call it an illusion, because it's not really an illusion. It's just a useful construct that the brain is building to deal in an efficient manner
00:17:42
Speaker
with a wealth of information that needs to be processed efficiently in a very short period of time. You shouldn't need to look at 10 different faces to understand that people are furious with you and you should run away. You need to do that within the scope of several hundred milliseconds, and that sort of summary representation built on an average can do that for you. And that's precisely what we managed to extract and visualize from neural data.
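At its simplest, the ensemble percept described here behaves like an average in image space. A toy illustration, assuming faces stored as aligned grayscale arrays; the actual experiments are of course far richer than this:

```python
import numpy as np

# Six toy aligned grayscale face images, 128 x 128 pixels each.
faces = np.random.rand(6, 128, 128)

# The summary "fiction" is well approximated by the ensemble average,
# an image the participant never actually saw on any single trial.
ensemble_percept = faces.mean(axis=0)
```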
00:18:12
Speaker
That's really cool. The part that I'm missing here a little bit is, how do we get from, you're recording the data, people are looking at faces, you're recording electroencephalography data, functional magnetic resonance imaging data, and then your models are outputting images. What are the necessary inputs to those models that allow you to actually
00:18:38
Speaker
draw those pictures. In other words, what is the computation that allows you to make those pictures? I want to make sure I'm on board with this too, so I'll ask some questions as well, and I'm trying to make sure I understand your question too, Joe. I think about Jack Gallant's work at Berkeley. I don't know how much the stuff that you've done, Adrian, is similar to that, but Gallant has done some
00:19:03
Speaker
really interesting work on reconstruction of visual images from early visual cortex. And the inputs that he uses for those are tons and tons of visual images taken from YouTube. So is this sort of the direction that you're
00:19:23
Speaker
going, Joe, in terms of what sort of inputs these are taking? Is it from a large library of visual images, or? No, sorry, what I was getting at is that somehow you need to have a mapping that gets you from neural data to images. In order to do that, you need to seed that in some capacity, right? So the algorithm needs to know what the temporal-spatial relationships in the data are to the images that you're developing. I'm just wondering how that
00:19:52
Speaker
comes together? How is that developed algorithmically, I guess? Right. So I think that the questions are related, because indeed, Jack Gallant has been a huge proponent and advocate for this experimental paradigm, even when people were doubting as to whether this is neuroscientifically interesting or just a neuroengineering trick.
00:20:15
Speaker
What we currently do, compared to that earlier work by Jack Gallant, is that we're not necessarily targeting early visual cortex, and also currently we rely on EEG as opposed to fMRI. Also, we try to synthesize visual features directly from neural data rather than assuming that there's some type of basic primitive vocabulary that we have to rely on
00:20:44
Speaker
in order to perform image reconstruction. And that leads me into... So the vocabulary that you're speaking about would be a library of images and the distinction you're making, and I'm still trying to make sure I get a hold on this, with Jack Gallant, is he's recording from earlier areas of the visual cortex, so something that corresponds spatially to the external world, so things that are
00:21:13
Speaker
right or left on the external world are right or left on the early visual cortex. And then the representations that you're looking at are based on maybe a little more complex transformations of this kind of thing. So less like a map of the visual world and something that maybe approaches a little bit more semantics of what's being perceived.
00:21:36
Speaker
Right, yes, that's correct. So I think relying on information from the early visual cortex is very good, especially at the very

Synthesizing Visual Features: Challenges and Techniques

00:21:44
Speaker
beginning. So this image-reconstruction experimental paradigm has been around for about 10 years now, since 2006, when the first paper was published by Thirion and colleagues on neural-based image reconstruction of simple alphanumeric characters.
00:21:58
Speaker
So that's how everybody started in this direction. This was the right place to start, because if you look at early visual cortex, you know exactly what features matter: oriented edges, to simplify things a bit. But if you're trying to reconstruct things that are a lot more complex, and also if you're trying to reconstruct things from memory, not just perception, then you need to start to rely a lot more on high-level visual cortex.
00:22:27
Speaker
You don't really know what sort of visual features that part of the cortex relies on. They can be very interesting, very fine-grained, difficult to understand features. So then why make assumptions as opposed to just trying to synthesize that kind of vocabulary directly from your imaging data?
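One common way to derive such a vocabulary from the data itself is to compute the pairwise dissimilarity between the neural patterns evoked by different faces and embed them in a low-dimensional space with multidimensional scaling. This is a generic representational-similarity sketch with invented numbers, not necessarily the exact procedure used in Nestor's studies:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Toy neural response patterns: one feature vector per face identity.
patterns = np.random.randn(40, 5000)          # 40 faces x 5000 features

# Pairwise dissimilarity between the patterns evoked by each face pair.
rdm = squareform(pdist(patterns, metric="correlation"))

# Embed the faces in a low-dimensional "neural face space" whose axes
# come from the brain data itself rather than from assumed primitives.
face_space = MDS(n_components=10, dissimilarity="precomputed",
                 random_state=0).fit_transform(rdm)
```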
00:22:49
Speaker
And this is an enterprise in itself. It doesn't necessarily need to be married to image reconstruction. Trying to figure out what the visual cortex does beyond, let's say, area V4 is key to neuroscientists, to people interested in vision. So what we try to do is address both problems: to try to figure out how high-level visual cortex analyzes information,
00:23:17
Speaker
how it decomposes it into features, and then to use the brain's own visual features to reconstruct images. Because the assumption is, if we're using the brain's own features to reconstruct percepts, we're probably in a better position than if we pre-assume a certain vocabulary of features. And secondly,
00:23:40
Speaker
Reconstruction gives us a way to validate our results concerning what sort of visual features high level visual cortex is using. But ultimately, all of this is just a part of the procedure for reconstruction. So to get to Joe's question,
00:23:55
Speaker
What we do basically is take a pattern of neuroimaging data and we build a mapping function between properties of the neural signal and properties of images. And that involves machine learning and computer vision algorithms.
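In its simplest form, such a mapping function could be a regularized linear regression from neural features to image features, for example PCA coefficients of the face images, inverted at reconstruction time. A hedged sketch with made-up shapes; the published methods are more involved:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Toy training set: neural feature vectors paired with the face images
# (flattened to pixel vectors) that evoked them.
neural = np.random.randn(300, 2000)        # 300 trials x neural features
images = np.random.rand(300, 128 * 128)    # the matching face images

# Describe each image by a handful of PCA coefficients...
pca = PCA(n_components=50).fit(images)
coeffs = pca.transform(images)

# ...and learn a regularized linear map from brain to image features.
mapping = Ridge(alpha=1.0).fit(neural, coeffs)

# Reconstruction: new neural pattern -> coefficients -> pixel image.
new_pattern = np.random.randn(1, 2000)
reconstruction = pca.inverse_transform(mapping.predict(new_pattern))
```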
00:24:11
Speaker
To build that function, what we need is essentially lots and lots of data. The more, the better. Once we can approximate that mapping function with some degree of certainty, then we can try to throw new images at the participant. Because if I know what aspects of the neural signal map onto what aspects of an actual, let's say, color image, what configurations of pixels matter,
00:24:41
Speaker
then I can infer what image people are looking at once I have access to the cortical activity. That was an extremely thorough and good answer. I appreciate that. That was helpful, and I think you tied together my understanding of this with the way that Joe was thinking about it too. That helps a lot. Yeah, it's really cool, and I think it might be a good lead-in to talking a bit about
00:25:11
Speaker
where we see this going, you're able to get a pretty good sense of what people are seeing and not even just what they're looking at, but in some sense what they're actually perceiving from looking at this neural data. What do you see as the next really exciting topics in this direction?

Practical Applications in Healthcare

00:25:32
Speaker
Where do you see this going the next few years?
00:25:34
Speaker
Well, these results open up a whole range of possibilities for us. So one of the challenges is to identify those lines of research that are most rewarding and informative at different timescales. One of the things that we are currently looking into is testing patients, or individuals with visual disorders, with visual distortions, right? So an entire category of individuals has difficulties with face identification.
00:26:02
Speaker
They can look at themselves in the mirror and not be able to recognize themselves. Those are called congenital prosopagnosics. Some estimates place the number at two to three percent of the population. So it'd be very interesting if we managed to visualize those sorts of distortions. The phenomenology behind that
00:26:24
Speaker
condition is quite interesting. They report seeing features floating in space, so it'd be very interesting if we managed to reconstruct those percepts to see exactly what they experience when they look at our faces.
00:26:38
Speaker
In contrast, you have lots and lots of people with mood disorders and personality disorders who have absolutely no problem recognizing individuals, but they have difficulty recognizing expressions, and they have huge biases in terms of projecting or misinterpreting facial expressions and emotions. So for instance, individuals with borderline personality disorder or bipolar disorder
00:27:05
Speaker
can project an angry expression or contempt or disgust on a face that actually emotes nothing at all. So we'd love to be in a position to be able to visualize that kind of bias of misinterpretation.
00:27:22
Speaker
Because I think to some extent that validates their own experience, and who knows, maybe in the long term it can also be used as a diagnostic tool. Also, so far we've only talked about face recognition, but that's because a lot of my work in the past has focused on that. More recently, I've been doing things with other visual categories, such as words.
00:27:44
Speaker
We reconstructed words that people look at from EEG data, and we know that there's a considerable proportion of kids suffering from dyslexia. So if we manage to ascertain the content of their perception, for instance, there are reports that they see letters flipping position or swapped with each other, that would be very interesting to us.
00:28:07
Speaker
So again, that would validate, that would confirm, their own subjective experience, and it could also potentially be used as a diagnostic tool in the future. So again, I see a lot of potential from a healthcare perspective. That's really wild. So you're basically trying to get at what people are perceiving, and they're not even seeing, necessarily, the images that would correspond to that perception in another person.
00:28:37
Speaker
That's correct. Even a healthy brain constructs fictions of the environment and we're currently able to see that. So just to take this a step further, we'd love to see, to visualize illusions and to visualize biases in perception, to visualize, why not hallucinations, visual hallucinations. And yeah, I think that opens up an entire range of possible applications.
00:29:06
Speaker
I think the emotion bit is interesting here too, that you're sort of moving beyond just talking about how a visual image might be reconstructed, but you're getting some inputs from emotion circuits and maybe from all over the place to reconstruct something that's a little closer to a person's actual experience of it.
00:29:28
Speaker
Right. So this is one possibility. Currently we're also trying to perform reconstructions from memory. So as opposed to having people look at things, we ask them to remember things, and then we record the EEG data associated with that. And I think here's where a lot of the BCI applications come into play, because this has been a challenge for a number of years: trying to decode information from neural activity related to imagery.
00:29:59
Speaker
And most of that research has been done with motor imagery. Imagine yourself going right or going left. Imagine yourself playing tennis or sitting in a comfortable armchair, right? But more recently, people have taken an interest in decoding information from visual imagery or auditory imagery. Imagine yourself saying the word help or hearing the word help.
00:30:23
Speaker
and then trying to decode which word you heard. Was it help or was it map? We are already seeing some results pointing to the ability to discriminate, to decode information from visual imagery. The ability to decode, let's say, when the participants are looking at or imagining one word versus the other. But the challenge here is not to perform just decoding, but actual reconstruction.
00:30:52
Speaker
What we'd love to do is have one of these patients, or even healthy adults, visualize something with the mind's eye and then make it pop up on a screen. So that's the challenge, and I have high hopes for it, but there are a lot of things that come into play. Some of them are neuroscientific in nature, because the signal associated with memory is not necessarily the same as the signal associated with perception.
00:31:19
Speaker
Secondly, the quality of the signal is not as good. What's called the signal-to-noise ratio associated with imagery, with memory, doesn't quite match that found for visual perception. So then we need to find ways to boost the signal, and we also need equipment as sensitive as possible to avoid, to diminish, the impact of artifacts.
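One standard way to boost signal-to-noise, implicit in this discussion, is averaging repeated trials: uncorrelated noise shrinks roughly with the square root of the trial count. A toy demonstration with a made-up sinusoidal "response":

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 3 * np.pi, 500))    # the "true" response

def snr_after_averaging(n_trials, noise_sd=5.0):
    # Each trial is the same signal buried in independent noise.
    trials = signal + rng.normal(0, noise_sd, (n_trials, 500))
    residual = trials.mean(axis=0) - signal
    return signal.std() / residual.std()

# Averaging n trials shrinks the noise by roughly sqrt(n), so the SNR
# for 100 trials is about 10x that of a single trial.
for n in (1, 4, 25, 100):
    print(n, round(snr_after_averaging(n), 2))
```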
00:31:46
Speaker
And last but not least, I don't think this is going to work with everybody because if I try to imagine a face during my waking hours, I'm not doing such a good job. People are wildly different in their abilities for visual imagery.
00:32:00
Speaker
and I haven't been particularly fortunate with those skills. In my case, reconstruction based on memory and imagery probably will not be so successful. But if you're recording data from a visual artist, then the signal over there is a lot more robust and the visual experience is a lot richer. I expect a lot of difference in terms of individual variation.

Real-Time Decoding and Neurotechnology Development

00:32:25
Speaker
Here's a question about
00:32:28
Speaker
maybe when it might be satisfying to get the kinds of results that you're looking for. So say you're hooking this up to yourself and imagining things yourself. If you could get a real-time readout of the decoding of what it is that you're imagining, do you think at some point you could get a better
00:32:49
Speaker
visual representation than you can get across in words? That might be satisfying. In other words, I'm having a thought right now, say a particular kind of image. I can't necessarily describe that visual image very well via language, but if I could see it suddenly pop up in front of me on a computer screen, I could say, yes, that is what I was thinking about. Would that be a satisfying end point for you, or a satisfying direction to go?
00:33:18
Speaker
Yeah, that would be tremendous, because our ability to describe visual information in words only goes so far. And if we start to talk about patients, individuals with complete locked-in syndrome, then
00:33:34
Speaker
virtually all or most of what we have to go on is neural activity. So we're trying to convert that neural activity into a pipeline of communication. So yeah, there are a lot of possibilities in that way. And the applications are not just in healthcare; as you mentioned earlier, they can facilitate a lot of neuroforensic applications, in particular as related to eyewitness testimony.
00:34:00
Speaker
And attempts in that direction have already been made in the past, replacing, for instance, human sketch artists with automatic systems. So, for instance, E-FIT, which is widely used in the UK, attempts to do that, basically modifying different aspects of a face so that it matches more or less your memory of, let's say, a suspect or an assailant.
00:34:25
Speaker
What we'd love to do is facilitate that process even more. So you're thinking of, you're remembering in as much visual detail as you can, somebody's face, and then you can have that pop up on a screen. So that's very interesting to us. How realistic do you think a scenario like that might be?
00:34:46
Speaker
I think it's a matter of time. I don't think it's a matter of if. And the time, again, depends on a number of factors. Some of them have to do with the type of equipment that's being used. So I think there's a chase nowadays for identifying the best type of electrode, right?
00:35:04
Speaker
And nobody has found the gold standard yet, something to compare, for instance, with what we can achieve using gel-based electrodes. So there's a lot about manufacturing and about hardware. The other part is about the algorithms that we're using. And because a lot of this research is so new, so young, we still have no clue as to how far it can get.
00:35:27
Speaker
Third, it depends on the people that are using the technology, on the users. Because to be able to do that, you need to be able to focus. You need to keep in the mind's eye a particular person, and that can take a bit of training.
00:35:43
Speaker
So there are a lot of factors that come into play. From our perspective, one of the key challenges is not only boosting the algorithms, their ability to produce meaningful and robust data, but also shortening data collection. Because right now, we put the participant in the scanner for five hours, or we collect EEG data for three or four hours. That's just not going to cut it for practical applications. We need to shorten those sessions.
00:36:13
Speaker
We want to be able to place the headset on top of somebody's head, and within 10 minutes we intend to have the calibration more or less working. And to do that, I think one path that people have explored in the past with BCIs is transfer learning:
00:36:31
Speaker
training algorithms on the data of a group of individuals and then testing them on a new person. So these are just some of the things that we're trying to keep in mind, and we'll see how far we can progress by emphasizing one or the other. Maybe one last question on progress on this, and then we can move on to some more, maybe a few bigger-picture issues or speculations.
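A rough sketch of the cross-user transfer-learning recipe described above: pretrain a decoder on pooled data from prior users, then update it incrementally on a short calibration session from a new user. All names, shapes, and data below are invented:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy pooled data from previous users of the headset...
X_pool = np.random.randn(5000, 1500)    # trials x EEG features
y_pool = np.random.randint(0, 2, 5000)

# ...and a short calibration session from the brand-new user.
X_new = np.random.randn(100, 1500)
y_new = np.random.randint(0, 2, 100)

# Fit a decoder on the pool, then nudge it with the new user's few
# trials instead of recording hours of fresh data from scratch.
clf = SGDClassifier()
clf.partial_fit(X_pool, y_pool, classes=np.array([0, 1]))
clf.partial_fit(X_new, y_new)
```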
00:37:01
Speaker
In our last episode we talked a little bit about Elon Musk's Neuralink, one of the companies trying to engineer something that could potentially be a great tool for researchers to use to get a better kind of resolution. You had mentioned that one of the things that's going to create an eventual use for this is getting better electrodes and getting better hardware. So do you think this is the best approach to be taking right now?
00:37:30
Speaker
Well, there's a challenge and there's a race to identify not only the best EEG equipment, but the best portable neurotechnology out there. Is it EEG? Is it fNIRS? Is it something else? So people are just trying to identify whatever can provide good signal quality and is also fairly comfortable.
00:37:55
Speaker
And that's an important distinction to make, because a lot of the equipment with a good signal-to-noise ratio is also incredibly uncomfortable to wear. And in contrast, some of the things with a slick design that are relatively affordable, let's say a lot of the commercially available EEG, do not necessarily deliver the high signal quality that you need for some of these applications.
00:38:23
Speaker
And I think in the far-future, Elon Musk version of this, the solution is some kind of injectable device that goes into your bloodstream and forms some kind of mesh over your brain. And, you know, maybe that's realistic and maybe it's not, but there is that search for something that's comfortable and not noticeable in everyday life, and yet has good resolution across a large amount of the brain.

Non-Invasive Mind Reading and Philosophical Questions

00:38:51
Speaker
It's hard to comment on the feasibility of something like that, but maybe in the long term. I think the sooner people start working on that, the better, because it's going to take quite a bit of time to figure out the details. But I'm focusing on a somewhat more reasonable, more restricted timeframe. I'd like to be able to help develop, if possible, and enjoy this technology during my lifetime.
00:39:17
Speaker
Well, I think Elon Musk said he'll develop this stuff within two years, and we should have it directly. So it shouldn't be that hard. Yeah, so I place a great deal of trust in algorithms, in artificial intelligence, and also in the motivation of many manufacturers to come up with good solutions. But at the same time, I'm trying to be realistic.
00:39:42
Speaker
If what you're trying to do is not just what people call mind reading, but also mind writing, if what you intend to do is imprint patterns of activation, then you probably need to go in. You might need to open up the skull or inject things.
00:39:58
Speaker
But for the time being, I think that it's important to focus on reading because once we understand exactly what sort of neural patterns correspond to what aspects of the environment you're looking at or how you're processing information or how you experience certain things, then we're in a great position to do writing in the future as well. And if we try to accelerate the timetable for developing mind reading neural technologies, then
00:40:26
Speaker
it's so much easier to use non-invasive technologies like EEG. We can collect such data within the scope of days in dozens of individuals. If you're trying to do this with micro-
00:40:40
Speaker
electrode arrays and with implants, then it's difficult to secure the patients, to implement the safety protocols, the data collection. It's just so much slower to make progress on that front. And that progress is very much appreciated, but it's much easier to focus on things that are basically at our fingertips, such as scalp EEG.
00:41:09
Speaker
Yeah, that makes a lot of sense. As you start talking about the relationship between mind reading and mind writing, it makes me think a little bit about philosophical questions about if I'm reading out these percepts from your brain, how do we write those to my brain in such a way that the message is conveyed in a way that produces the result that we desire? It gets to an interesting philosophical question about what is that relationship between
00:41:38
Speaker
activity in your brain, the meaning of that, and then the activity in my brain and what I perceive as the meaning of that. I think, Rolf, this gets a little bit to the question you were asking in the last episode about whether it's ever possible to really have this instantaneous communication between brains. Yeah, I guess one interesting question about this involves a term that comes from cognitive science and
00:42:04
Speaker
artificial intelligence: the idea of multiple realizability. So the idea that, say, a particular thought or a kind of intelligence can be realized in an almost infinite number of ways, so that you could build brains out of biological material. You could build them out of silicon.
00:42:25
Speaker
You can build them out of whatever you like. But as long as there are the right functional relations, you can create a kind of intelligence. And if you think about, say, the perception of an individual face, so the way that I perceive a particular face might be different than the way that you perceive a particular face. And we can realize these things in completely different neural setups.
00:42:53
Speaker
Or I guess any kind of thought that you have. When you think about it as closer to an actual sensory representation, we might implement them in similar sorts of ways. So early visual information might be represented on early visual cortex in similar kinds of ways. But then as it gets processed further, it might be implemented in completely different ways. So how is it that you start understanding how to translate? How is it that you understand
00:43:22
Speaker
what a thought may represent, or what a set of neural activations may represent? And then how do you figure out how to transform that into something for another brain, in which it might be instantiated in a completely different way? How is it that you map one onto the other? That to me is kind of an interesting question.
00:43:43
Speaker
This starts out as a philosophical question, the problem of other minds: how do you understand the subjective experiences of another person? And it maybe turns into an actual kind of computational transformation. How do you transform
00:44:04
Speaker
the particular pattern of neural firing from one individual into something that's comprehensible and equivalent in another individual? Does this strike you as a problem that's worth addressing, or something that in the longer term might be a goal of this kind of research?
00:44:24
Speaker
Absolutely. I think that's a very interesting question, and it's actually not so much a question as it is a challenge. Then you have to identify the different levels of that challenge, and one of them is indeed in terms of algorithmic mapping.
00:44:40
Speaker
If you think about what we've been discussing so far vis-a-vis, let's say, image reconstruction, you're building mapping functions between the environment and the brain. What we are talking about now is essentially building mapping functions between multiple brains, but not at the level of anatomy, or not just anatomy, but rather mapping functions between functional patterns across the brains of different individuals.
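Mapping functional patterns across brains is close in spirit to hyperalignment-style functional alignment: assuming two subjects saw the same stimuli in the same order, one can look for a rotation that expresses one subject's response patterns in the other's space. A minimal Procrustes sketch with invented data:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Toy responses of two subjects to the same 100 stimuli, in the same
# order; columns mean different things in each brain, but rows line up.
brain_a = np.random.randn(100, 300)
brain_b = np.random.randn(100, 300)

# Find the rotation that best expresses B's patterns in A's functional
# space, so that B's activity can be read with A's decoder.
R, _ = orthogonal_procrustes(brain_b, brain_a)
b_in_a_space = brain_b @ R
```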
00:45:07
Speaker
Yeah, and one of the key problems with something like this might be figuring out what the referent is for each of these neural signals. In other words, the concept that these neural signals are representing might be the difficult thing to understand, rather than the neural signals themselves. Right. I mean, at some point, if we're talking about images in the world, we
00:45:36
Speaker
have something that feels like an objective standard. It feels like an objective standard to us because we can look at the picture that's reconstructed and we can say, yes, in fact, that looks like the thing that I saw. But if it's something that's more of an emotional or other qualitative type of experience, it's like a labeling problem. How do we decide that it was correctly represented?
00:46:04
Speaker
Yeah, so I think this is what Joe is suggesting: it's a great first step, because if you have a good mapping function that references objective stimuli, then you can build this entire process from the ground up, because ultimately representations of abstract concepts rely essentially on your visual experience. So once you build those sorts of functions for early visual cortex and high-level visual cortex, then you can move
00:46:34
Speaker
one step at a time further up within the processing hierarchy. Are you going to be able to completely map equivalent emotions across the brains of different individuals? I'm not sure. I don't know. I can surmise that you could potentially build approximations.
00:46:58
Speaker
Yeah, that's something that I care about, and this is exactly one of the newer projects that I've been working on for the last couple of years: trying to build mapping functions across the brains of different individuals, so we know exactly how what you perceive matches against what I perceive. And I think that paves the way to mind writing in the future, but
00:47:22
Speaker
this is just one layer of that discussion, of that challenge. The other one is how to make things possible, because there's a simplistic assumption that if I understand what pattern in your brain corresponds to a particular image of an individual, let's say, I can just activate the right neural population and that will happen. It's not that simple, because the brain works as a whole. If I only target a small part of the cortex by doing that,
00:47:51
Speaker
things may go horribly, horribly wrong. And for the last several years, many labs have tried to jolt areas of high-level visual cortex with neural activation, using what they thought were relatively fine microelectrode arrays, and
00:48:08
Speaker
frankly, as far as I know, all they managed to achieve is to destroy, to eradicate, visual percepts. You look at a face, and you flood an area with activation, and that just melts the face into nose and eyes and hair, just horrible, nightmarish images. So sort of in the way that transcranial magnetic stimulation
00:48:35
Speaker
mostly works as a temporary lesion or stops activity in a particular brain area rather than stimulating brain activity in that region.
00:48:43
Speaker
Yes, that's correct. So the understanding was that this is no longer TMS. Now we're doing this at a much finer grain level, but even that is not enough. And I suspect that no matter how good those microelectrode arrays get, unless you have a good understanding of how the brain delivers those types of experiences in a more holistic manner, you're not going to be able to write information to the brain efficiently.
00:49:09
Speaker
and that's a huge, huge challenge. For the purpose of mind reading, we can bypass that problem, because we can just target a specific area of the signal that's diagnostic and make inferences based on it. But if you're trying to influence visual perception or subjective experience of any kind, then just targeting a single region might not cut it in the long term. This is so interesting.
00:49:39
Speaker
One of the things that I wonder is a sort of a thought experiment

Neural Recording Technology: Implications and Privacy Concerns

00:49:42
Speaker
here. So imagine that you had a perfect brain recorder that could capture the activation of every single neuron, or whatever level of specificity you want, at every single possible timeframe, so that you could in theory understand every single thing that's going on in a brain. What would you do with that information? What would be the limitations that you'd still have?
00:50:07
Speaker
You'd certainly have a lot of number crunching to do to figure out anything that's going on. But what would you understand if you had absolutely perfect resolution?
00:50:20
Speaker
Well, I would say the first reaction would be being stymied, because there's so much you can do with that, but at the same time there's so much information there. Even when we collect fMRI data or EEG data, a lot of that information just gets thrown away,
00:50:37
Speaker
literally, or it's just filtered out, eliminated from data processing. In the face of so much information, one of the key problems is to identify your goal and identify the information that is relevant for that specific goal. What is the level of analysis? Are you looking at single neurons firing? Are you looking at population codes? Are you looking at a
00:51:04
Speaker
signal as coarse as what EEG is giving you? So you need to identify the right level of analysis, because otherwise you can end up chasing your tail in a very complicated maze. When a lot of people think of brain scanning, they may think of limitations because there isn't enough spatial resolution or there isn't enough temporal resolution. I guess the implication of that is that if you had
00:51:31
Speaker
perfect spatial and temporal resolution, you would really fully understand everything that's going on in the brain. But you're suggesting that even then, you're just getting information overload, and you still have to filter out a lot of that, and you wouldn't have a full understanding. Yeah, that's correct. Neuroscientists, including myself, are used to complaining a lot about the state of hardware, right? We love to do that. The hardware
00:52:00
Speaker
is never good enough, and the engineers need to come up with things that are much better and much easier to use, more sophisticated, and so on and so forth. But the reality is that,
00:52:11
Speaker
until recently, we used such a small percentage of the data that the hardware was providing for us, whether fMRI or EEG. We just didn't have the tools, the algorithmic tools, to sift through the data, identify what's useful, and do something meaningful
00:52:33
Speaker
with the entirety of the information delivered by those technologies. So yes, there's a great deal of progress to be made in terms of boosting spatial resolution, temporal resolution, maximizing signal-to-noise ratio, the quality of the signal. But there's an equal challenge to be effective at processing the information that's already available to us using any single one of those technologies. And to do that, we need
00:53:03
Speaker
a much better understanding of the data, and we need to be able to use effectively the right, the suitable, techniques for signal processing and, beyond that, for analysis, the right sort of algorithms. And that is an interesting challenge; it's one that defines my objectives when it comes to research. That's really cool. The research you're doing is really fantastic, and it's getting me excited about the future of this technology,
00:53:33
Speaker
and the present of this technology as well, actually. But while we're talking about it, it seems like it might be a good time to transition into my favorite part of the show: what hellish dystopias are you helping to create? So we want to make you feel guilty about the research that you're doing, Adrian. How are you contributing to the horrible futures that this could eventually lead to? Well,
00:53:59
Speaker
I think what people key on when I describe this technology in terms of horrible dystopias is loss of privacy.
00:54:07
Speaker
I've been talking about the possibility of visualizing the contents of your personal experience, and then the possibility of sharing that information using tools for a variety of social platforms. Then what happens? Do we have a complete loss of privacy? Are we going to be able to protect the information that's in our own heads? Several people have weighed in on this.
00:54:38
Speaker
My sense and my expectation is that we will do a lot more good than bad, primarily because none of this information can be collected without your consent. So assume that there's a gadget on the market that can reconstruct the contents of your subjective experience. First of all, you have to place it on your head before it can collect any information. You need to be able to
00:55:02
Speaker
focus, to be able to visualize things with the mind's eye. So if you don't cooperate, it's not going to work. You need to be willing to share that information with others, and that's also up to you. So there will be many, many levels of control still in place to protect your privacy.
00:55:23
Speaker
And a lot of agencies are actually working on that, trying to make sure that all this new and crazy tech respects and secures privacy. There are even more extreme approaches that have gone as far as suggesting that human rights need to be amended to protect the right to neural and mental privacy.
00:55:48
Speaker
And again, there's a bit of a debate on that, whether we need to inflate human rights to accommodate something like neural privacy. But I think it's worth the discussion now and in the not so distant future as well. So something like that would entail laws that would prohibit
00:56:10
Speaker
extracting information without the explicit consent of a person. Right. Yeah. And it's a question mostly of how and at what level you enforce those privacy laws, because some people have suggested that at this stage, they should be product-specific. So somebody will evaluate the risks of a specific product on the market, and then you can verify exactly
00:56:37
Speaker
and you can enforce privacy regulations in that manner, while others are suggesting that this needs to be secured within the scope of a much more general legal framework, such as that
00:56:51
Speaker
of human rights. In terms of developing the tech, I think we can also help a lot, right? So you can make sure that you set up, let's say, privacy filters, so the tech only extracts information that you're willing to share. So there are many, many different ways in which you can approach that. And it's just a question of finding what's optimal for most of us, or for all of us, hopefully, without impeding too much or
00:57:20
Speaker
without slowing down the development of that technology. One of the things that you were mentioning earlier, Adrian, was the use of this type of technique for, say, for example, eyewitness testimony. And what that immediately suggests to me in the context of this privacy debate is imagine a situation where you've got someone who's being interrogated as a terrorist, for example.
00:57:44
Speaker
and you're trying to identify other members of this terrorist cell. It seems pretty directly applicable: you could show pictures of the different individuals in a lineup, for example, ones that you suspect might be members, and do a task where you determine, yeah, this person does believe this is a member of the cell. Or you could even actually construct
00:58:13
Speaker
an image of what the person looks like without showing any picture of anybody else who's even a suspect. So in other words, you could read the person's mind to maybe ferret out a crime that hasn't even been committed yet. That just seems like one potential way that for all our best intentions, it would be tempting to use this to violate individuals' mental privacy.
00:58:36
Speaker
You're actually thinking of the exact same question that I was, Joe. And one response, to take it just one ludicrous step further: if you are that terrorist, you might even anticipate this kind of thing, and you might get into programming the person that you're sending to do it,
00:58:58
Speaker
so that they would respond to an incorrect face, or so that they would send false signals so that it couldn't be interpreted in that way. Yeah, well, no technology is foolproof. So yeah, we're thinking about crazy future technologies too, not just realistic ones. I think you're well grounded in this.
00:59:20
Speaker
Yeah, so it's a bit of an extreme example, but I think several groups have tried in the past to extract information in experiments that people were not necessarily so willing to disclose. And they claim that they managed to get such information, let's say somebody's credit card information, right?
00:59:43
Speaker
or date of birth, but it's not as easy as it might sound. You need people to cooperate to make this happen, and you don't need reconstruction for any of those scenarios. What people have been doing in the past is, as Joe mentioned earlier, a potential scenario, and a very realistic one: just present folks with a number of faces and record EEG signals,
01:00:07
Speaker
so they can identify exactly which faces are familiar to you versus not familiar. There are certain components of the EEG signal that can be associated with familiarity, but that doesn't work as well as you might expect. Secondly, again, just like you can fool a lie detector test... Yeah, it seems like the same basic setup as a skin-conductance lie detector test, just a little more sophisticated.
01:00:37
Speaker
Right, just like you can fool a lie detector test, you can fool any of these technologies. Because if you don't pay attention, if you don't do your job properly, then you can corrupt and confuse the signal to such an extent that, no matter how smart the algorithm is, you're making the job of whoever's in charge of this little test or experiment very, very difficult, very complicated.
01:01:02
Speaker
Yeah, I'm not sure whether this puts all those concerns to rest, but it shows that there's always going to be some sort of race, some sort of chase, some sort of competition between people who try to advance technology and people that want to beat technology at its own game.
01:01:21
Speaker
I think that's completely fair and it's not something that somebody will win eventually. You might gain a momentary advantage once you bring to the market a new technology, but then people find out exactly what its weakness is and then they learn how to deal with it and how to exploit it.
01:01:42
Speaker
Yeah, so on the positive side, what other sorts of potential applications might you see for these kinds

Future Applications and Optimism

01:01:50
Speaker
of systems? I guess we've mentioned a couple, but are there any other ones that might be exciting to you, or ones that you've thought about? Yeah, so I think there are a lot of potential applications in healthcare, in designing a neural-based means of communication for people who can't communicate
01:02:09
Speaker
verbally or using the hands, using a sign language. I also think that there's a broad range of potential commercial applications. How do you see yourself? How do you see your friends, folks in your social circle? A lot of teenagers suffer from body image problems. I think it can be used
01:02:34
Speaker
to inform ourselves about how we visualize, how we process, the visual world around us. So it can also become a great social tool. It can be integrated with social media, as I mentioned earlier. Or just therapeutic uses in general. Or therapeutic uses, yes. Yeah, absolutely.
01:02:56
Speaker
Yeah, I think there's also potential there for neuromarketing, right? You look at a product: what aspects of it do you pay attention to? What don't you? How do you perceive or misperceive different aspects of that? So yeah, I think the possibilities are wide open. It's just a matter of accelerating the timetable for this technology and making sure that we identify the most useful applications at this stage. Adrian, I think
01:03:26
Speaker
We've really gotten a lot out of this conversation. Are there any last points that we want to hit? Rolf, do you think before we let Adrian get on with the rest of his day? Well, maybe we could just set it up as an opportunity for Adrian. Are there any other additional points that might be worth making or ones that you'd like to make? Or do you think you've said, I mean, you've gone over an awful lot. Is there anything else that you'd like to add?
01:03:51
Speaker
Indeed, I think we covered a lot of topics today. I would just end by noting that there's a lot of promise for this technology. There's a lot of excitement, possibly overexcitement, but there are a lot of challenges, technical and theoretical, and all of them need to be addressed one at a time.
01:04:14
Speaker
I am one of the individuals putting a lot of time and resources into making things happen, but ultimately there are a lot of things that are not completely under our control. So I hope that within the next decade this technology will be flourishing and will be available to most of us, but that's just a guess, and it's a hope rather than a prediction at this stage.
01:04:40
Speaker
Great. Well, I really appreciate the level of thought that you put into this and the conversation that you've had with us. This has been fantastic. Thanks again for your time, Adrian. Thank you, Rolf. Thank you, Joe. It's been a pleasure chatting with you guys.