Introduction of Dr. Julie Hook
00:00:10
Speaker
Welcome to Cog Nation. I'm Joe Hardy. And I'm Rolf Nelson. On this episode, we have a special guest, Dr. Julie Hook.
Dr. Hook's Role and NIH Toolbox Overview
00:00:21
Speaker
Dr. Hook is a research associate professor in the Department of Medical and Social Sciences at the Northwestern University Feinberg School of Medicine and is a product manager of the NIH Toolbox. Dr. Hook participates in grant-funded research and directs the marketing efforts
00:00:38
Speaker
and strategic direction for the NIH toolbox. Julie, thanks for coming on the show. Really appreciate it. Well, thank you for having me. It's a pleasure to be able to speak with both of you and tell you more about the NIH toolbox.
Basics of Neuropsychology and Testing
00:00:50
Speaker
Great to have you here. Might be good to start off by speaking a little bit about what is neuropsychology and what is neuropsychological testing.
00:01:01
Speaker
Sure. My background is as a clinical neuropsychologist. Essentially, in neuropsychology, you assess constructs that are associated with brain functioning.
00:01:17
Speaker
such as cognition. So there's different elements of cognition like executive function, attention, processing speed, and neuropsychologists use tests to assess how somebody might be doing. So a typical referral question might be,
00:01:35
Speaker
I think I have a learning disability. And so a student would come in and they would take different types of tests of intellectual functioning and achievement functioning, and based on how they did, as well as other factors like history, general health, previous scores in school, a neuropsychologist can aggregate that data and make a recommendation and a diagnosis of a learning disability. Great.
00:02:06
Speaker
So it's trying to understand how we can appreciate brain function from a behavioral perspective. It's looking at performance on a variety of different tests and measures that are normed and standardized, and using that to draw inferences about what might be going on in someone's brain. Absolutely.
NIH Toolbox in Neuropsychology Research
00:02:29
Speaker
So yeah, from that perspective, the NIH Toolbox, the project that you're product manager for, is an effort to aggregate some of these types of tests in a very specific way. Is that a good way of putting it, or is that right?
00:02:47
Speaker
I can speak a little bit about neuropsychology and neuropsychological tests and how the NIH toolbox fits into that. A neuropsychologist would typically see patients perhaps for a clinical question, maybe as an adjunct to other neurological treatments that you might get or psychiatric treatments.
00:03:14
Speaker
And the NIH toolbox sort of fits into that space, but is more of a research tool. So a neuropsychologist might go to a commercial test publisher to get a test like the Wechsler Adult Intelligence Scale, use that type of test and assessment.
00:03:36
Speaker
and may not turn to the NIH toolbox specifically in its current form.
Development of the NIH Toolbox
00:03:43
Speaker
The initial impetus for the NIH Toolbox came out of the blueprint for neuroscience that was first established under the Obama administration, where there was a recognition that a lot of research studies were going on looking at neurobehavioral development,
00:04:04
Speaker
neurobehavioral problems. And one thing that was lacking was a measurement system that could be used nationally across studies to assess similar constructs in similar ways. And so the NIH Toolbox was developed as an answer to that problem. So if you're in New York and you want to assess executive function,
00:04:31
Speaker
with a group of older adults, and I'm in Florida, and I am running a similar study, one way to decrease the variability in results is that we would both use an NIH toolbox measure of executive functions so we could compare the results with a similar instrument, so decreasing the variability in response.
Using the NIH Toolbox in Practice
00:04:54
Speaker
Neuropsychologists may use something like the NIH Toolbox to screen patients or even as an adjunct to an existing battery that they would do. Their work tends to be a bit more in depth than what the NIH Toolbox offers currently, but we are undertaking further development to expand the tool. So some similarities, but also some differences.
00:05:23
Speaker
Okay, so that's a big project collecting all of these different tests, trying to make some standardized version of them that can be in some centralized location used for data collection. So how many people contributed to this effort?
00:05:41
Speaker
Yeah, you know, so the initial project started prior to my role at Northwestern and Dr. Richard Gershon was the principal investigator on the contract to develop the National Institutes of Health Toolbox Assessments. And he got about 250 scientists across the country to sit down and talk about what are the most important elements to assess
00:06:09
Speaker
for patients or participants with any sort of neurobehavioral problems. And those would be doctors and people all throughout the field? Yes. All neuropsychologists.
00:06:24
Speaker
You know, not all neuropsychologists, but certainly some. There's four different domains in the NIH toolbox. There's cognition, there's emotion, there's motor, and there's sensation. And so experts in each one of those domains were asked to participate. So physicians, physical therapists, occupational therapists, speech and language pathologists, and researchers like yourself, cognitive psychologists,
00:06:53
Speaker
So all of these different tests, is there a common principle that gets used in order to have a test be included in the set that you use? For example, does it need to have sort of a level of validity that might be established? Also, I imagine some of the tests can't necessarily be too long if they're going to be used on an iPad.
Criteria and Challenges in Test Selection
00:07:17
Speaker
Are there these kinds of limitations? Is there some sort of standardization that you need to do for all of these tests?
00:07:22
Speaker
Yeah, I heard two separate questions that I wanted to answer for you. One is how were the tests identified and included? And the second, what would contribute to the brevity of a test? And so starting with the first, how these tests were selected. So these hundreds of experts who were experts in their specific domain, cognition, emotion, sensation, or motor,
00:07:52
Speaker
sort of sat down and thought about what measures are out there, what measures already exist. And some of the criteria they used were a measure that was applicable across the age span because the NIH toolbox assesses people from 3 to 85, so they wanted to find measures that could be used across the age span.
00:08:16
Speaker
What measures didn't have intellectual property constraints? In the world of psychological test development, some tests have IP, or intellectual property, associated with them, so we needed to get away from that.
00:08:31
Speaker
And we needed to find tests or paradigms that didn't have an IP constraint associated with them. And a point that you made was psychometric soundness: tests that were both reliable, so you knew over time you were testing the same thing, and valid, so you were actually testing the construct you said you were testing.
00:08:51
Speaker
So brevity, ease of use. And I can talk in a bit about how the NIH toolbox accomplishes that. But at the outset, they looked for tests that would be brief. And then applicability across diverse settings with different groups.
00:09:09
Speaker
And last but not least, again echoing that idea: we wanted to be able to have assessments that could go as low as three years old and upwards, and now past the age of 85, where we're doing some additional validity sampling. So really across the lifespan.
00:09:26
Speaker
With all those considerations, they were able to find thousands of tests, downward selected from them, and then did additional research studies with different
00:09:43
Speaker
subset samples, where they administered the tests they were looking to include in the NIH Toolbox alongside gold-standard measures from existing psychological research, to see which measures were best in terms of validity and reliability for inclusion in the NIH Toolbox. So they did some
00:10:05
Speaker
instrument, construct, and content validity work that way, and then downward selected from there. Thousands of research participants were used in the development of the NIH Toolbox, both in these validity studies to select the measures and also in the normative sample.
00:10:26
Speaker
One question that I had: you mentioned that you have to select tests without IP, and I wonder if there were any tests that you wish you could have had, because you have to do your own independent verifications of these tests. Some that are used very standardly might have licenses associated with them. Did you run into any of that?
00:10:50
Speaker
You know, it's interesting that you say that. I used to work for a commercial test publisher
00:10:58
Speaker
And I know that the IP issue can certainly be a bigger deal when it comes to large research studies. And one test that's currently in the news, the MoCA, the Montreal Cognitive Assessment, wasn't so much a license issue as it was a content-exposure issue, with the former president, when he had spoken about the different items that he
00:11:26
Speaker
had undergone in terms of the assessment. The MoCA test developers then implemented a certification program to protect that content. So usually in test development, the IP has to do with the content of what you are
00:11:48
Speaker
testing, so the actual test questions themselves, and then your underlying data. So we at the NIH Toolbox collected 4,897-ish, just shy of 5,000 people for the large normative sample. That data for our test is more in the
00:12:13
Speaker
common use area, but for test publishers who are more commercially based, that would be IP. You couldn't just simply aggregate that data and use it in your own kind of competitive way to develop a test.
00:12:31
Speaker
I'm not sure that there was any one specific test that the NIH Toolbox group wanted that they couldn't include, because, as I understand it (my caveat, in parentheses: I am not an attorney), you can't
00:12:53
Speaker
copyright the paradigm. So if it was a go/no-go task, you know, where you see a nine followed by a seven, you hit the button, but if a nine is followed by anything else, you wouldn't hit the button.
00:13:08
Speaker
I believe you can't copyright that. You could copyright images, right? In a kid's test, if you had a particular palm tree followed by a coconut and you hit the button, those images of the palm tree and the coconut could be copyrighted, but I don't think that the paradigm itself could be. So the NIH Toolbox uses its own imagery.
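The go/no-go logic described here can be sketched in a few lines of Python. This is purely an illustration of the paradigm, not NIH Toolbox code; the digit rule mirrors the example above, and the simple hit/false-alarm scoring is an assumption for illustration:

```python
# Simplified go/no-go scoring. A "go" trial is a 9 followed by a 7
# (press the button); a 9 followed by any other digit is a "no-go"
# trial (withhold the press). Each trial is ((first, second), pressed).

def score_go_no_go(trials):
    """Count hits (correct presses) and false alarms (presses on no-go)."""
    hits = false_alarms = 0
    for (first, second), pressed in trials:
        is_go = (first == 9 and second == 7)
        if is_go and pressed:
            hits += 1
        elif not is_go and pressed:
            false_alarms += 1
    return hits, false_alarms

trials = [((9, 7), True),   # go trial, pressed: hit
          ((9, 3), True),   # no-go trial, pressed: false alarm
          ((9, 7), False),  # go trial, missed
          ((9, 1), False)]  # no-go trial, correctly withheld
print(score_go_no_go(trials))  # (1, 1)
```

Swapping the digit pair for a palm tree and a coconut, as in the kid-test example, changes only the stimuli, not this underlying paradigm.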
00:13:35
Speaker
It uses its own normative data, but paradigms are things that you'll see in other tests on the market. And so in that sense, things that you see in the NIH toolbox by and large are things that you'd be familiar with if you're familiar with cognitive or motor testing or any of the other domains, really.
Technology Choices for NIH Toolbox
00:14:00
Speaker
So you guys decided to use the iPad for this. Do you want to talk a little bit about the use of the iPad for the NIH toolbox? Why you chose that and how it works in that context?
00:14:14
Speaker
Absolutely. So originally, the NIH toolbox was developed for a web platform. And there are a variety of factors involved with the scoring algorithms on the NIH toolbox test. I'm focusing a bit on the cognition test, but I'm happy to talk about any of the other domains as well.
00:14:39
Speaker
just let me know. Among the cognition tests, there are at least three that use timing in the algorithm. And when you connect to a system via the web, right, you have the internet connection, you have your platform speeds, you have all these factors that can
00:15:02
Speaker
go into your recording of a response at millisecond timing. And the iPad is really great because our app can be downloaded to the iPad, and when you administer the test to the participant, that millisecond-level timing is right there on the device. And Apple does a great job with that sort of
00:15:28
Speaker
precision and timing. And that was one factor: our tests could run well there. Apple also has a number of security measures built into their operating system that we also liked.
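The timing concern raised here, that network round-trips can contaminate millisecond reaction times, is why timestamping on the device matters. A generic sketch of the on-device approach (plain Python with stand-in callbacks, not NIH Toolbox code):

```python
import time

# On-device reaction-time measurement: stimulus onset and the response
# are timestamped with the same local monotonic clock, so network
# latency never enters the measurement. time.monotonic() is also immune
# to system-clock adjustments, unlike time.time().

def measure_reaction_time(present_stimulus, wait_for_response):
    onset = time.monotonic()
    present_stimulus()          # show the stimulus to the participant
    wait_for_response()         # block until the participant responds
    return (time.monotonic() - onset) * 1000  # elapsed milliseconds

# Stub handlers standing in for real UI callbacks; the sleep simulates
# a participant taking ~50 ms to respond.
rt_ms = measure_reaction_time(lambda: None, lambda: time.sleep(0.05))
```

Over a web connection, by contrast, the server's timestamps would include variable transit delay, which is exactly the noise a local measurement avoids.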
00:15:43
Speaker
And Apple also allows our team to really focus on the scientific development. If you were to get the NIH Toolbox, it's Apple that handles the exchange of the technology and the cost, and our team focuses more on the development and maintenance of the tests.
00:16:07
Speaker
Understood. So it's nice that there's kind of a standard hardware that you can rely on that has good timing. That kind of brings up a question that I think just to maybe bring it home for some of the listeners, what these tests are like. Could you maybe give some examples from the perspective of a person taking some of these tests?
00:16:28
Speaker
what it would be like to be given one of these assessments and a little flavor for what we're really talking about here. Oh, sure. So I'll first say we do have YouTube videos on administration of these tests. If anyone listening is curious to see what they look like, we have administration videos that you can see them in action. And we also have other resources on our website
00:16:57
Speaker
where you can learn more about the tests themselves. In the cognition domain, there are seven core tests, and for language, one of the tests we would give is a picture vocabulary test. You, as the participant, would see four images, and you would hear the iPad say a word, and you would have to select of those four images what was the word you heard.
00:17:28
Speaker
And part of how the NIH toolbox works is using computer adaptive testing to administer these tests more quickly than you could a paper and pencil version of them. So, for example, in the picture vocabulary part, you hear a word, you select of those four images. If your response was accurate,
00:17:54
Speaker
the next question gets harder. If your response was inaccurate, the questions start to get easier. So the beauty of the iPad as well is it allows for this digital interface and the ability to run an IRT or item response theory based test with computer adaptive testing.
00:18:17
Speaker
So in about three minutes, this picture vocabulary test can find your optimal vocabulary score, as estimated by the test.
00:18:31
Speaker
So one question here. Do you have any guidelines about optimal conditions for giving these tests? I'm imagining a test could be administered in a million different environments. You could be taking surveys in a crowded place, or you could have a quiet room where people are doing it. Is there any concern about
00:18:58
Speaker
whether you get similar results across the range of settings they might be used in? That is a great question. We do. We have pretty precise guidelines on standardized administration. And the NIH Toolbox tests themselves were developed to be
00:19:22
Speaker
easy to administer, but there are still guidelines you'd have to follow. So you would want the participant in a quiet room, with the iPad set on a stand in front of the participant and you as the examiner sitting next to them. And different tests require different levels of involvement from the examiner. So, going back to the picture vocabulary test,
00:19:50
Speaker
the examiner would walk them through elements of how to respond on the test, and then the iPad would start talking to the participant and giving them the live items. So you would be there to answer questions, the person would be in a quiet room, and you as the examiner would guide them through that process. And to your point,
00:20:18
Speaker
Even in the most ideal situations, things happen. A car alarm goes off during the administration of a test, and results could be invalid. And so you as the examiner would have to make the determination whether you wanted to readminister the test, give them a different type of test, or how you'd want to handle that. But certainly,
00:20:47
Speaker
experiences or things that go on in your environment outside of the testing could negatively impact results, and so you'd want to be aware of that from an examiner perspective.
00:20:56
Speaker
Well, I mean, this would apply to any psychological test that you would give, I suppose, so it wouldn't necessarily be anything special. One thing, I suppose, is that you do have an advantage in that some of the things it sounds like you are standardizing might not be standardized in some other tests.
Standardization vs. Human Engagement in Testing
00:21:16
Speaker
So if you have the iPad give instructions, maybe that would be more standardized than an individual giving instructions. You have less variability.
00:21:24
Speaker
So, I can see both sides of that. Yeah, you know, I think there's some debate in the test administration world of how standardized is standardized and then the human connection, particularly if you think of a young child who
00:21:45
Speaker
may have attention deficit disorder or other problems, a school psychologist might say, standardization is great, but if I can't get my participant engaged, it doesn't matter how standardized it is.
00:22:03
Speaker
the argument against strict standardization is allowing for that human engagement factor. And a lot of psychologists or examiners, I think, like the reading and the engagement factor just to sort of pull the participant back in. Who is eligible to administer these tests?
00:22:23
Speaker
So there are certain standard guidelines set forth by the American Psychological Association and educational standards for people to use different psychological or neuropsychological tests. The guidelines fall into several categories and
00:22:46
Speaker
The cognition tests in the National Institutes of Health Toolbox are under what are called C-level qualifications. Essentially, they're trying to protect those questions themselves from overexposure, again harkening back to the MoCA test example in the news a number of months ago.
00:23:10
Speaker
Overexposure of those questions means the test would become invalid. So, for example, if you wanted to take a test of intelligence,
00:23:24
Speaker
and you wanted to do really well, a good way to do that is to know the questions ahead of time. And so if anyone had access to intelligence tests or different cognition tests, they could then play the system as they would like. That might be scoring really well to join Mensa, or conversely, something I see more often as a clinical neuropsychologist: the idea that
00:23:53
Speaker
you were in some sort of accident, a car accident or something happened, and you perhaps wanted to make yourself look more impaired because you're now filing suit, and so knowing the answers ahead of time
00:24:08
Speaker
gives people an unfair advantage one way or the other. So the guidelines are set up to limit who can have access to those items, but they're also set up to limit who knows how to interpret them, right? So it may take less training to follow the standardized administration of the test, but then when the test results come,
00:24:33
Speaker
there's concern about whether those test results would be interpreted in a way that would be best suited for the participant or the patient involved.
00:24:44
Speaker
For any one test, you can get many, many scores. You can get the raw score, a normative score, an age-corrected norm score. You probably know there are T scores and standard scores and Z scores. And, you know, you can go on and on with all the test scores you can get, and you get a report with all these numbers. Some may look very good and some may not. And if you're not trained on
00:25:13
Speaker
how to use and interpret those, people can make erroneous assumptions, and it could do harm. And that's part of what the qualifications are for: to protect the test security, but also to protect the people taking the tests, so they don't get information that wouldn't be suited for them, or have things misinterpreted for them.
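The score types listed here are all linear rescalings of the same underlying z score, which is part of why an untrained reader can misread a report. A quick illustration (the norm mean and SD below are made-up values for the example, not actual NIH Toolbox norms):

```python
# Converting one raw score into the common normed metrics mentioned
# above. Each metric is the same z score on a different scale.

def normed_scores(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd   # z score: mean 0, SD 1
    t = 50 + 10 * z                   # T score: mean 50, SD 10
    standard = 100 + 15 * z           # standard score: mean 100, SD 15
    return {"z": z, "T": t, "standard": standard}

# Hypothetical norms: mean raw score 25, SD 3 in the normative sample.
print(normed_scores(raw=28, norm_mean=25, norm_sd=3))
# {'z': 1.0, 'T': 60.0, 'standard': 115.0}
```

The same performance thus appears as 1.0, 60, or 115 depending on the scale, which is exactly the kind of report a qualified examiner is trained to read in context.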
00:25:39
Speaker
Makes sense. So yeah, the question of overexposure, you're mentioning the MoCA. The context there, as I understand it, is when former President Trump was given a cognition test, the MoCA, which, if I'm understanding it correctly, is a measure that looks at whether someone might be having
00:26:08
Speaker
age-related cognitive impairment, for example. Different cognitive impairments, but age-related might be one. And he wanted to make the point that he was in no way impaired, and that was the famous "person, woman, man, camera, TV" example, right? Yeah. So I guess the issue there would be, if someone heard him say that,
00:26:34
Speaker
then if they were going in to take a MoCA test, for example, they might think ahead of time, you know, I'll just think up a bunch of words that I can remember, and I'll memorize those ahead of time and then score better. Something like that, right?
00:26:52
Speaker
Yeah, you know, usually in a screening test like the MoCA, there's a set number of words. In the example you gave, I think there were five words. And when you memorize those words in advance,
00:27:10
Speaker
you can score really well. So you come into the office and the clinician says, I'm going to give you this test. And you say, oh, well, what test are you going to give me? And they say the MoCA. And you think, oh, good, I already know those five words. And so all of a sudden, on a test where you could get a possible 30 points, you've just inflated your score by five points because you knew
00:27:39
Speaker
in advance what those five words are. And limiting exposure to test content would mean you or anyone else going into a setting to take a test, you'd all start off at the same level. You wouldn't have the advantage of knowing those five words. You would either remember them or not remember them based on your ability to remember them in that setting.
00:28:09
Speaker
So I think this is a really fascinating area or topic to think about and something maybe I haven't thought as much about. But is there a concern? I mean, another concern that you sort of might have alluded to is that if there's an advantage for someone to scoring higher on a particular test, then there might be an incentive to maybe search online or find that test somewhere so that they can beat it, just like I suppose in academic testing, right?
00:28:39
Speaker
I guess that's part of the same exact concern, and is there any... Does that happen, I guess, is one of the questions. Does it happen, yeah. Yes. I would say that it does. I don't know that it happens uniquely to the NIH toolbox, but just in a larger sense of psychological testing. Sure. Psychological testing doesn't just happen
00:29:06
Speaker
in a clinic, where some of the examples I gave you are more health-related or medical-diagnosis related. In industrial-organizational psychology, there are a lot of tests as well that you might take for employment.
00:29:24
Speaker
And you can imagine how high-stakes a lot of that testing is. Right. And that's the example I had thought of right away: if you're applying for a job, results from something like this would be a way to cheat, I suppose. Yeah. And just like, you know, there's all the test prep for the SAT and the GRE and everything. It's an arms race, right? I suppose. Yeah. You know,
00:29:54
Speaker
I think if you Google it, you can probably find people who would prep you. You're going to go in and take these standardized tests in whatever setting you need to take them in. If it's employment, some sort of medical legal case, whatever it is, there's probably some sort of test prep that you can look at.
00:30:20
Speaker
What the psychological test developers are trying to control against is your getting the exact items, right? They want those to be novel to you when you come in, so you can't just come in knowing all the answers to the test. Yeah, that makes a lot of sense. It's a very specific case, where people are trying to game the system.
Research Applications of NIH Toolbox
00:30:46
Speaker
But I think it speaks to a larger set of issues in neuropsychological testing in general, which is when, for example, you're trying to assess someone: do they have some sort of disease? Have they suffered a concussion? Whether it's age-related dementia or, in children, ADHD or learning disabilities, in all these cases.
00:31:16
Speaker
You know, there are different situations, and a lot of times you're looking for a change. So, for example, in the case of a head injury, whether it be concussion or otherwise, you want to see if someone's performance is worse than it was previously, but you don't know where they started from.
00:31:38
Speaker
And so that's always kind of a big challenge within the world of neuropsychology, isn't it? Understanding the relationship between where you would expect somebody to be and where they are right now and what that means. Do you want to speak to that a little bit? Sure.
00:31:58
Speaker
Clinical neuropsych... I could see the question you're bringing up also being relevant in a school setting, but maybe more so in clinical neuro. There's a variety of things that you could do to better understand what somebody's
00:32:18
Speaker
premorbid, or before-accident, functioning was like. There are certain types of tests, called hold tests or crystallized intelligence tests, that you could administer. Those are tests of stored knowledge that don't typically decline after a head injury or any of these other types of things we've talked about, that is, an acquired injury or an acquired change.
00:32:47
Speaker
But other things you can do are look at records, looking at a school-aged individual, look at their school performance. Were they a straight A student? Were they failing out? Those types of things would give you a sense of where somebody was starting and what the impact of this acquired disease or injury state had on them. You can also, as I mentioned,
00:33:15
Speaker
give hold tests or tests of crystallized intelligence, like the vocabulary test we talked about earlier. There are also a lot of irregular-word reading tests that people use to make that kind of assessment as well. So there are different ways to gauge what somebody's functioning was prior to the injury.
00:33:41
Speaker
So just to make a comment on where Joe might be coming from with that: we previously had a couple of shows on this effect called the Frey effect, where microwave radiation can cause the perception of high-frequency sounds. And one of the issues was, you know, this is something that was suggested to be at work at the Cuban embassy.
00:34:08
Speaker
There were some events a few years ago, and one suggestion was that they were due to microwave radiation. And one of the difficulties in concluding that there was, in fact, brain damage in the first place, or that there was some white matter tract damage, was that there was no previous record for these individuals. So for the diplomats and people at the Cuban embassy,
00:34:38
Speaker
there was no way to tell what their brains looked like beforehand. So it was more difficult to determine if they had suffered from something like this radiation. That's just to put into context the kind of thing we might be thinking of.
00:34:50
Speaker
Right. I think that's exactly right. You find this finding: is it something that was pre-existing in this individual? High-frequency hearing decreases with age, typically. Is it just normal variation in aging? Or is it this effect of
00:35:14
Speaker
what's happening in the environment that you're trying to assess. And sometimes they're very challenging and hard to tease apart. And there's no clear pathognomonic sign that would point you to one versus the other. And then I think that's where you get into a lot of the interesting discussions and opinions on what may be leading to that deficit or what caused the decline.
00:35:44
Speaker
Can you give some examples of use cases in research, where people are using the NIH Toolbox in their studies? Sure.
00:35:59
Speaker
There are large national studies using the NIH Toolbox. One of them we refer to as ECHO; that's an acronym that stands for Environmental Influences on Child Health Outcomes. And it follows very young children, from as young as three years old upwards, to see
00:36:23
Speaker
what effects environment may have on their neurocognitive development. That's an ongoing research study right now. You know, certainly the impact of COVID has hindered to some degree the ability for studies to continue, but that study is one that uses the NIH toolbox. There are also some other
00:36:49
Speaker
younger sort of pediatric focused national studies. And then at the other end of the lifespan, older adults, the NIH toolbox is used in a number of research studies looking at older adults who may be suffering from mild cognitive impairment, Alzheimer's, dementia, or just the effects of
00:37:14
Speaker
healthy aging; you know, as we all grow older, our processing speed becomes less quick, and some other changes just sort of happen with aging. But those are both ends of the lifespan. There are also clinical trials that use the NIH Toolbox, and a variety of other outcome studies: studies that I'm aware of, and then studies that I'm
00:37:41
Speaker
pleased to see get published in PubMed or Medline, using the NIH Toolbox as an outcome measure in a cancer research study, or in cardiology. And as I mentioned, there are also some studies using the NIH Toolbox with COVID-related symptoms.
00:38:01
Speaker
Yeah, it's interesting that you mentioned earlier that some research was being hindered by COVID-related factors.
Innovations and Accessibility Efforts
00:38:10
Speaker
And I was thinking, well, an iPad must be a great way to collect data during COVID, although you would usually be doing it with a doctor right next to you. Has there been any attempt to do remote administration for, say, patients who have an iPad
00:38:31
Speaker
and wouldn't download the whole toolkit, but would be able to take something at home on a regular basis? That is a great tee-up for me to say yes. We have online right now some screen-sharing guidelines that I wrote with colleagues back, gosh, you know, a year ago now in March. Initially I had thought,
00:38:56
Speaker
at the time, right, with COVID and the social distancing requirements, maybe we were looking at three, maybe six months. And so I made some tests a bit easier to administer over Zoom, or whatever screen-sharing platform you use. And I
00:39:19
Speaker
had set those up as a rapid response, to your point. The NIH Toolbox is examiner-administered: usually, you're sitting next to your patient or participant. Now COVID put a halt to that. So what could we administer via screen share? We went through the tests and decided what could be administered and what couldn't,
00:39:41
Speaker
and then made some tests easier to administer in that paradigm. But the way we had initially launched it, the examiner had the iPad sitting in front of them. And if you were the participant, you would say, you know, it's picture number four that represents the word, in the picture vocabulary example I mentioned earlier. You hear a word, there are four images, and now you, the participant, instead of hitting the screen, would tell me which box to hit.
00:40:11
Speaker
And so that was our initial push. But as I've seen remote healthcare taking greater shape now, more and more people are
00:40:26
Speaker
looking to continue some of the remote health assessments that were spun up because of COVID. It's really worked better for some populations, and they would like to keep that going. And also, given the longevity of COVID and the social distancing requirements, we're devising something like what you mentioned. It would be an app that you as the participant could download, and
00:40:55
Speaker
the examiner would have an embedded bidirectional video and audio feed with you. So I could watch you, and I would know if you were having trouble; I would know if it was really you and you didn't just hire a stand-in to take the test for you. And then I would be able to see how you were responding on the screen and
00:41:20
Speaker
how that was all going. So we're in beta right now for that, but we are looking to have it as a companion to our examiner app, as an answer to this need for remote assessment. That is fascinating, because that is certainly something that I know a lot of people would find useful. And I did not know that was something being worked on. So that's fantastic.
00:41:48
Speaker
Yeah, actually this is in part for the study I mentioned previously, ECHO, the Environmental influences on Child Health Outcomes study. They have a whole remote working group, and this effort was in part being done to assist that research study. And then of course, we're looking to expand it out to all of our users who would benefit from it.
00:42:15
Speaker
So this may be something that was accelerated by COVID and might not have happened as quickly otherwise. Absolutely. Yeah, I think so. You know, when you think about everything over the last year or so, at least for me, it's easy for my mind to go to the negative things. But I think that the acceptance of
00:42:36
Speaker
remote assessment in healthcare, both from insurance companies and from healthcare workers, and us really being pushed to consider that, also in terms of remote work, I think could have some really positive effects for people who may have a really hard time getting to the medical center.
00:42:57
Speaker
There are people who would participate in your study, but the barrier to entry is so high: they can't drive themselves, they can't find parking, or maybe there is no parking. If you can do things remotely now, it opens the door for these people. It's a great way to increase access for a lot of people, yeah. I think so, yeah. That sounds like a really good direction. What else is going on with the NIH Toolbox
00:43:27
Speaker
that's forward-looking. What are you working on next?
00:43:31
Speaker
I mentioned the remote companion participant app. There's also work being done called the Mobile Toolbox. Those are self-administered tests; our paradigm right now is examiner-administered. We talked about the C-level qualifications that you would need to gain access to the cognition tests. I'll just add the side note that none of the other tests require that. If you're interested in the NIH Toolbox and
00:44:01
Speaker
you don't meet that qualification, it doesn't mean you couldn't administer the hundreds of other tests that are part of it. So, just as an aside. But the Mobile Toolbox is self-administered. It will be smartphone-adapted tests that are parallel to NIH Toolbox tests, but not exactly, right? If you just even
00:44:26
Speaker
repeat these paradigms, you'll get better at them, right? There are always practice effects in testing. And so they're similar, but not identical. And what would happen is that you as the researcher could assign self-assessments to people to take at home on their iPhone. This is work being done at Northwestern, but in conjunction with a company called Sage in Seattle. And, you know,
00:44:56
Speaker
they're going into validation for this project over the summer, and it should become more accessible to interested people in the next year or so. So this is something that would be available generally, and anyone could use it without needing to request credentialed access?
00:45:20
Speaker
I'll give a qualified answer: I believe that's true. I think they're still working out the deployment model for how you would gain access to it. I do know that Sage, the company that is partnering with us,
00:45:39
Speaker
they'll have a research platform. If you wanted to work with them, I would assume you would need to set up some kind of login. I'm not sure that there's necessarily a credential review, but you would have to set up something to then be able to deploy the tests through them. Now, if you just wanted to gain access to the tests
00:46:07
Speaker
outside of that, you could always contact us at Northwestern, and we could help facilitate that. But to your point, I don't believe there are any specific qualifications that would need to be reviewed for that. Sounds like a useful tool for undergraduate research.
00:46:30
Speaker
Yeah, and also, we were talking about this barrier to entry for research participants when you follow them at home. One of the things we're considering right now with this remote companion app is that it has to be on an iPad, because for the stimuli in the cognition tests we want to maintain a similar size. Size matters for some of the timing
00:46:57
Speaker
of how quickly you can respond or visually see things. We want to control for that. But think about how many people have an iPad; that's a class issue. Yeah. Yes. And so we're really thinking about how ubiquitous smartphones are, and certainly the Mobile Toolbox has greater accessibility because of that. We're working through other ways to let people gain access to iPads. You know, sending them to research participants is one thing,
00:47:27
Speaker
but then there's the issue of needing connectivity for this remote participant app. So we're still in beta and working through some of those things, because we want to make things accessible and want more people to have access. But the Mobile Toolbox and smartphones, really, you know, everyone
00:47:51
Speaker
pretty much has a smartphone, right? I think there was a study I read saying that the only technology adopted faster than the smartphone was television. Quite ubiquitous at this point. Is there anything that you wish we had asked, or that you would like to talk about, that we haven't touched on yet?
Future Directions and Pricing Model
00:48:10
Speaker
I think that the NIH Toolbox is a great tool. It's accessible, as you mentioned, through the Apple App Store. The only thing I would mention is other ongoing work we have on something we internally call the Baby Toolbox: we're working on using eye tracking in infant assessments.
00:48:37
Speaker
And again, it would be on the iPad, but it would extend the age range down below the age of three, offering assessments for much younger individuals. So our goal, really, as a nonprofit, is to make the highest quality tests that we can and make them available to as many people as we can.
00:49:03
Speaker
Great. The eye tracking stuff sounds promising. And I know Joe said that was the last question, but I have one more. So, okay, you can get it on the App Store, and I believe the yearly cost is about 500 bucks. Do we get a podcast discount? You can... you can email me. I had to ask. I had to ask.
00:49:26
Speaker
But it's $500 for the year. And if you are familiar with Apple's
00:49:35
Speaker
business model, they do everything through an Apple ID. So if you use the same Apple ID and you buy the subscription for $500, you can use it on up to 10 iPads under that same Apple ID. So it's really $50 per iPad for the year. And really, there are hundreds of assessments in there. I've focused a lot on cognition, but there are motor assessments.
00:50:00
Speaker
There are sensation assessments, including an olfactory measure, which is very salient now with COVID and loss of smell. There are other sensation measures that are included in there with vision and hearing and pain. And then under the emotional health domain,
00:50:22
Speaker
there's a variety of what we term patient-reported outcome measures. These are questionnaires that you can administer to assess people's functioning: emotional health, physical health, response to stress. So, really timely measures and assessments that I think could be useful
00:50:46
Speaker
in a variety of different types of settings. I know we've got people in all sorts of fields of study using these assessment tools. Great. Well, Dr. Hook, thank you so much for joining us on the pod today. Really enjoyed it.
Contact Information and Closing
00:51:02
Speaker
Well, I thank you too. I appreciate you letting me talk about these different things and share with your audience more about the NIH toolbox and also neuropsychology. I thank you very much for having me.
00:51:17
Speaker
Thanks. And if anyone is interested in giving feedback on the show, you can reach us at info at cognition podcast.com. You can also reach me at JL Hardy PhD at gmail.com or JL Hardy PhD on Twitter. I will also respond to that.
00:51:38
Speaker
And you can reach me, Rolf Nelson, at Nelson underscore Rolf at Wheaton college.edu. You can't reach me on Twitter, because apparently my account was banned. Not that I pay attention to it anyway. But what'd you do? I don't know. I don't know what I did. I didn't pay any attention to it, and then it was just gone. But that's okay. I don't care anyway.
00:52:01
Speaker
All right, Dr. Hook, thanks so much for being on the show. Well, thank you both. Yeah, Julie, do you want to give any way to reach you, or would you rather not leave that? Absolutely, no, you can. The NIH Toolbox has several social media channels. One is on Twitter; it's just at NIH Toolbox. I'm also on Twitter at JN hooks.
00:52:26
Speaker
You can reach our help desk if you have any questions about the NIH Toolbox: it's help at nihtoolbox.org. Or go to our website, nihtoolbox.org, and all that contact information will be there. And if you need to get hold of me, you can just let the help desk know that you want to talk to me, and they'll forward your request right to me. That's great. All right. Well, thanks, everyone, for listening, and we'll talk to you soon.