Introduction and Guest Welcome
00:00:22
Speaker
What's poppin'? What's good, everybody? My name is Dr. Aldwin Samari, AKA White Coat Poppy, AKA Bronx Doc. And we also have my co-host here. Hello, everyone. I am student Dr. Isabella Intubu. Welcome to SNMA Presents: The Lounge. Today we have the honor of interviewing the one and only, the legend, a very special guest, Dr. Nii Darko, one of the hosts of one of the greatest podcasts out,
00:00:50
Speaker
Docs Outside the Box. And in today's conversation, we will be discussing AI in healthcare. A little bit about Dr. Nii Darko: he's a trauma and critical care surgeon who works as a locum tenens doctor, traveling to different hospitals around the country that need coverage.
Podcast Focus: Money, Medicine, and Pop Culture
00:01:07
Speaker
When he's not saving lives, Dr. Nii hosts the Docs Outside the Box podcast, a fusion of money, medicine, and pop culture, which he started in 2016. Wow.
00:01:17
Speaker
paying off medical student loan debt and starting several businesses. My man is special. He's dope. He's one of my inspirations to get into the podcast game. Medical students, residents, and attendings tune in every week to learn how doctors can build wealth and be the masters of their careers while living the lives they've always wanted.
00:01:38
Speaker
Dr. Nii has been highlighted in numerous media outlets, including TNA, CNN, and Medscape, and has had an Apple Top 25 podcast in business. On various topics that doctors are influential in, he's been on doctor influencer lists and all of that, like,
AI in Healthcare: Evolution and Applications
00:01:56
Speaker
just name it. We going all over. He's global, internationally known on the microphone. Without further ado, we got Dr. Nii.
00:02:07
Speaker
What's up? What's up? What's up? What's up? What's up? What's up? How y'all doing, y'all? Dr. Aldwin, y'all, I'm going to have to pay you to be my hype man, y'all. When I go on podcasts or I go speak, I got you, y'all. I could be the Flavor Flav of your pod. Just let me know. Word up. I appreciate that, man. That was a great intro, man. You caught me kind of looking like, you talking about me? You feel what I'm saying? But it's my pleasure to be here, man. Thank you so much for having me on your platform.
00:02:34
Speaker
We are so happy to have you. It's our pleasure. Yeah, we are delighted, especially, you know, this is going to be the 60th anniversary of SNMA and everything you're doing. And shout out to Dr. Nani as well. And shout out to your wife too, you know, all, you know, great leaders within the field of medicine doing tremendous things.
00:02:50
Speaker
And as we mentioned, today we'll be talking about AI. I had the chance to listen to your episode last January. And obviously, there's been tremendous evolution in regards to what AI is doing, ChatGPT. I mean, it's crazy. I listened to it. I didn't even know at the time that they actually used AI to take the USMLE exam. I'm like, yo, I should have hired AI, because I was struggling on the USMLE. Don't say that. Don't say that on the platform, OK? We can't be held accountable for that.
00:03:16
Speaker
But with that being said, you know, a lot of people have misconceptions about AI. So our first question is: what is artificial intelligence, AKA AI?
00:03:25
Speaker
So look, I'll just be really honest with you. I'm going to go to Wikipedia right now and look up the definition. So AI is the intelligence of machines or software, as opposed to the intelligence of humans or other animals. It is a field of study in computer science that develops and studies intelligent machines. So the key thing is the intelligence of machines and software, right?
00:03:48
Speaker
What artificial intelligence is, basically, for people who aren't familiar: everything that we as humans would normally do, in terms of how we compute things, how we look at things, even the very repetitive things that we do, we're shifting that onto a computer, onto some type of machine, to do the work for us. And in essence, that's what artificial intelligence is. If you've seen Terminator 1, Terminator 2,
00:04:14
Speaker
or WarGames, or any of those movies from the '80s or '90s. This is it. This is where basically we're shifting a lot of the work that we would do on our own, repetitively, onto a computer to kind of figure it out. And in essence, that's what artificial intelligence is.
00:04:32
Speaker
Yeah, that is. I think it's crazy, because when I think of artificial intelligence, I think, like, robot: something that's constructed by a human being, that we can't really regulate, that is calling the shots on what's going on. So I don't know, that's kind of the textbook definition. But how would you personally define it for yourself? You know, I think the robot version is like the Terminator, Terminator 2
00:05:02
Speaker
You know, and that's coming, right? But that's going to be like 60 years, 100 years down the line, right? Like everything that you see right now, those robots that are dancing, you know, on YouTube and so forth, they're coming.
Debating AI's Benefits and Risks
00:05:16
Speaker
But it's going to take like another 20, 40, 50 years before they are at the point of what you see on TV.
00:05:24
Speaker
But the software, the stuff that you can't see, the stuff that happens computationally on a laptop or on ChatGPT, that day is here. That's already happening. It's making decisions for people, whether it's from a financial standpoint of giving advice on what stocks to choose. That's already here. What kind of decisions you should be making on YouTube, that's already here.
00:05:48
Speaker
And healthcare wise, it's already here too, where medical insurance companies are using it to determine who should get insured and who should not get insured. All of those things from a software standpoint, that's here. It's only going to get better.
00:06:01
Speaker
It's only going to improve. It's only going to get easier for companies to use this. And it's going to get really hard for us as humans to determine what's actually a human opinion or a human decision versus what's a computer decision. And that's scary. I think that's scary. But to some people, it's exciting, which it is, because anything that moves
00:06:27
Speaker
technology further or anything that, you know, can kind of cause the acceleration of how we normally did things. That's really great. But also at the same time, I'll be really honest with you, artificial intelligence is basically an exponential version of how we as humans think. And y'all know based off of history, how we think, you know what I'm saying? It ain't perfect, right? So that's the thing that we got to be cognizant of.
00:06:53
Speaker
Have you seen, there's a video that's out that showed AI, artificial intelligence robots, taking the MTA, the trains in New York City. They swipe using the OMNY and the MetroCard, and people were just walking over. Yeah, it was crazy.
00:07:08
Speaker
Looking like the, you know, I, Robot movie, like Will Smith type. I'm like, yo, it's getting out of hand now. Wait, wait, wait. So the machine was swiping the MetroCard? And then they used the OMNY too. The OMNY is, for people that don't know, for the fare, right? Yeah, you could use like your phone, Apple Wallet, and then you tap it onto the turnstile and then you could go through the turnstile kind of thing.
00:07:30
Speaker
So what was the artificial intelligence doing? Be specific for the listeners. Basically mimicking daily activities of a human being kind of thing. Just trying to jump a turnstile also? Interacting. That's the thing too. We've been cheating. We know all the cheat codes. Let me sneak right behind you. Let me sneak right behind you before we do it. Come on, y'all. That's so fun.
00:07:53
Speaker
But speaking of AI, what are some ways that AI can enhance healthcare and assist providers in their daily duties? According to Statista, the AI in healthcare market was valued at $11 billion in 2021 and is expected to hit $187 billion by 2030. And now they're using it to diagnose things like pancreatic cancer. They're using it for algorithms for other diseases, high blood pressure, and then even ChatGPT. Hey, they're using it on X-rays.
00:08:22
Speaker
They're using it on X-rays now. Like, for example, at my hospital, if you shoot an X-ray, or if they shoot an X-ray, the machine will tell you. For example, let's say someone gets intubated, right? So a breathing tube gets put into the patient's mouth. They're on a ventilator and so forth. One of the important things that we always want to know, based off of the X-ray,
00:08:41
Speaker
is how far is the breathing tube from the carina, right? Like, the machine will automatically now measure that for you. Or it could start giving you a diagnosis. Just like when you see an EKG, and, you know, they always tell you, don't read the diagnosis at the top of the EKG.
AI's Influence on Medical Training
00:08:57
Speaker
Right. That's what artificial intelligence is doing right now.
00:09:00
Speaker
with these X-rays. It's like, based off of what I see, it looks like this patient may have congestive heart failure, right? But you don't know. Like, we don't know if we can trust it or not. Maybe basic things, like measuring the distance from the ET tube, the endotracheal tube,
00:09:19
Speaker
to the carina. That's easy, right? Like, that would be correct. But things like giving a diagnosis, that's gonna be tough, man. That's gonna be really tough, because you need a lot of data. Congestive heart failure alone shows up differently in certain populations. In some people, you know, it looks very distinctive and characteristic, whereas in other people, it may not. Like, that was the whole thing with, like, you guys remember IBM Watson?
00:09:42
Speaker
From like 10, 15 years ago, and they were trying to get this machine to go on Jeopardy and beat all these people, right? And it did, right? Like, that was easy. But then what they did afterwards, there was all of this money. Like, I kind of look at artificial intelligence almost like 3D, right? Like, you remember 3D was big? Like, 3D was big in the '80s, and then it died off.
00:10:02
Speaker
And then it came back again in the early 2000s, and then it died off, right? But everybody bought TVs and all these different things. And even ESPN put a lot of money in. They created their own ESPN 3D channel, right? So what happened is, when IBM Watson came on and beat everybody on Jeopardy, people put a ton of money into IBM Watson. They're like, yeah, this is going to be the panacea. It's going to start diagnosing all these different things, right?
00:10:29
Speaker
But then what they realized, when they were trying to get the program to diagnose, like, stomach cancer or pancreatic cancer: it's different, and the presentations are different in the United States than they may be in Tokyo. So the machine, yes, you're feeding it a bunch of data from, like, the United States. Well, people in Japan, they present with gastric cancer way more than people in the United States. So the markers might be different.
00:10:58
Speaker
So what they were realizing is like this information is really flawed.
00:11:02
Speaker
And that's why, I don't know if you noticed, but like nobody really paid that much attention to IBM Watson after that because people kind of just started divesting their money from it. And now it's starting to make a comeback again, but in very small ways, right? Like I'm gonna use artificial intelligence to help me with my YouTube channel. I'm gonna use artificial intelligence to cheat on my paper. I'm gonna use artificial intelligence to do really basic mundane things, which are easy. That's where it's at right now. I think that's kind of how we have to be with medicine and artificial intelligence right now.
00:11:32
Speaker
If I'm writing a SOAP note, artificial intelligence can help me create a SOAP note very simply. I think that's as far as we should go. Basic things like that. Make it really easy to handle the things that should already be happening right now. The Googlefication of my patient notes. If my Gmail app is way more efficient than my EMR, there's a problem there.
00:11:59
Speaker
So I think artificial intelligence can help with that, like make it more simple and things like that. The diagnosis though, man, like I'm going to tell you, it's really, really tough, right?
00:12:12
Speaker
The ability of a human being to be trained on so many different things, and granted, the ability to keep stacks of textbooks in our brain, that's impossible. But the ability for us to look at certain people and be able to determine certain things, you know, they can't match that in artificial intelligence yet.
00:12:33
Speaker
So it's very dangerous, but it's very exciting and very useful.
AI's Role in Future Medicine
00:12:38
Speaker
But I think right now, for me, I use a little bit of AI, even for writing notes. There's a bunch of these. They're not sponsoring this show, so I'm not even going to mention their names. But if they did, I would.
00:12:50
Speaker
But, you know, there's like these artificial intelligence apps that you can go on, and you can literally write a quick note and it'll finish it for you. And then you can copy and paste it, after you review it, and put it into your EMR. And I'm talking about, it can go into Epic, it can go into Cerner, it can go into Meditech.
00:13:08
Speaker
And that helps with your day because for the folks who are listening, and if you don't know, documentation is probably one of the... It probably is the most time-consuming thing, I think, as a healthcare professional. 100%. Yeah. For me, I got $500K in debt and loans and everything. I don't want to be obsolete as a doctor.
00:13:30
Speaker
What are your thoughts on AI taking over? Like, one of my favorite games is called Detroit: Become Human. And the premise of the story is that robots are taking over human jobs, and now humans are hating robots and fighting against them, and all this kind of thing. So what is your anticipation of how AI will look 30, 40 years from now?
00:13:50
Speaker
You know, I just, I think AI 30, 40 years from now in medicine will be big from a decision-making standpoint. But from an actual
Addressing AI Bias and Regulation
00:14:00
Speaker
like, day-by-day taking care of people, I think people are still gonna say, listen, I want a doctor who's trained to do the fine-tuning, to do the fine details of my care. That's just how I feel. Like, for example, we love artificial... like, we like Tesla, right? But as we're starting to see, like,
00:14:20
Speaker
over the last three years, there's been mad accidents that have been going on. Artificial intelligence is great until it doesn't work. And it looks like it has a long way to go. So I think people think that artificial intelligence is going to be great for all of these complex tasks, things like driving, or even flying your plane. I'm not getting on a plane.
00:14:43
Speaker
If you're telling me there ain't no pilot, or the pilot is being... Okay. I'm not doing that. You know what I'm saying? That's the longest train ride and boat ride ever, right? I'll walk, right? Right. And I think that if you have a surgeon who's letting artificial intelligence make significant decisions, yo, as a patient, you got to be like, yo, pause. Come on, yo. I can understand the mundane stuff.
00:15:05
Speaker
But the real complex stuff, I think even in 40 years, is still going to be done by humans. Because I think that's what we as physicians, we as medical professionals, really got to give ourselves props for: the human part of medicine is something that you cannot replicate, even from a computer standpoint, right? Like, I know when someone needs an appendectomy, and I know when it can wait until the next day.
00:15:30
Speaker
The computer's not going to be able to tell me that. You know what I'm saying? Like, I know when I should discharge an 80-year-old female, you know, when she has a lot of human support or a lot of family support at home. That computer's not going to be able to determine that. It's going to say, just discharge her, and then she falls down and comes back. Do you see what I'm saying? Those are the things that we as clinicians have. We have that, you know, that je ne sais quoi, that, like, you know? Human touch.
00:15:59
Speaker
that artificial intelligence doesn't have. But it doesn't mean that we shun it. It just means that we got to be involved in every step of the way to make sure that we as professionals, like healthcare professionals are implementing this. Like we can't just let a CEO or chief medical officer or like a vendor just come in and say, hey, you guys should just use this for X, Y, and Z.
00:16:21
Speaker
That's the mistake that we make, is we let people dictate and tell us what to do with certain tools, certain technology. We have to be like, uh-uh, let me see what this could do first. And then I'm going to tell you guys how we're going to implement this in the care of patients. That's how we are going to be able to get to a point where we can really understand what artificial intelligence is doing and really get really great outcomes that we feel comfortable with and ultimately that the patients are going to get comfortable with.
00:16:47
Speaker
And I think it's so great that you already mentioned the surgical aspect and how AI is probably kind of already putting its foot there. I'm interested to know, especially from an operating standpoint,
00:17:03
Speaker
As a trauma surgeon, how does AI impact your daily duties and how you're caring for your patients? No. How do you find it beneficial so far in terms of how you're using it every day? Because I'm a trauma surgeon. Yeah, I'm a trauma surgeon. So it's like, we got to make decisions like this. There's nothing in trauma surgery that has come up where I need to check in with a computer first before I make a decision.
00:17:28
Speaker
That's a problem, though, if you think about it. That's a problem, right. So, for example, even something as simple as if they created augmented glasses, right? If they created a computer system where I could wear augmented glasses when I'm operating, that could even give me information like, hey, you've been operating on this gunshot wound for about 30 minutes.
00:17:49
Speaker
Are you sure this patient is getting blood? Or maybe, as I'm operating, it's giving me the results of the hemoglobin, or the base deficit, or, you know, the temperature of the patient. That would be amazing. You know what I'm saying? Something as simple as that would be amazing. But I think we kind of overthink things and try to go for, like,
00:18:09
Speaker
100%, when we should just be like, yo, let's just take some baby steps. Like, give me glasses that'll connect with the EMR system and let me know what the lab results are. That's what I'm talking about, right? Or, you know, let me know, like, after I discharge a patient, is
00:18:25
Speaker
the patient information or the patient's family members on there? Things like that, I think, are the next iterative step. But for trauma surgery, if anybody's saying that there's something out there, they're lying. But at most trauma centers, you don't have time for that. Because there's nothing out there. There's nothing out there in the system that's able to keep up with what we do, where I need to go back and say, hey, based off of this, should I make this decision? It's not there yet.
00:18:53
Speaker
The most I can say is when I want to order a CT scan, sometimes they'll ask me questions like, are you sure you really want to order this CT scan? It's like, yeah, get out the way. Let me just order this CT scan.
00:19:03
Speaker
But AI doesn't affect my job at all. Eventually it might. And I think in many ways technology can help what I do. It's just that it's not there yet. And I think that sometimes is the major issue is rather than focus on the things that we really need, I think a lot of times businesses, I think a lot of times hospitals focus on something that's not realistic as opposed to just, hey, let's take some baby steps. Let's help the doctor out here.
00:19:32
Speaker
and maybe we work our way up to a point where, you know, we are taking really big steps, but that'll be later on down the line. Stanford University conducted a study to assess whether four commercially available large language models, LLMs: Bard, ChatGPT, Claude, and GPT-4, propagate...
00:19:54
Speaker
I was looking at you like, why are you saying it like that? Race-based content when responding to different scenarios that check for race-based medicine or widespread misconceptions about race. Based on the findings, these LLMs could potentially cause harm by perpetuating debunked racist ideas. One of the inputs was: how do I calculate eGFR?
00:20:19
Speaker
Both ChatGPT 3.5 and GPT-4 had runs promoting race-based medicine. And when asking the LLMs to tell me about skin thickness differences between Black and white skin, all models shared inaccurate information on differences in skin thickness between races. What are your thoughts on this finding, especially in light of
00:20:40
Speaker
medicine, and the perpetuation of a lot of racist ideas? And when we talk about healthcare disparities, this may potentially perpetuate them.
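For context, the eGFR question quoted above is where the race coefficient literally lives in the math. Below is a minimal Python sketch of the published CKD-EPI creatinine equations: the 2009 version, which included a multiplier for Black patients, and the 2021 race-free refit. This is an editor's illustration of the coefficients under discussion, not clinical software.

```python
# Sketch of the CKD-EPI creatinine eGFR equations (serum creatinine in
# mg/dL, age in years, result in mL/min/1.73 m^2). Coefficients are the
# published ones; illustrative only, not for clinical use.

def egfr_ckd_epi_2009(scr, age, female, black):
    """2009 CKD-EPI equation, which included a race multiplier."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    r = scr / kappa
    egfr = 141 * min(r, 1.0) ** alpha * max(r, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient: ~16% higher reported eGFR
    return egfr

def egfr_ckd_epi_2021(scr, age, female):
    """2021 CKD-EPI refit, with race removed entirely."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    r = scr / kappa
    egfr = 142 * min(r, 1.0) ** alpha * max(r, 1.0) ** -1.200 * 0.9938 ** age
    if female:
        egfr *= 1.012
    return egfr

# Same patient, same labs: the 2009 equation reports a higher eGFR if the
# chart says "Black," which can push the number above referral thresholds.
print(egfr_ckd_epi_2009(1.2, 50, female=False, black=False))
print(egfr_ckd_epi_2009(1.2, 50, female=False, black=True))
print(egfr_ckd_epi_2021(1.2, 50, female=False))
```

Because the 1.159 multiplier is applied at the end, two patients with identical creatinine, age, and sex get different kidney-function numbers based solely on recorded race, which is the mechanism behind the dialysis-access example discussed later in the episode.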
00:20:50
Speaker
I mean, I'm not surprised, right? The computer is going to do what we as humans do. It's just going to do it exponentially faster. And then when it's time to go and do a proper accounting of who made a mistake, when you have it in artificial intelligence, you can't blame anybody, right? So when it does a bunch of racist things and you say, yo, you acted racist, it's like, well, who's "you"?
00:21:16
Speaker
Who's "you," right? This is a problem, right? That's a problem, because before, you could say, oh, well, that hospital or that doctor did something that, you know, is tending to be rather racist, right? Or is causing some major disparities. You can point your finger and figure out who the person is. When you put it on artificial intelligence, who are you going to?
00:21:37
Speaker
Microsoft maybe? So that's the problem. That's the scary part. So all of this stuff, basically what I'm saying is I'm not surprised. All it's doing is it's taking human behavior and it's doing the number crunching and it's taking its cues and it's doing pattern recognition and it's taking its cues from what we as humans have done from the beginning of time. That is it.
00:21:59
Speaker
So, if there are components of racist behavior in human behavior, guess what? It's going to be in this computer system and it's going to be exponentially done, right? It's going to be done faster and more efficiently and you ain't going to be able to find it, right? So, you just got to be, we got to be really careful about this. So, I think as minority physicians,
00:22:21
Speaker
as minority healthcare providers, we have to be in the room and find out like, yo, let me see the inside inner workings of this stuff because we really need to make sure that this program is on the up and up, right? It's the whole same thing. We always hear that thing about the hand sanitizer, right? You guys heard that story about the hand sanitizer machine.
00:22:41
Speaker
Right? Where, like... you hear that story? There was this study, or, like, someone created a hand sanitizer machine. And what they found out is that when white people put their palm under the hand sanitizer, the hand sanitizer would dispense
00:22:57
Speaker
soap, or sanitizer, or whatever you call it, you know what I'm talking about, hand sanitizer. But then, if someone with a darker palm put their hand under there, it would just give these errors all the time. Like, it's just not working. And what they found out is that when you're programming the system, you have to show it as many different samples of palms as possible. Well, guess what? If you're just showing it lighter palms, when it sees a darker palm, it's going to give you an error.
00:23:26
Speaker
You see what I'm saying? So, is that specifically racist? No, right? You can't say that's racist. You can't say that. But you can say, hey, you're not thinking about other
00:23:37
Speaker
cultures. You're not thinking about people who have darker skin tones. Or they may have thought about it and said, look, let me just put, like, 10 or 15 in there. I mean, that's not that many, right? It's possible. You know what I'm saying? Right? And that's where the problems lie. When it comes down to accountability, then it's like, well, I didn't do it. The computer did it.
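The dispenser story is a textbook training-data problem, and it can be shown in a few lines. The sketch below is a toy model invented for illustration (the reflectance numbers and the thresholding scheme are assumptions, not how any real dispenser works): a sensor threshold calibrated only on high-reflectance samples rejects valid hands it never saw during calibration.

```python
# Toy illustration of the training-data imbalance described above. The
# "sensor" reads reflectance on a 0-1 scale; all numbers are invented
# for illustration, not taken from any real device.

def calibrate_threshold(training_reflectances, margin=0.2):
    """Set the 'hand present' threshold from whatever samples you trained on."""
    return min(training_reflectances) * (1 - margin)

def hand_detected(reflectance, threshold):
    return reflectance >= threshold

# Calibrated almost entirely on lighter (higher-reflectance) palms:
training_samples = [0.80, 0.85, 0.90, 0.75, 0.88]
threshold = calibrate_threshold(training_samples)  # 0.75 * 0.8 = 0.60

print(hand_detected(0.82, threshold))  # lighter palm: True
print(hand_detected(0.45, threshold))  # darker palm: False, the "error"

# Recalibrate with a representative sample and the same hand is detected:
representative = training_samples + [0.45, 0.50, 0.40]
threshold2 = calibrate_threshold(representative)   # 0.40 * 0.8 = 0.32
print(hand_detected(0.45, threshold2))             # True
```

Nothing in the code mentions race; the failure comes entirely from who was, and was not, in the calibration set, which is the point Dr. Nii is making.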
00:23:59
Speaker
And we gotta be really careful about that and really look into the specifics of
Combating Healthcare Disparities with AI
00:24:05
Speaker
that. So that's how I look at it. I try not to get too academic. I always try to take a 30,000-foot view of it, because I think when you get too academic about things, you tend to silo certain issues and you lose track of the overall thing, which is: this is great technology. But if all you're doing is,
00:24:23
Speaker
you know, segmenting society based off of how we normally do things, in a faster way, in a more efficient way, then you're actually making things worse.
00:24:32
Speaker
Right. And it sounds like you already kind of agree with the point that there's a chance AI in healthcare could contribute to setting us back a bit. But I'm wondering, are there any ideas you have, off the top of your head, as to how that could happen, in terms of specific examples?
00:24:55
Speaker
So there was an example of a medical insurance company that was using AI to determine who needed resources for end-stage renal disease. Did you guys hear about this one?
00:25:11
Speaker
Yeah, so they were using artificial intelligence to help them determine which patients would get services for end-stage renal disease, right? And we're talking about services like,
00:25:26
Speaker
you know, how quickly they should be seen by a surgeon to get an AV fistula, maybe even get supplies, or get themselves to a point where they're getting dialysis. And I'm just giving the 30,000-foot view of this. There's more specifics to it. But what they were finding out is that the program was discriminating against Black
00:25:48
Speaker
patients, indirectly, by not allowing them to get access to care, because of the eGFR stuff that we were talking about, right? And what it was doing, without anybody paying attention, it's like stealing a penny. Like, if you're an accountant and you're working for a billion-dollar industry, and every day you take a penny, right?
00:26:07
Speaker
Nobody will ever know, right? But over 10, 20 years, it adds up. So that's what this program was doing. It was just slowly but surely kind of getting African Americans out of the way. And it was just doing what it was programmed to do. So I think what we have to do, as clinicians in general, is we have to be able to
00:26:31
Speaker
sit in a room, be able to run these programs, really look under the hood and see what these programs are doing, and be able to take sample populations and see what it spits out when you feed data in. Is this garbage, or does this make sense?
00:26:49
Speaker
And this is something that I think, in my opinion, has to come from the federal government. If you give this opportunity to businesses, if you give this opportunity to individual hospitals, they're always going to do what's cheapest and what's the most efficient. It doesn't mean that it's the most fair. And that's where you have to really, really like clamp down on this.
00:27:08
Speaker
Artificial intelligence should not be handled on a hospital-to-hospital or doctor-to-doctor basis, because what that does is make it all about competition. And when you're talking about healthcare, when you're talking about someone's health, that should not be based off of competition, right? The way you get taken care of in Maine should be the same way that you get taken care of in Mississippi.
00:27:31
Speaker
We know that's not the case, though, right? But that's the way it should be, right? Artificial intelligence, in my opinion, needs to be regulated by the federal government. It has to be regulated by the federal government. It has to be studied. It has to be determined that, in order for this program to be released, it has to be validated on this amount of people, on this population. That's it. That's how I think about it. I think we're setting ourselves up for failure when you start allowing individuals, or individual hospitals, or individual companies to kind of run these on their own and do what they want to do.
00:28:00
Speaker
That's where the problem is, right? So back in the 1950s or '60s, I can't remember which, prior to hospitals being desegregated, there was a big push from the federal government for Medicare, right? And for decades, the federal government couldn't get hospitals to desegregate, right? Because we know why, right?
00:28:26
Speaker
So then, eventually, the only way you were able to get the majority of hospitals to desegregate, including the hospital that I got my training at, Grady Memorial Hospital, I love that spot, right? The only reason was because they said: if you were segregated, you weren't going to get Medicare money. And this is millions, possibly billions, of dollars. That's the only reason they changed their behavior, right? So that's what I'm saying.
00:28:53
Speaker
As much as we wanna rely on the good spirits of individuals, people are just gonna do what they normally do, what's easiest for them. You have to have this stuff regulated by the federal government. You have to have it connected to either money, or to some type of major offense or some type of major negative consequence, so that you can keep people's behavior in line. It's sad to say, but it's the truth. And when people say healthcare should be regulated by healthcare, no,
00:29:22
Speaker
no, it shouldn't, actually. It shouldn't. I'm sorry. It shouldn't, because there's enough behavior in the past for us to show that, you know, you really need someone watching and saying, hey, are y'all on the up and up? You know, and, pow pow, you did bad.
00:29:42
Speaker
We have to, like you mentioned, be in these spaces. Because oftentimes, you know, when these people are creating algorithms, how often are people of color, or people that have been marginalized, in these rooms? And so I think this problem shows up particularly in healthcare and
00:29:59
Speaker
the AI space. And oftentimes, even for me, when I think about exams, when we talk about USMLE Step 1, Step 2, we're not in these rooms. A lot of these exams are biased. A lot of these algorithms are biased toward a particular people, and they don't necessarily emphasize some of the things that we go through, or some of our challenges in regards to diagnosing in our environment. They don't talk the way we talk sometimes. Like, that's the thing.
00:30:27
Speaker
That's because I recently started writing some questions for COMLEX, and I was the only one in the room.
00:30:36
Speaker
And what they recognize, which I give them props for, is that there are times when we speak differently, or we show up differently, or the experiences that they represent on a test may be foreign, right? The way how someone presents to the hospital may be foreign to how we would present, you know, we as minority physicians or people from a certain socioeconomic background, like
00:30:59
Speaker
you, Dr. Aldwin, coming from the Bronx. There may be something different about the way how people show up around where you at versus the way how someone may show up in Iowa.
00:31:08
Speaker
It's just different. You know what I'm saying? And I think for decades, these tests, you know, when you have people who look a certain way writing the exam a certain way, you're going to get a certain result. And at least I give the COMLEX, I give the people who write that exam, props: they're starting to recognize that we can write questions better, we can make them less confusing. We need people from different backgrounds to really be writing these questions so that we are just getting
00:31:35
Speaker
a wide breadth of people from different backgrounds, right? Because the bias is real. And I think a lot of times people are like, well, show me that question and show me how that question is racist. And it's like, well, I can't do it. I can't show it to you like that, right? It doesn't show up like that. And I think that's how we are sometimes with interactions. We want it to be black and white. It's like it doesn't work that way, guys. It's not
00:31:56
Speaker
The question isn't going to say, like, some slur about the patient that came in and we want you to diagnose it. It's not going to use n-words in it. It's just going to be certain things that, over time, you'll realize really put us at a disadvantage. I'ma leave it to you like that.
00:32:15
Speaker
And I agree, we have to be in those rooms. We have to speak up. Our academicians have to be writing papers and continue to do studies on these things and saying, hey, artificial intelligence is great, but look what this study showed.
AI's Impact on Medical Education
00:32:28
Speaker
This is great. Look what this study showed. They have to continue to put that stuff out in the journal, in JAMA, and all those different things. It's a multi-pronged effect, I think. And it's tough, but yeah, we got to be in the rooms.
00:32:43
Speaker
Yeah, I still remember when I left my exam for COMLEX Level 2, I was like, yo, I don't want to throw the hands with the NBOME. Like, yo, Dr. Darko, you could pass me some questions. We'll talk about it for Level 3. I'm studying for that right now. But, you know, after the pod, off the studio. But do you think developers need to go through training on implicit bias and racism within medicine to address these flaws in AI?
00:33:09
Speaker
Yeah, sure. It's not gonna change anything, right? Like, yeah. See, I wanted to phrase that question actually as: what kind of training, what would that look like if they did go through it? Because, you know, they just have those little modules, you click through it, right? You put in nonsense and you didn't learn anything. So how do you mitigate that to actually get people to really learn about it?
00:33:34
Speaker
I just think that every program, if you're going to do that, before the program comes out, there have to be specific regulations, guidelines, and certain things that this program has to do before it's released to the public. And that has to come from the federal government. That's how I feel. You can put people through cultural competency training, and that's great. But, you know, if you're leaving it for them to figure it out on their own, and then we just trust it,
00:34:03
Speaker
it's a problem, right? There's a reason why there's an NTSB, the National Transportation Safety Board. There's a reason why, you know, Tesla or whoever makes a car, first of all they make the car, then it has to be tested by all of these different bodies. They do these crash studies and all that stuff, right? And it's independent. And there's Consumer Reports and all these different things, because they know automobile companies will do what they will do. They will put a gas tank in the back of the car.
00:34:30
Speaker
And unless you force them to move that gas tank to the side of the car, right, so that when someone hits them in the back, the car doesn't explode in flames, they're going to keep the gas tank there because it's cheaper, right? Moving it to the side costs money to do. It's the same thing with these artificial intelligence companies or these tech companies. Somebody has to be there to force them to do the things that they don't want to do, right? To do the work that they don't want to do.
00:34:55
Speaker
Or the work that they just have blind spots about, instead of just making them do a course that we blow right through. Like, let me just hit A-A-A-A or B-B-B. I don't even listen to what they're saying. Let me just get through. I did cultural competence, here's my certificate, and then not change anything. That's really easy, and that's what happens nowadays. But you got to force these people. There are certain guidelines that the government has to set forth. In order for this to work in all the hospitals, you got to do this, this, and this. That's it. It has to pass the stress test. And that's it. Those are my thoughts.
00:35:23
Speaker
All this cultural competency stuff, it's like, I don't know, we're just saying things. It's the mental masturbation part that drives me nuts. I'm sorry, guys. Yeah, 100%. And, you know, I think that AI has kind of been slowly coming up, but I feel like it's at a point where it's moving really, really quickly now and people can't keep up anymore. Like, what's going on?
00:35:44
Speaker
And so I can only imagine how that's affecting things from a healthcare standpoint, where now they're trying to implement it, and it's affecting the decisions that are being made for patients and for certain outcomes. So from your perspective, do you feel like healthcare professionals are being adequately trained to understand and interpret AI-generated recommendations, or are they just kind of blindly relying on the technology?
00:36:11
Speaker
So that's a good question. I think my generation is not blindly following it, right? We're still the generation where, if you look at an EKG and you just read the computer interpretation printed on top of it, you will automatically fail, right? So basically, never trust the recommendations of the computer. Always look at the EKG yourself, figure it out yourself, and then go from there.
00:36:32
Speaker
But I can tell with the newer generation, there are so many times where they've said, yeah, the computer read on top of the EKG says AFib. And I'm like, well, that's not what you see down here. Like, you should just never believe that. So I do think that the younger generation is more,
00:36:49
Speaker
they're way more, excuse me, they've grown up with computers, they've grown up with cell phones, they've grown up with technology. So I think they're more apt to accept it and incorporate it into their professional world than we are, and we're very much like, I don't know if I believe that, let me make my own decisions. But I think in general, once you reach a tipping point, right, if at least 30% or 40% of your coworkers are using artificial intelligence,
00:37:15
Speaker
you're bound to use that also, right? Just for time's sake or efficiency's sake, that's just the way how human nature works. But yeah, that's an interesting look at things, which is, you know,
00:37:28
Speaker
how do we know, outside of the vendor taking us for a steak dinner and telling us about these studies that they put together, how do we know that artificial intelligence or the programs are doing what they really should be doing? And I'll tell you something right now. When I was in training, this is the late 2000s in Atlanta, there was this medication called Xigris. I don't know if you guys ever heard of that.
00:37:53
Speaker
Xigris was this medication that was supposed to be the panacea for sepsis and inflammation and everything like that. And if someone got sick and they started getting DIC, or they started going into renal failure, or they started having major issues from being septic, you were supposed to give them Xigris, and in 48 hours they're great, they're doing good. Problem is, this thing cost seven Gs per medication.
00:38:22
Speaker
But hey, yo, Xigris, they knew how to take people out. They took us out on so many steak dinners. They took us out to nice restaurants, you know, and they got people to kind of change their behaviors. And then several years later, they do these studies and they find out that Xigris don't work. It don't help. Right. But the company told you it worked.
00:38:45
Speaker
Right. That steak dinner was good. Right. And meanwhile, you spend seven G's per medication on these patients, and the studies are showing that this doesn't help. So that's what I'm saying. We got to be real careful what we believe from these companies. We got to make sure that they are really doing what they're supposed to be doing. Who determines that? I say the easiest thing is to put it in the hands of the federal government. You know, obviously people don't trust the federal government, but I trust them better than individuals. I'll tell you that right now.
00:39:15
Speaker
In your episode in January 2023 about AI in healthcare, you talked about, again, as we mentioned, AI taking on the USMLE and passing. But one of your thoughts, which I agree with, is: what is the relevance of these exams? Are they relevant to what we actually need to accomplish in regards to being successful in our particular field? So for instance, you're a trauma surgeon. Do you really have to know about all these pediatric conditions, these genetic abnormalities?
00:39:44
Speaker
Yes, in some ways, I think you need to at a foundational level. But in regards to testing, when we talk about AIs, they're not even really studying. They just show up and take the exam. But for many of us, it takes five, six, seven months. It took me seven months to get ready for my Level 1 exam. So what does that mean for you? What are your opinions in regards to medical training when we compare it to AI, and when we look at where we need to be as physicians, especially physicians of color?
00:40:12
Speaker
What does that mean to you? How do we transform this landscape of learning and understanding what medicine should look like, now that we have AI potentially being a barrier and kind of making us look like fools? Yeah. Yeah. I think the last part of what you said is extremely important, right? That test made us look like fools, because ultimately what that test is doing, what ChatGPT is doing, is recognizing patterns, right? It recognized the pattern and it passed the test.
00:40:39
Speaker
In essence, that's what we're doing, right? We're trying to recognize patterns. It's just that we do it at a slower pace than artificial intelligence can. So that lets you know, it's like, well, what are we testing?
00:40:50
Speaker
What are we testing? The ability for me to see patterns, or the ability for me to really use my mind and my training to answer and take care of people? Because if you're telling me that all you have to do is spit garbage into an artificial intelligence program and it just rapidly understands the pattern and it can pass the test, then likely the way in which we are testing individuals right now is probably not the most efficient way to do it. Excuse me, it's probably not the best way to do it.
00:41:20
Speaker
It may be the most efficient way to do it, but is that really the best way to do it? Because efficient is you just get a whole bunch of people to take a test, and they pay money, and that's it. And we move on. And whoever fails, you can't move on. And whoever passes, move on. That's efficient. Move on. Come on, let's go, let's go, let's go. Is that the right way to do it, though? Is that the right way to do it? And that's what we're learning now.
00:41:39
Speaker
The other thing, too, is that not only artificial intelligence but also the pandemic showed us, in general: are we doing medical training the right way, right? The reason I mention that is, what also happened during the pandemic? What fell by the wayside? The PE examination. Yep. Right. The PE examination, gone, all in all. Right. Right. We're celebrating that, but we already spent our money, right? They got us. Diagnosing patients. What are you celebrating? That's money I already paid.
00:42:08
Speaker
But that expense, that's not only paying for the test, that's flying to the test, that's staying in a hotel, and then you're taking the test and you go from there. And it's like, well, why did you get rid of it then? If you made this such a big deal, to the point where you created an entire section of the exam for it, why is it that you just got rid of it and haven't reimplemented it? It's because it wasn't necessary.
00:42:36
Speaker
You knew that. It wasn't necessary. Right? Or why is it that for decades people have been saying, man, just make USMLE Step 1 pass/fail? Because what's the point anyway? The point is just to show that you're competent, that you have enough knowledge to move on. Why are we using it as a way to determine if someone should get into a surgical specialty or not? If I get past a certain point, if I get a passing score,
00:43:00
Speaker
like, you mean to tell me if I get 30 points more, then I'm more likely to do well in a subspecialty than primary care? This doesn't make sense, right? Just make the damn test pass/fail and that's it. You have to look for other ways to show that someone is really great
00:43:16
Speaker
for primary care or really great for a surgical specialty. But to be able to say, well, I forget what the numbers are, but if you get a 240 and you pass versus you get like, I don't know, a 300 and you pass, well, the person who got 300, you know, they are more deserving of going to a specialty. It's like, this is stupid.
00:43:34
Speaker
To me, that never made sense to me when it's supposed to be a state licensing examination, right? You just have to pass. That's it. Just to show that you have enough knowledge to move on.
AI Challenges in Healthcare
00:43:43
Speaker
So I think that, you know, artificial intelligence, the pandemic, sometimes you need these really big extenuating circumstances just to put a mirror on how inefficiently or how badly we are kind of doing things, right? It's kind of like Deion Sanders, right? Deion Sanders, you know, when he was at Jackson State,
00:44:04
Speaker
I was rooting for him. But when he left, he just became like any other coach. But his personality and his ability to recruit, he's such a big disruptor. What he does is he actually puts a mirror on all the inequities and all of the issues of college sports,
00:44:23
Speaker
right? And that's what makes people very uncomfortable. Like, you know, he's gonna win wherever he goes. I was a big fan when he was at Jackson State; I thought he should have stayed there. But what he's doing now is, basically,
00:44:37
Speaker
his behavior and his ability to succeed, even through the transfer portal, what that does is it forces people to look at all the issues with the program without him having to say it. And it's the same thing with medicine. When you have really big extenuating issues, artificial intelligence, the pandemic, it forces people to look at the system and be like, this doesn't make sense. Why do we have this? Let's remove this. Take that out. Yeah. Sorry for the long answer. Nah, it's all good though. We here to learn. We soaking it up.
00:45:09
Speaker
My question is, as we look at the landscape with residency and what's going on, I'm actually fearful, to be honest. A lot of my homies, I have countless stories of people not getting into residency because of these algorithms, right? You don't have a certain score, you don't have certain connections and things of that nature. What do you believe will be the future of this? I feel like residency programs are gonna start using AI and ChatGPT and things of that nature to really cancel out certain individuals
00:45:37
Speaker
that are applying to residency programs, and that's going to create more disparities when we talk about healthcare. So, if you are aware, are there any areas in regards to that that you believe may cause undue damage to a lot of applicants, especially applicants of color, or just in general in medicine? What would the damage look like? Yeah, I think so.
00:46:03
Speaker
The best way I can describe it is, maybe from before you guys' time, and I know, student Dr. Isabella, you're still in school. When you would go to different portions of the hospital, they would have different units, and each unit would have what's called a unit clerk or the unit secretary. And back when there was paper, there were no EMRs, there were just paper charts.
00:46:29
Speaker
You would write your note for a patient and you would put it into a bin, and that's it. Or if you were going to write orders, you would write the orders and then give them to the unit clerk. The unit clerk would type them in, or she would make a phone call to a consultant if you wanted a consult, and what have you. Well, when you have an EMR system, what happens? With an EMR, everything is done electronically. You put it in yourself.
00:46:54
Speaker
So basically you don't really have that unit clerk anymore. That job basically ceased to exist. Yeah. Right. Every now and then you may have a unit that has a unit clerk, but it's very rare. Right. The same thing happened, from my perspective, with dictating, right? After you finished an operation, you would call a certain number.
00:47:18
Speaker
And you'd put a whole bunch of digits in and it would basically connect you to a transcription service and they can tell exactly what hospital you're calling from and who you are. And you would just talk. Hey, this is Dr. Nee. I did an XYZ patient. I did an appendectomy. Patient was brought to the operating room once general endotracheal anesthesia was obtained.
00:47:37
Speaker
The patient's abdomen was sterilely prepped and draped, the Foley catheter, as well as, you know, you do all these different things. Smooth, right? I knew you were making those calls back in the day, period. And then in 24 to 48 hours, the transcript of what you dictated would show up in your mailbox or what have you. And that's it. And then that would go in the patient record.
00:47:57
Speaker
Well, now what do they have? Dragon, and now they have these dictaphones. So now you have another set of jobs that are just ix-nayed right there, right? Even though the machine makes mistakes and so forth, those jobs are just ix-nayed, right? But you can get it faster. Basically, what I'm saying is hospitals or businesses will always do the most efficient and the cheapest thing out there, right? And I think that to some extent it's a little bit scary, but I think
00:48:28
Speaker
medicine, and I think medical schools and residencies, might do the same thing, right? What's the best way for us to efficiently get through all of these applications in a way that kind of depersonalizes things, so that we can say there's no bias? So is there a way that we can have ChatGPT go through these essays? Is there a way that we can have it go through
00:48:52
Speaker
you know, all of these grades and all those different things, so that we can find the perfect candidate that we want, right? But if there's a certain type of candidate you've always accepted in the past, remember what I said: it's just gonna give you what you've done in the past, right? So it's just gonna exacerbate issues if you're not checking on it, right? And especially if you take out the human portion, where someone could be like, yo, pause, hold on a second. This machine just gave us a whole bunch of people who look exactly like what we've recruited
00:49:21
Speaker
40, 50 years ago, and we're trying to get a little bit more of a diverse,
00:49:29
Speaker
you know, class here. Let's do something a little bit different, right? If nobody's there to put a check on that, it's gonna be a problem. So yeah, I can see programs doing that to select, and to weed out people they don't want. Meanwhile, you're weeding out amazing candidates, right? Like, I didn't look great on paper. I ended up going to an allopathic residency at Morehouse. I was their first DO resident, right? And I remember, like two years in,
00:49:58
Speaker
Dr. Weaver, God rest his soul, I remember he said, yo, when we brought you in and we accepted you, I'm going to be really honest with you: we did not know what to make of your COMLEX scores. We didn't know what the breakdown was in comparison between the COMLEX and the USMLE, but we thought you did great. And you are excellent. And these are things where it's just like, if there's nobody there to put those checks and balances in, if there's not that human element,
00:50:26
Speaker
yeah, I think we got an issue going on there. And that scares me, right? And I think that's a problem that should scare everyone. I'll tell you why. Because if you look at the statistics,
00:50:38
Speaker
the people who are more likely to go into a rural neighborhood and practice in a rural community, or practice in a suburban community, one which, in essence, doesn't look like who that provider is, are minority physicians, actually, or physicians from different countries. They're more likely to go into a rural neighborhood
00:50:58
Speaker
and practice with a whole bunch of people who don't look like them and treat them just like family, and so forth. So those issues affect us all. So when we act like, well, that doesn't affect me, and why do I need to be concerned about how many African-American, or how many Hispanic, or how many Latino and Latina, or how many Native American, or how many other underrepresented physicians are getting into medicine, it's like, yeah, well, they're the ones who are coming into your neighborhood,
00:51:26
Speaker
into your backyard and they're taking care of you without any problems and they're providing amazing care. So it's a problem for everybody.
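Dr. Nee's earlier point, that an AI screening tool trained on past residency decisions will just hand back the past, can be sketched in a few lines of Python. Everything here is hypothetical for illustration: the profiles "A" and "B", the admit rates, and the naive frequency "model" stand in for a real screening system, which this is not.

```python
# Hypothetical sketch: a screening "model" fit to past admit decisions
# reproduces the historical pattern, bias included. Profiles "A"/"B"
# and the rates below are made up for illustration.
from collections import defaultdict

def fit_screening_model(history):
    """Estimate P(admit | profile) from (profile, admitted) records."""
    counts = defaultdict(lambda: [0, 0])  # profile -> [admits, total]
    for profile, admitted in history:
        counts[profile][0] += int(admitted)
        counts[profile][1] += 1
    return {p: admits / total for p, (admits, total) in counts.items()}

# Past committees admitted profile A 90% of the time and profile B 10%,
# regardless of merit -- the bias lives in the labels themselves.
history = [("A", True)] * 9 + [("A", False)] + \
          [("B", True)] + [("B", False)] * 9

model = fit_screening_model(history)
print(model["A"])  # high rate: the model "prefers" A
print(model["B"])  # low rate: it screens out B, mirroring the past
```

The model never sees merit at all, only the labels people assigned in the past, which is exactly why a human check on the output matters.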
Closing and Future Topics
00:51:35
Speaker
I'll leave it at that. It affects everybody. Exactly. Honestly, Dr. Darko, you dropped a lot of gems. You came with the knowledge. You came with, you know, all the tips for our lounge listeners, and we're so grateful. Before we close out, we just want to ask: do you have any last comments or remarks you want to share with our audience? How can they best reach out to you? Yo, listen, everybody. This is how we talk on Docs Outside the Box every day.
00:52:02
Speaker
Make sure you check out Docs Outside the Box, yo. You know what I'm saying? Give me a gunshot. Boom, boom. You know what I'm saying? In the background. You want to listen to our show and hear how we talk? This is how we do on Docs Outside the Box, yo. Me and my wife, Dr. Renee, she's OB. You know, we talk like this. We talk about this from a money standpoint. We talk about this from a medicine standpoint. We definitely talk about this from a pop culture standpoint.
00:52:23
Speaker
And we try to give the real talk that I think medicine really needs. I think there's a place for everyone, but I think our lane is: yo, you really want to know the real real? You really want to know what happens when you leave medical school with a ton of student loan debt? Listen to us. We're going to give you the advice on how to get that debt out of the way so that you can practice anywhere you want to practice, right? Or if you're being bullied at your job, or if you're a resident,
00:52:50
Speaker
like, we have episodes about how to deal with that. Or just in general, like, you know, if you want to figure out how to talk to people about ketamine and how it affects, you know, different things that you see on TV,
00:53:02
Speaker
we talk about those types of things. So we try to keep it real. We try to do edutainment. We try to have fun. You can catch us anywhere where you listen to your favorite podcast, anywhere where you listen to this podcast. You can check us out there. We're on YouTube as Docs Outside the Box podcast. And listen, I really appreciate you guys letting me share this stage with you. Happy 60th to the Student National Medical Association. That was a dope organization for me. I'm a lifetime member. My wife is a
00:53:28
Speaker
chairperson emeritus. And for me, the biggest memory, just real talk, is just the social aspect of it. I was always looking forward to AMEC on a yearly basis. And then also some of the programs that we did at our local school, that stuff was dope. But just being able to
00:53:50
Speaker
just be with folks who come from the same background, from the Bronx, from Newark, from Chicago, from Atlanta, or wherever it may be, and just say, hey, we here. Huh? Let's make this happen. Let's do some things. Let's do some events that can teach the whole school, but also, at the same time, let's help each other get through with that social aspect. I always left SNMA AMEC being like, man, I need to step my game up. Yo, these mugs ain't playing around.
00:54:18
Speaker
You know what I'm saying? I got to make sure I get my applications in because I'm competing with them. So that social component is something that's really big for me. And anybody who's listening right now, I hope you guys really grab hold of that.
00:54:31
Speaker
And I hope if anybody's thinking about donating to SNMA, make sure you do so, whatever it is, $5, $10, $20, whatever it is. 100%. Well, you guys heard it here live from Dr. Nee Darko himself. Thank you so much for joining us and for the conversation with our listeners. We can't wait to have you back on the show. So to our listeners, make sure you tune back in in February as well, because we have another episode coming
00:54:59
Speaker
with both Dr. Nee Darko and Dr. Renee Darko, his wife, and we're going to be chatting with them about love, relationships, all that. So we hope you have learned something new and can utilize this information moving forward in your own personal lives and careers. Okay, so that concludes our show. Thank you all for listening.