Introduction and Host Introduction
00:00:06
Speaker
Welcome to Critical Matters, a sound podcast covering a broad range of topics related to the practice of intensive care medicine.
00:00:14
Speaker
Sound provides comprehensive critical care programs to hospitals across the country.
00:00:19
Speaker
To learn more about our programs and career opportunities, visit www.soundphysicians.com.
00:00:26
Speaker
And now your host, Dr. Sergio Zanotti.
Episode Topic: AI in Medicine
00:00:33
Speaker
In this episode of Critical Matters, we explore one of medicine's fastest evolving frontiers, artificial intelligence, or AI.
00:00:41
Speaker
From predictive analytics to decision support tools, AI is beginning to influence how we deliver critical care.
00:00:47
Speaker
But what does that actually mean for frontline clinicians?
00:00:51
Speaker
I'm a big believer that the best uses of technology are not defined by those who create the technology and have the most expertise in the technology, but those who use it.
00:01:00
Speaker
So today we're going to talk about how we can leverage AI at the bedside for our practice.
Guest Introduction: Dr. Sherrod Patel
00:01:06
Speaker
Our guest is Dr. Sherrod Patel, a critical care physician with additional board certification in nephrology and echocardiography.
00:01:13
Speaker
He is a critical care intensivist at Cooper University Healthcare in Camden.
00:01:17
Speaker
He's also the assistant program director for the Internal Medicine Residency Program and an assistant professor of medicine at Cooper Medical School of Rowan University.
00:01:25
Speaker
Dr. Patel is deeply interested in applying artificial intelligence and other technologies at the bedside.
00:01:31
Speaker
Sherrod, welcome to Critical Matters.
00:01:33
Speaker
Thank you, Sergio.
00:01:34
Speaker
Thank you for having me.
00:01:35
Speaker
And I'm excited to talk about this topic.
00:01:37
Speaker
I could probably talk about this topic for hours, but I'm excited to spend this hour with you.
Importance of AI for Intensivists
00:01:42
Speaker
So why don't we start, as a way of introduction, with why you think intensivists should care about AI in particular?
00:01:51
Speaker
Yeah, I think really to answer that question,
00:01:55
Speaker
um, we have to kind of go back and look at what are some of the pain points in the ICU and what makes the ICU unique. So the ICU is unique in the sense that there's so much data: there's the arterial line, there's the ventilator waveforms, there's the labs, the vitals. There's so much data that it produces on a daily basis. This is ripe
00:02:18
Speaker
to provide for an AI to learn from, which we'll talk about what this means about the learning.
00:02:24
Speaker
So there's so much data.
00:02:25
Speaker
So that's one thing.
00:02:26
Speaker
Two, I think intensivists should care because where in the hospital is there...
AI's Role in Managing Cognitive Load
00:02:34
Speaker
more cognitive load.
00:02:36
Speaker
And I guess this could be argued, but as far as the cognitive load of the ICU, the noises, the sounds, the interruptions, the number of decisions you're making under duress, it's such a cognitively loaded place that I think AI is a great tool to not replace us, but to actually provide
00:02:58
Speaker
support, to fill in the gaps where the human mind can't keep up.
00:03:04
Speaker
So one thing about the human mind is that our ability to think about something, our working memory can handle about four to seven items at once.
00:03:13
Speaker
But in the ICU, we're trying to think about a lot more things at once.
00:03:17
Speaker
So our ability to do this is limited.
00:03:21
Speaker
There's a concept called ego depletion.
00:03:23
Speaker
which as you make decisions, your ability to make more complex decisions down the line gets diminished.
00:03:31
Speaker
And so AI could potentially help with this, where it could take away and automate some of the more simple decisions, and then allow us the cognitive reserve of our prefrontal cortex to make the more complex decisions down the line.
Understanding AI Technologies
00:03:46
Speaker
So when it comes to applications of AI, I think the ICU is probably one of the top five places in medicine where it should be applied.
00:03:56
Speaker
And when we've talked offline, I mean, you always refer to yourself as an AI practitioner.
00:04:02
Speaker
Could you share with us your path to making AI useful in your critical care practice?
00:04:08
Speaker
And I was thinking about this, Sergio.
00:04:10
Speaker
I call myself a practitioner because this is the first time in my relatively young career that
00:04:18
Speaker
that I'm actually speaking about and actually applying in research things that aren't directly related to my credentials.
00:04:27
Speaker
So, for example, my previous research talks would be about things within nephrology or critical care and ventilators and echocardiography.
00:04:37
Speaker
But those stemmed from my training and my credentials.
00:04:42
Speaker
This I've kind of learned more as almost like a tradesman.
00:04:46
Speaker
I've picked up these tools.
00:04:48
Speaker
I've tinkered with them.
00:04:49
Speaker
I've practiced with them.
00:04:51
Speaker
And so over time, I got more, I became more proficient at using these tools.
00:04:57
Speaker
And I started looking under the hood.
00:04:58
Speaker
and looking to see how these tools worked.
00:05:01
Speaker
And so I call myself a practitioner because I don't have a degree that I've taken.
00:05:09
Speaker
I don't have a degree in machine learning.
00:05:11
Speaker
I don't have a degree in AI, but I'm a self-learner.
00:05:15
Speaker
I've taken many, many of these mini courses,
00:05:20
Speaker
hours and hours on this topic, and I've practiced and coded and made mistakes.
00:05:25
Speaker
And along that line, it's given me this degree of proficiency in it where I can actually think about this in a logical way and think about ways to solve problems.
AI Algorithms and Models
00:05:37
Speaker
But before we dive into those applications and some of the things that are going on, maybe we should start with setting the floor with a little bit of AI 101 and really make sure that all our listeners are on the same page of what we're really talking about.
00:05:52
Speaker
So from a, I guess, high level perspective, what is AI?
00:05:57
Speaker
What is the difference between AI, machine learning, deep learning?
00:06:00
Speaker
Could you talk a little bit about that?
00:06:03
Speaker
So AI meaning artificial intelligence.
00:06:06
Speaker
So you kind of think about it as creating a system that can perform tasks that used to require human intelligence.
00:06:15
Speaker
For example, writing
00:06:17
Speaker
a poem or writing an epic like the Iliad, AI could potentially do things like this today and before it required human intelligence.
00:06:26
Speaker
So the artificial aspect of it is doing the things that human intelligence used to do that now AI systems could potentially do.
00:06:34
Speaker
When you think about machine learning,
00:06:37
Speaker
So machine learning is under the umbrella of artificial intelligence.
00:06:41
Speaker
So these terms often get thrown back and forth.
00:06:44
Speaker
And if you wanted to visualize a Venn diagram, that's probably kind of the best way to kind of talk about these things.
00:06:50
Speaker
But machine learning is a subset of artificial intelligence.
00:06:54
Speaker
And so let me kind of give you a simple example where everyone in their EMRs gets these sepsis alerts, right?
00:07:02
Speaker
These simplistic sepsis alerts based on the SIRS criteria.
00:07:07
Speaker
These are explicit rules that are programmed to say when the heart rate is this, when the respiratory rate is this, and these two things exist, send an alert.
00:07:18
Speaker
This is not a learning system.
00:07:20
Speaker
It hasn't learned by patterns.
00:07:22
Speaker
But machine learning, it actually is a method that you provide a large amount of data, labeled data, to say, all right, algorithm, machine learning model,
00:07:34
Speaker
This is what sepsis is, and this is what sepsis is not.
00:07:37
Speaker
So you give them thousands, hundreds of thousands, and millions of examples of where sepsis criteria was met, where sepsis criteria wasn't met, or AKI criteria was met, and where AKI criteria wasn't met.
00:07:50
Speaker
And it's trained on this data.
00:07:53
Speaker
And based on that kind of pattern recognition and learning, it can identify sepsis.
00:07:58
Speaker
It can identify AKI.
00:08:00
Speaker
And it gets better as more data, clean data is provided.
00:08:04
Speaker
So there's a big difference between just a simple kind of algorithm that identifies SIRS
00:08:10
Speaker
versus machine learning, which actually learns from data and doesn't require explicitly programmed rules.
00:08:18
Speaker
For example, just identifying like SIRS in a patient.
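The contrast described here, explicit hand-written rules versus a model that learns a decision boundary from labeled examples, can be sketched in code. Everything below is illustrative (toy thresholds, toy training rows), not a clinical tool:

```python
import numpy as np

# Rule-based alert: explicit, hand-written thresholds. Fires when at least
# two SIRS criteria are met. No learning involved.
def sirs_alert(heart_rate, resp_rate, temp_c, wbc):
    criteria = [
        heart_rate > 90,
        resp_rate > 20,
        temp_c > 38.0 or temp_c < 36.0,
        wbc > 12.0 or wbc < 4.0,
    ]
    return sum(criteria) >= 2

# A (very small) learned model: logistic regression fit by gradient descent
# on toy labeled rows [heart_rate, resp_rate, temp_c, wbc] -> sepsis yes/no.
X = np.array([[110, 24, 38.5, 15.0],
              [ 72, 14, 36.8,  7.0],
              [ 95, 22, 39.1, 13.2],
              [ 80, 16, 37.0,  6.5]], dtype=float)
y = np.array([1.0, 0.0, 1.0, 0.0])  # labels: sepsis / not sepsis

X_norm = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize features
w, b = np.zeros(4), 0.0
for _ in range(2000):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X_norm @ w + b)))  # predicted probabilities
    grad = p - y                                 # cross-entropy gradient
    w -= 0.1 * X_norm.T @ grad / len(y)
    b -= 0.1 * grad.mean()

print(sirs_alert(110, 24, 38.5, 15.0))           # True: the rules fired
probs = 1.0 / (1.0 + np.exp(-(X_norm @ w + b)))
print(probs.round(2))                            # learned probabilities track labels
```

The point of the contrast: the first function will never get better, while the second improves as more (clean) labeled data is provided.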
Deep Learning and Neural Networks Explained
00:08:23
Speaker
And then deep learning.
00:08:24
Speaker
So the deep learning builds upon machine learning.
00:08:29
Speaker
Deep learning is, and you'd have to talk about neural networks, and I don't want to get too much into the weeds of this, but to understand deep learning,
00:08:39
Speaker
It's essentially a type of neural network, and it's basically created based on the human mind.
00:08:47
Speaker
The human mind has layers of neurons.
00:08:50
Speaker
The same with a neural network.
00:08:52
Speaker
It has an input layer, it has hidden layers, and it has an output layer.
00:08:57
Speaker
Data goes into the input layer, and there are lines.
00:09:02
Speaker
You can draw these arrows back and forth where what it's trying to do is learn these nonlinear patterns in the data.
00:09:08
Speaker
How do you identify sepsis?
00:09:10
Speaker
How do you identify AKI?
00:09:12
Speaker
And it has the label data of what sepsis is, what sepsis isn't.
00:09:17
Speaker
And it finds the nonlinear patterns, and it goes back and forth until that error rate is reduced as low as possible.
00:09:24
Speaker
And now it's trained to identify sepsis.
00:09:27
Speaker
So that's what deep learning in the neural network is.
00:09:30
Speaker
And these are powerful, powerful methods
00:09:32
Speaker
to identify these non-linear trends that the human mind actually will have a difficult time with.
00:09:38
Speaker
So those are kind of the basics of what AI, machine learning, deep learning, and a neural network are.
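The input layer / hidden layers / output layer structure described above can be sketched in a few lines. The weights here are random stand-ins; a real network learns them by pushing the error rate down on labeled data:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 4 hypothetical input features (e.g. a few vitals/labs), 8 hidden units,
# and 1 output unit producing a probability-like score.
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)        # hidden layer: nonlinear combinations
    return sigmoid(hidden @ W2 + b2)  # output layer: score between 0 and 1

x = np.array([0.3, -1.2, 0.8, 0.1])  # standardized toy inputs
print(float(forward(x)[0]))          # a number strictly between 0 and 1
```

Training is the "back and forth" described above: compare the output to the label, adjust the weights to reduce the error, and repeat.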
00:09:47
Speaker
The other two terms that I think are worth maybe throwing in there and you can maybe give us a simple explanation is algorithms, which you already mentioned, right?
00:09:55
Speaker
Which I still think is important part of AI and large language models or LLMs.
00:10:04
Speaker
So an algorithm is basically a recipe, right?
00:10:08
Speaker
So you look at a cookbook,
00:10:10
Speaker
It tells you how to make something: you're basically following a predefined method to solve a specific problem.
00:10:18
Speaker
So that could be an AI doing that or a human doing that.
00:10:23
Speaker
So that's the basics of an algorithm.
00:10:26
Speaker
Now, when we talk about kind of the evolution of AI, and I mentioned kind of like the neural network already, the big thing that when people talk about AI today, they're probably talking about large language models.
00:10:40
Speaker
They're talking about ChatGPT, they're talking about Claude, they're talking about Gemini.
00:10:45
Speaker
So what led to this point where there's this big, huge AI explosion?
00:10:54
Speaker
In 2017, a paper came out, and it was by Google, and it was titled, Attention is All You Need.
00:11:01
Speaker
And it introduced this new type of neural network, and it was called a transformer model.
00:11:09
Speaker
And again, we could easily get into the weeds on this, but to kind of just keep it high level, what this did that improved upon the previous neural networks, like recurrent neural networks,
00:11:24
Speaker
LSTMs, these are just examples of neural networks that existed before, is that the previous neural networks worked with sequential data.
00:11:32
Speaker
So when you trained it, you would have to train it sequentially.
00:11:36
Speaker
And it had a difficult time remembering what came before it.
00:11:39
Speaker
So for example, if you're trying to predict text, it had a difficult time predicting the next word because it didn't have the context of all the previous words before it.
00:11:52
Speaker
But with the transformers, now it has contextual understanding.
00:11:55
Speaker
It's something called attention.
00:11:57
Speaker
And this attention made it better at identifying that, let's say, a patient who is a 74-year-old patient with COPD had pneumonia six weeks before.
00:12:10
Speaker
It's coming back again with dyspnea, completed antibiotic course.
00:12:14
Speaker
And so this kind of context...
00:12:17
Speaker
a transformer model would be able to keep in context the temporality.
00:12:21
Speaker
When did the patient come in before?
00:12:23
Speaker
What diagnosis does it have?
00:12:25
Speaker
So all these things are kind of the attention is given to this, and it's better able to predict the next word than the previous neural networks were.
00:12:35
Speaker
And that's what really made these ChatGPTs and things like that more powerful.
00:12:40
Speaker
The second thing that really changed beyond the attention and the contextual understanding
00:12:46
Speaker
was the ability to train it.
00:12:48
Speaker
And so you could train these models using something called parallelization, which now it can use
00:12:55
Speaker
a bunch of GPUs to train the data and break the knowledge or the information into small chunks, and they could all run at one time.
00:13:05
Speaker
So you were able to train the models on a lot more data than you could before.
00:13:11
Speaker
So that's what kind of the two big things that led to this explosion of what we think of AI today.
00:13:17
Speaker
So most people, when they speak about AI today, they're probably talking about large language models.
00:13:22
Speaker
They're probably talking about ChatGPT, Gemini, and things like that.
Common AI Misconceptions
00:13:26
Speaker
And the other thing I wanted to ask you, Sherrod, is if you could share with us some things that AI is not and maybe share some common myths or misconceptions about AI commonly held by clinicians.
00:13:40
Speaker
Yeah, I think it's easy to anthropomorphize ChatGPT or Gemini, and it almost feels like you're speaking to a human.
00:13:49
Speaker
Really, what ChatGPT and these LLMs are, they're essentially really complicated calculators.
00:13:56
Speaker
They're prediction machines, and they're thinking in numbers.
00:13:59
Speaker
They're not actually thinking in words.
00:14:02
Speaker
They're thinking in numbers.
00:14:06
Speaker
They don't actually understand. When we think about morality, emotions, and words like pneumonia and MI, all these words mean nothing to the large language model.
00:14:20
Speaker
These are all numbers and it's trying to predict basically the next word.
00:14:24
Speaker
So it can do things like create something out of thin air.
00:14:29
Speaker
And this is called, one concept is called hallucinations.
00:14:32
Speaker
And so a misconception could be: wow, these outputs are so beautiful.
00:14:37
Speaker
I can trust this reference.
00:14:42
Speaker
And so trust is something that is going to need to be earned in this.
00:14:47
Speaker
And it can hallucinate and create references out of thin air.
00:14:51
Speaker
It's getting better.
00:14:53
Speaker
But this still exists.
00:14:55
Speaker
So really, one of the misconceptions about AI is that you can trust all the outputs.
00:15:01
Speaker
We are not there yet.
00:15:02
Speaker
We might get there one day through careful implementation, but you'd have to be very, very careful and really be hypervigilant of things such as hallucinations when you're doing research and even eventually getting to the point where you're using these tools for clinical decision making.
00:15:19
Speaker
The second point I would make about this is that AI is not going to replace physicians.
00:15:24
Speaker
I think it's kind of humans go through this kind of black or white thinking, right?
00:15:28
Speaker
So it's that that's one of the cognitive biases that exists with humans.
00:15:32
Speaker
is that we think in black and white terms.
00:15:35
Speaker
And it's not going to be AI versus doctors.
00:15:38
Speaker
The best implementation is going to be an AI that augments what humans do in clinical practice.
00:15:47
Speaker
And through offloading some of the cognitive
00:15:52
Speaker
load of the clinician so we can make our complex decisions and be better with the other aspects of being a doctor, like the humanism aspects and things like that.
00:16:03
Speaker
And so that's a second thing that I think is a misconception that I think we really need to rethink.
00:16:13
Speaker
And then another misconception might be is that the AI needs to be
00:16:18
Speaker
perfect or the data that we're feeding it needs to be perfect.
00:16:23
Speaker
I think this is where kind of we're going to be running into some problems.
00:16:27
Speaker
Like how good does the AI need to be for us to actually use it?
00:16:31
Speaker
So I'll use the example of an autonomous car, right?
00:16:34
Speaker
So an autonomous car, if we all drove autonomous cars, even today, the accident rate would probably be much, much lower.
00:16:41
Speaker
It'd be one in a million.
00:16:42
Speaker
but that one in a million would probably be scrutinized very, very heavily.
00:16:46
Speaker
Whereas humans, fallible as we are, may have an accident rate of one in a thousand, right?
00:16:52
Speaker
The AIs could be much, much better than us and there'd be less total harm to humans in general, but our ability to let go and just accept that one in a million is more difficult than the one in a thousand for the human.
00:17:07
Speaker
So in the medical aspect of it,
00:17:10
Speaker
So one kind of, I think, like, I guess a bias that we may have against AI is that it can make mistakes.
00:17:17
Speaker
Well, I agree AI can make mistakes, but humans make lots of mistakes, right?
00:17:22
Speaker
Medical errors are one of the biggest, biggest reasons for morbidity and mortality in medicine.
00:17:29
Speaker
And so if the AI is properly applied, there could be a small error rate.
00:17:34
Speaker
But I think overall, it would improve the error rate of the overall medical system.
00:17:40
Speaker
So I think those are some of the things that come in kind of myths and misconceptions about AI that I think we should really rethink.
AI Applications in Critical Care
00:17:48
Speaker
Let's move on to applications of AI and critical care today.
00:17:52
Speaker
And I would like to do this in kind of like two steps.
00:17:56
Speaker
First, I would like to just get your overall assessment of some applications where we are in general.
00:18:04
Speaker
And then I would like to go more specifically in how do you use AI in your day-to-day, whether it be clinical or non-clinical, and maybe dive a bit deeper into that.
00:18:14
Speaker
So in terms of the applications that,
00:18:18
Speaker
I kind of hear all the time.
00:18:20
Speaker
I hear about predictive analytics.
00:18:22
Speaker
I hear about natural language processing for clinical documentation.
00:18:26
Speaker
I hear about decision support tools.
00:18:28
Speaker
I hear about AI-powered imaging analysis, right?
00:18:31
Speaker
I mean, at the end of the day, x-rays and MRIs are just pixels with basically, like you said, number information, right?
00:18:40
Speaker
I hear about workflow optimization.
00:18:42
Speaker
And I'm also very interested in how you apply it to teaching.
00:18:45
Speaker
So maybe we can go just like I said, Sherrod, at a high level.
00:18:49
Speaker
What do you know about what's going on in this world?
00:18:51
Speaker
And we can start with predictive analytics.
00:18:55
Speaker
Yeah, I think the predictive analytics, and this was before the transformers.
00:19:01
Speaker
Sergio, I've been in this field for like six years, but six years in this world is, it just moves so fast.
00:19:09
Speaker
that when I think about what existed four or five years ago, it seems so rudimentary and simplistic compared to what is coming out today.
AI in Early Detection of Conditions
00:19:17
Speaker
So the things that exist and have existed for a while but are getting better, one is sepsis detection, right?
00:19:24
Speaker
I think the big problems and pain points in critical care especially is that we have these conditions that are time sensitive.
00:19:33
Speaker
Strokes, sepsis, AKI,
00:19:37
Speaker
of which if we get to it early, we can change the trajectory of that patient.
00:19:43
Speaker
And so what AI can potentially do very well is to find and identify these things earlier than the human mind might be able to.
00:19:51
Speaker
And let me give you an example of how it might do this.
00:19:57
Speaker
It's even a simple machine learning model where it's trained to identify sepsis.
00:20:05
Speaker
Now, this model could have something called feature engineering in it.
00:20:10
Speaker
So again, not to get into the weeds of where we might say, all right, the lactate's elevated.
00:20:16
Speaker
I'm worried this patient has sepsis and the patient may not do well.
00:20:20
Speaker
Well, the machine learning model could have something called feature engineering input programmed into it, where it not only just looks at the lactate, it looks at the fact that the patient's albumin on admission was 1.7.
00:20:33
Speaker
And a little known fact is that if you look at most sepsis predictive, like mortality prediction tools, albumin is one of the biggest predictors of death.
00:20:45
Speaker
And I think of it as like a biomarker of your fragility.
00:20:48
Speaker
And it might say, all right, well, the combination of the albumin and the lactate being elevated and the creatinine went up by 0.1, which we might not even blink at.
00:20:58
Speaker
The probability of sepsis has gone up from 35% to 75%.
00:21:03
Speaker
It notes the low-grade fever, 100.6, which we might not even bat an eyelash at.
00:21:08
Speaker
And then it changes the probability to 78%.
00:21:12
Speaker
And so these sepsis detection tools find these nonlinear patterns, and it can potentially look at all these features, the lactates, the vitals, and everything, and look at them through different mathematical representations.
00:21:27
Speaker
in a way that allows it to catch these things much, much earlier, so we can actually modify the trajectory.
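The feature-engineering idea described here, deriving inputs a model can learn from beyond the raw lab values, can be illustrated on toy data. All values and feature names below are hypothetical:

```python
import pandas as pd

# Toy lab values over time for one hypothetical patient.
labs = pd.DataFrame({
    "time_hr":    [0, 6, 12, 18],
    "creatinine": [0.9, 1.0, 1.0, 1.1],
    "lactate":    [1.1, 1.4, 2.0, 2.6],
    "albumin":    [1.7, 1.7, 1.7, 1.7],
})

features = pd.DataFrame({
    # Trend features: small deltas a clinician "might not even blink at"
    "creatinine_delta": labs["creatinine"].diff(),
    "lactate_slope":    labs["lactate"].diff() / labs["time_hr"].diff(),
    # Interaction feature: low albumin amplifies concern about rising lactate
    "lactate_over_albumin": labs["lactate"] / labs["albumin"],
})
print(features.round(2))
```

A model trained on engineered features like these can react to combinations (a 0.1 creatinine bump plus a rising lactate plus an albumin of 1.7) rather than any single threshold.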
00:21:35
Speaker
And I think sepsis and kidney injury are low-hanging fruit for this. There are models that exist for AKI that can identify AKI 24 to 48 hours before the creatinine rises.
00:21:47
Speaker
Now, these aren't being applied directly
00:21:50
Speaker
at full scale yet, but if you look in the literature and you look at papers, and I've been a part of one paper where we used the neural network and we were able to identify AKI before the creatinine rose.
00:22:01
Speaker
And this is something that needs to be solved because creatinine, again, it doesn't rise until 24 or 36 hours after the insult.
00:22:10
Speaker
And at that point, you may be too late to actually correct the process that led to the AKI.
00:22:15
Speaker
And we both know when a patient has AKI with any other condition,
00:22:19
Speaker
your trajectory in that ICU has changed.
00:22:21
Speaker
Your probability of death has gone up significantly.
AI in Clinical Documentation
00:22:24
Speaker
So sepsis, AKI detection tools are, I think, big, big.
00:22:30
Speaker
There's a lot of investment going into that because you can potentially change the trajectory of these.
00:22:36
Speaker
And I think that my understanding is that we might slowly not even be aware that a lot of our EMRs are incorporating better models for prediction, right?
00:22:46
Speaker
I mean, you talk about sepsis, the famous sepsis alerts, right?
00:22:50
Speaker
But I think that even that is getting better and better without us even being aware of it.
00:22:54
Speaker
So I think it's a great example of how AI is permeating into our practice, whether we are aware and like it or not.
00:23:03
Speaker
I think one of the biggest problems with the sepsis alerts was the false positives, where you lose trust in these alerts.
00:23:11
Speaker
The trust is lost.
00:23:12
Speaker
So as these models get more advanced, the false positives will most likely be less.
00:23:18
Speaker
And so when you actually see the alerts and your mind puts it together, it's like, wow, I got this alert,
00:23:23
Speaker
and it actually did predict the patient's deterioration, the trust will increase and we're more likely to be reacting to these alerts in the future.
00:23:31
Speaker
So I'm excited about the AI applications for these early detection tools.
00:23:37
Speaker
Then the other kind of big thing is that you mentioned is the natural language processing for clinical documentation.
00:23:43
Speaker
I think this might be the lowest hanging fruit.
00:23:48
Speaker
So summarization might be the lowest hanging fruit.
00:23:50
Speaker
I think one of the biggest pain points for me, so let's say I have 16, 17 patients in the ICU and I'm rounding.
00:23:57
Speaker
When I have a new patient and they're complex and they've had multiple admissions and they have resistance to multiple antibiotics and they have been on steroids in the past, but they're no longer.
00:24:13
Speaker
Really getting a good
00:24:16
Speaker
medically tuned AI system or large language model to really kind of give you a nice summary of the relevant things that you're looking for would really reduce my cognitive load.
00:24:28
Speaker
And then I can actually think critically about the information that I have in front of me.
00:24:33
Speaker
And I think the best examples would be is the patient comes in with septic shock.
00:24:38
Speaker
The urine is dirty.
00:24:43
Speaker
They get put on the usual antibiotics.
00:24:46
Speaker
But the AI, the summarization, might note to you that they've had resistant Pseudomonas in the past.
00:24:54
Speaker
So the cefepime that you started them on
00:24:56
Speaker
it may not be adequate.
00:24:59
Speaker
So I think the natural language processing, clinical summarization is low hanging fruit.
00:25:05
Speaker
Decision support tools.
AI Decision Support Tools
00:25:07
Speaker
This is kind of where I'm actually working on one decision support tool.
00:25:11
Speaker
And I think this is probably one of the more difficult parts and the one that's going to be scrutinized a little bit more by the FDA and things like that.
00:25:21
Speaker
So when you're actually using AI to make actual clinical decisions,
00:25:26
Speaker
Like, for example, differential generators, next best test.
00:25:32
Speaker
Like if the AI can help you decide what the next best test is.
00:25:36
Speaker
De-escalation of antibiotics.
00:25:37
Speaker
These are high-risk decisions that you'll be making.
00:25:43
Speaker
And there aren't many AI tools at this point that are actually applied.
00:25:49
Speaker
And I think this is more...
00:25:51
Speaker
because of the worry about the mistakes and the errors that might come with this.
00:25:57
Speaker
Because I make tools like this.
00:25:59
Speaker
And I'll tell you, Sergio, when I create these synthetic cases to put in there, it's better than us today.
00:26:07
Speaker
It's better than us today for differential generation to think in a statistical probabilistic way.
00:26:14
Speaker
rather than just more of, I've seen a few cases like this, so this is what I'm going to do.
00:26:19
Speaker
And I'll give you an example, is that, and this might go into kind of the next question you're asking, is that how do I use some of this?
00:26:29
Speaker
I have a tool that I made, it's a multi-agent tool, and I don't use it
00:26:37
Speaker
for clinical decision-making. I go back retrospectively with my difficult cases, and I'll put the cases in.
00:26:46
Speaker
And so I had a case of a young lady, she was in her 40s, came in with altered mental status, had a seizure, and the MRI showed temporal lobe enhancements.
00:26:55
Speaker
I'm like, okay, this must be HSV encephalitis.
00:26:58
Speaker
The patient's on acyclovir.
00:27:01
Speaker
The patient had renal dysfunction.
00:27:02
Speaker
So the acyclovir is not benign in that situation.
00:27:07
Speaker
And so, but the LP fluid, there was nothing.
00:27:10
Speaker
It was completely clean, no cells, no protein.
00:27:13
Speaker
The HSV PCR was negative.
00:27:16
Speaker
And then we got into like a multidisciplinary discussion and the consensus was to keep the acyclovir going.
00:27:24
Speaker
I kept the acyclovir going because at that point, that was the consensus.
00:27:27
Speaker
But I went back and this tool that I created, I call it BayesBuddy.
00:27:31
Speaker
It uses Bayesian reasoning and Bayesian probability.
00:27:34
Speaker
It finds likelihood ratios of tests and it gives you probabilistic differentials of what's the likelihood.
00:27:41
Speaker
And I gave it the data, obviously, without any patient info.
00:27:45
Speaker
I gave it kind of a similar case and it gave a probability of this being HSV encephalitis to be like 0.003%.
00:27:54
Speaker
And when I think about the actual risk and harm of the acyclovir in the setting of AKI,
00:27:59
Speaker
I could argue that the risk of the acyclovir and the AKI was higher.
00:28:03
Speaker
So in my own personal use, just for my own kind of like creating my own probabilities in my head, I often take my difficult cases, things that I'm not sure about, and I'll put it into my own multi-agent tool that I created.
00:28:18
Speaker
And I'll look to see how is the performance and how does it look objectively compared to what my decision-making was.
00:28:25
Speaker
So that's an example of a clinical decision support tool that
00:28:29
Speaker
I can't put into real practice as of yet because there's just a lot more scrutiny for it.
00:28:35
Speaker
And so that's an example of something I use and then potentially an example of where you could use as a clinical decision support tool.
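The Bayesian updating described for this kind of tool can be sketched as follows: convert the pre-test probability to odds, multiply by the likelihood ratio (LR) of each test result, and convert back. The LR values below are hypothetical placeholders, not validated numbers:

```python
# Bayesian post-test probability from a pre-test probability and a series
# of test results, each represented by a likelihood ratio.

def update_probability(pretest, likelihood_ratios):
    odds = pretest / (1.0 - pretest)  # probability -> odds
    for lr in likelihood_ratios:
        odds *= lr                    # each test result multiplies the odds
    return odds / (1.0 + odds)        # odds -> probability

# Hypothetical case resembling the one above: a modest pre-test probability
# of HSV encephalitis, then several negative findings (LR < 1 argues against).
pretest = 0.10
lrs = [0.05,  # negative HSV PCR (hypothetical LR)
       0.20,  # acellular CSF (hypothetical LR)
       0.50]  # normal CSF protein (hypothetical LR)
print(update_probability(pretest, lrs))  # falls to a small fraction of a percent
```

With numbers like these, the posterior probability drops far below the pre-test estimate, which is the kind of output that can then be weighed against the known harms of continuing a drug like acyclovir in AKI.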
AI in Imaging and Workflow Optimization
00:28:44
Speaker
and imaging analysis.
00:28:44
Speaker
So this existed even before where we are.
00:28:48
Speaker
So there's a lot of vision models that have come out where specifically the pre-training is for vision purposes for x-rays, MRIs, CTs, and things like that.
00:28:59
Speaker
There were neural networks that existed before this, one of which was called convolutional neural networks, which even at that time,
00:29:07
Speaker
It was as good as junior radiology attendings.
00:29:13
Speaker
Only the most experienced radiologists caught things that the AI didn't, even at that point.
00:29:21
Speaker
But this is getting better and better.
00:29:22
Speaker
So this is also one of the lower hanging fruit
00:29:25
Speaker
is that if you don't have an on-call radiologist at night, or you're in a rural setting, the AI has potentially gotten to the point where an application would be x-ray interpretation, which could be overread by the attending the next day.
00:29:38
Speaker
And this is not in prime time yet, but this is a potential application.
00:29:43
Speaker
Another one is for point-of-care ultrasound.
00:29:46
Speaker
where in the ICU, I use a lot of point-of-care ultrasound.
00:29:51
Speaker
But how can you make this more accessible to someone who hasn't had as much training?
00:29:56
Speaker
Well, there's AI tools now being integrated into a lot of the newer ultrasound machines, which will calculate IVC collapsibility.
00:30:05
Speaker
It'll identify pulmonary edema on a lung ultrasound.
00:30:08
Speaker
It'll calculate the ejection fraction.
00:30:10
Speaker
It can automatically calculate cardiac output based on some Doppler analysis.
00:30:15
Speaker
And so that's a potential application as well.
00:30:17
Speaker
And then workflow optimization.
00:30:20
Speaker
This is probably another area that AI could potentially do very well in.
00:30:26
Speaker
And for example, how can you triage
00:30:29
Speaker
patients quickly in the ER where they automatically get their assignment for step down versus ICU rather than the ER doctor calling and pleading to the intensivist to accept the patient.
00:30:43
Speaker
If the calculated severity and acuity scores are high enough,
00:30:47
Speaker
then the patient will automatically get put towards an ICU bed as opposed to a step-down bed or a floor bed.
00:30:53
Speaker
So this is potentially possible as well in the situation of like kind of workflow type of things.
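The triage idea above can be sketched as a simple rules layer on top of calculated severity scores. This is a hypothetical illustration only: the score names, thresholds, and bed tiers are assumptions for the sketch, not a validated acuity model or any product discussed in the episode.

```python
# Hypothetical sketch of score-based ICU triage routing.
# Thresholds and inputs are illustrative assumptions, not clinical guidance.

def triage_destination(news2: int, lactate: float, on_vasopressors: bool) -> str:
    """Route an ER patient to a bed tier based on simple acuity rules."""
    if on_vasopressors or news2 >= 7 or lactate >= 4.0:
        return "ICU"
    if news2 >= 5 or lactate >= 2.0:
        return "step-down"
    return "floor"

print(triage_destination(news2=8, lactate=3.1, on_vasopressors=False))  # ICU
```

In a real deployment, the routing decision would feed a bed-management system rather than print a string, and the scores themselves would come from a validated model.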
00:30:59
Speaker
And so these are kind of the examples, a few examples of how AI can be applied in the intensive care unit.
00:31:08
Speaker
Sherrod, before we go into more specifics of your use of AI for solving problems, one of the things that a lot of clinicians come to me and ask about is related to natural language processing for clinical documentation,
00:31:23
Speaker
thinking more of what they have to document on a day-to-day basis with their notes and patient visits.
00:31:29
Speaker
Now, I think for the ICU, I haven't really found the great tools.
00:31:35
Speaker
There's obviously a lot of kind of AI supported scribes that I think are great if you're having a conversation in an office, right?
00:31:43
Speaker
But I think that when you walk into the room of a patient who's intubated, on mechanical ventilation, and sedated, right, there's not a lot of conversation going on.
00:31:53
Speaker
Any thoughts in terms of AI helping us take care of that menial work, or what I call low-impact work, that we all kind of don't enjoy that much?
00:32:06
Speaker
Yeah, yeah, yeah, 100%.
00:32:07
Speaker
I think like one potential, like,
00:32:11
Speaker
Sergio, because how do we generate the H&P in our head for an ICU patient?
00:32:15
Speaker
It's often done through our clinical rounds.
00:32:19
Speaker
So we're talking about, we're hearing about the patient.
00:32:22
Speaker
So one potential option in that situation is to give an AI context.
00:32:27
Speaker
The audio ability of a lot of these large language models is getting better and better.
00:32:31
Speaker
You could potentially have an AI where you click the button on, or it might be in the room, or it could be wherever you're around it.
00:32:38
Speaker
and it's listening to rounds in context.
00:32:41
Speaker
And this AI could potentially be fine-tuned.
00:32:45
Speaker
So fine-tuned meaning: when the AI was trained, it was trained in a general way, but you can do something called fine-tuning, where you say, all right, AI, I'm going to change your neural architecture and weights to be specifically good at writing H&Ps and notes, developing differentials, and writing assessments and plans.
00:33:06
Speaker
And so that could be a fine-tuned AI where that's what it does.
00:33:10
Speaker
It may not do as well with math calculations or writing poetry, but it has specifically become better at that specific task.
00:33:21
Speaker
And so there are probably ways to write good notes, H&Ps, and progress notes.
00:33:30
Speaker
And it could probably take context if this is a recurring patient.
00:33:34
Speaker
You could look at previous notes.
00:33:36
Speaker
It could have the context of your progress note that you wrote the day before.
00:33:40
Speaker
And so it could probably give you good skeleton outputs of H&Ps, progress notes, assessments, plans.
00:33:48
Speaker
And the rest you can kind of just fill in and add in.
00:33:50
Speaker
Because we have to remember that whatever comes out isn't the final product.
00:33:54
Speaker
You're still the human in the loop, and you're still going in.
00:33:57
Speaker
and adding and changing things.
00:34:00
Speaker
And then the AI could potentially learn your practice pattern even better, how you write notes as well.
00:34:06
Speaker
And so that could be potentially added as well.
00:34:08
Speaker
So I think it's very much possible for you.
00:34:10
Speaker
But it would be a little bit of a different
00:34:13
Speaker
kind of workflow than in the ER or in the outpatient.
00:34:17
Speaker
I think the outpatient setting is much simpler to do because there's less ambient noise.
00:34:25
Speaker
If there's an audio system in the room, it can just hear the conversation between the clinician and the patient.
00:34:30
Speaker
So I think that's a little bit easier to apply.
00:34:32
Speaker
The ICU might be a little bit more difficult, but the audio features are getting so good.
00:34:37
Speaker
I don't think we're that far away from that.
00:34:39
Speaker
Yeah, and I think that there's already applications that kind of take this principle outside of maybe a clinical conversation, but like there's all these applications and even devices that are AI powered that you can use for a meeting, right?
00:34:54
Speaker
And then it will create a summary of the meeting, with to-dos and follow-ups and stuff like that.
00:34:59
Speaker
So I think eventually that is something along those lines, but much more clinically oriented.
Personal Use of AI by Dr. Patel
00:35:05
Speaker
So let's talk about how you use AI in your day to day.
00:35:10
Speaker
And I would like to maybe have one or two examples in the medical arena with some detail.
00:35:16
Speaker
But I also want you to share with us an example that's non-medical.
00:35:20
Speaker
I'm definitely interested in that as well.
00:35:22
Speaker
So go ahead, Sherrod.
00:35:25
Speaker
And so I program my own agents.
00:35:29
Speaker
So let me kind of first start off by just identifying and just really defining what an agent, AI agent is.
00:35:36
Speaker
What is agentic AI?
00:35:37
Speaker
Because you're going to hear this more and more going forward.
00:35:41
Speaker
So if you were to look at 10 different sources, you'll probably hear a slightly different definition.
00:35:47
Speaker
So I'll give you my definition.
00:35:49
Speaker
An agentic AI or an AI agent is essentially the large language model.
00:35:53
Speaker
Let's say a ChatGPT model, like the GPT-4o model.
00:35:59
Speaker
And now it has the ability to almost autonomously, semi-autonomously,
00:36:05
Speaker
make its own decisions.
00:36:06
Speaker
So for example, if you asked it a question about the latest guidelines for tidal volume titration in ARDS, it could rely on its own kind of training database.
00:36:22
Speaker
And that is the kind of
00:36:26
Speaker
information that's in the literature and has been cemented, so it's unlikely to hallucinate about that. But it could say, all right, just to be sure, I'm going to search the internet, I'm going to search Cochrane, I'm going to search UpToDate, I'm going to search PubMed to get a better research report on this. And so it could potentially just connect to the internet, so you have these API calls to the internet, connect to PubMed, connect to Cochrane, and make these decisions and pull that information
00:36:55
Speaker
and actually be able to have a separate agent that communicates with this first agent and it'll organize that data.
00:37:05
Speaker
And then you could have a third agent that talks to these other agents as the fact checker and the hallucination checker.
00:37:12
Speaker
And so the output you get is kind of a multi-agent type of thing.
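That multi-agent pattern (a researcher, an organizer, and a fact-checker passing work between them) can be sketched minimally in Python. The role prompts and the `llm` stub below are illustrative assumptions, not the guest's actual tool; a real system would call an LLM API and external search services in place of the stub.

```python
# Minimal sketch of a multi-agent pipeline: each "agent" is just a role
# prompt wrapped around the same model call, chained in sequence.

from typing import Callable

def make_agent(role_prompt: str, llm: Callable[[str], str]) -> Callable[[str], str]:
    """Return an agent: a function that prefixes its role onto every request."""
    return lambda task: llm(f"{role_prompt}\n\nTask: {task}")

def pipeline(question: str, llm: Callable[[str], str]) -> str:
    researcher = make_agent("You search PubMed/Cochrane and report findings.", llm)
    organizer = make_agent("You organize raw findings into a structured summary.", llm)
    checker = make_agent("You fact-check the summary and flag hallucinations.", llm)
    findings = researcher(question)   # agent 1: gather information
    summary = organizer(findings)     # agent 2: structure it
    return checker(summary)           # agent 3: verify it

# Stub model so the sketch runs without an API key.
stub = lambda prompt: f"[model output for: {prompt.splitlines()[0]}]"
print(pipeline("Latest tidal volume guidance in ARDS?", stub))
```

Frameworks like AutoGen or CrewAI wrap this same idea (roles, hand-offs, tool calls) in far more capable machinery, but the control flow is essentially this chain.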
00:37:15
Speaker
So this is what I primarily use.
00:37:18
Speaker
And so up until very recently,
00:37:21
Speaker
When you used ChatGPT, Claude, or Gemini, it wasn't an agentic process.
00:37:27
Speaker
But now in ChatGPT, there's this deep research tool, which works very, very well.
00:37:36
Speaker
And so what I used to do until very recently is I would have a research tool.
00:37:41
Speaker
So if I wanted to write a new paper on fluid management in ARDS, I would have my agents search the web, compile all the information, fact check it,
00:38:01
Speaker
And so there would be four or five agents each with its own role, and I would get all the information back.
00:38:07
Speaker
And then I would look at that and then I could potentially start writing at that point once I had the information.
00:38:13
Speaker
The second way I use this is for hypothesis generation.
00:38:18
Speaker
So I actually use a really cool audio feature on the Gemini app on the phone where if I'm thinking about non-invasive ways to identify intra-abdominal pressure elevation using ultrasound,
00:38:37
Speaker
I would use this Gemini tool and we would just hypothesis generate.
00:38:41
Speaker
And I would say, all right, so signs of intra-abdominal pressure elevation, what would they be on ultrasound if I was thinking about it?
00:38:49
Speaker
And it might say, okay, it might be a smaller than usual IVC or collapsing IVC despite a high CVP.
00:38:55
Speaker
And so we'd go back and forth and we kind of really refine this thought process and hypothesis.
00:39:01
Speaker
And then once I've gotten to the point where it's clean, it sounds good, then I might go and apply this in a research fashion.
00:39:07
Speaker
And so hypothesis generation is probably one of the biggest use cases I use this for.
00:39:14
Speaker
And again, most of the time I'm programming my own agents to do this.
00:39:20
Speaker
But now just the foundation models are so good, you could just use the foundation models to do things like that.
00:39:26
Speaker
So these are some of the medical use cases. But non-medical?
00:39:35
Speaker
In our private conversation before, you know, I was telling you that my first love is philosophy.
00:39:41
Speaker
And I love to take these books that I loved and thought I knew, and I'll put them into something called NotebookLM.
00:39:52
Speaker
It's a Google product.
00:39:53
Speaker
And you put it in there, and this tool can create a podcast.
00:39:58
Speaker
And I could listen to it. It's actually a really cool feature where it has two people talking and it sounds like a real podcast. And so this gives me kind of a high-level understanding of that book again. And then it has auto prompts on there to ask the next best questions and say, all right, how can I understand this book more deeply? And so my own learning
00:40:25
Speaker
is like having a personal tutor.
00:40:27
Speaker
That's my kind of non-medical kind of use case for AI.
00:40:33
Speaker
And that actually has probably been my favorite use case is being able to learn difficult material in a more efficient way than I was before.
AI in Research and Coding
00:40:46
Speaker
And I think that one of the uses that is very, very
00:40:52
Speaker
simple, and like you said, because you're feeding it the information, it's less likely to hallucinate: giving it a PDF of a paper or an article and asking, can you summarize the five most important points here, right? Or can you tell me what they did in the methodology? And it does. I mean, it takes like two seconds, right?
00:41:16
Speaker
Now, I think that works very well when you've already read the paper.
00:41:20
Speaker
It gives you other insights as opposed to doing that instead of taking the time to read the paper.
00:41:26
Speaker
Because we were talking about the effort, a lot of the effort that we do for things that maybe AI can do quickly are part of our building blocks for expertise.
00:41:35
Speaker
So in certain areas, it's probably worth using it as a co-pilot as opposed to just the only source.
00:41:45
Speaker
But those are all great, great uses.
00:41:48
Speaker
I was going to ask you also in terms of when you say you program the agent, can you give us a little bit more specifics about that?
00:41:55
Speaker
Like for me, what does that really mean?
00:41:57
Speaker
Is that you program with something like Python or you program with prompts?
00:42:03
Speaker
To answer that, I think it goes back to how I learned all of this, where about six years ago, I became frustrated with kind of the
00:42:15
Speaker
end-of-life care with patients. I felt like every intensivist might have a different impression of what a patient's trajectory would be, and different specialties would have a different trajectory, and I didn't think we were doing right by the patient by providing all of this variable information. So six years ago, I taught myself Python because I wanted to create this mortality prediction tool. And then we did, and I actually
00:42:42
Speaker
worked with a few engineers over at Rowan, which is associated with our medical school, and we created this tool.
00:42:49
Speaker
And I learned a lot.
00:42:50
Speaker
I made a lot of mistakes.
00:42:51
Speaker
I learned so much from it.
00:42:53
Speaker
And then I also learned SQL as well to kind of
00:42:57
Speaker
pull data from these huge data repositories.
00:42:59
Speaker
So I learned this building that tool before.
00:43:03
Speaker
And so I primarily program in Python specifically.
00:43:08
Speaker
And so when I'm programming these things, I'm using Python, and I use these agent frameworks.
00:43:15
Speaker
So there's a lot of these out there now.
00:43:17
Speaker
Microsoft has one which is called Autogen.
00:43:20
Speaker
There's another one called CrewAI, and there's another one I use which is called Agno.
00:43:25
Speaker
And these are all agentic frameworks, and one of the languages you can use is Python.
00:43:34
Speaker
But what they do is, where you may have needed 300 lines of code before, they've created a
00:43:45
Speaker
simplified way to create these agents with Python, where it would be 100 lines of code as opposed to 300 or 400 lines of code.
00:43:54
Speaker
And so for myself, as a programmer, I'm 201-level, but I know enough code, and I know enough about how to prompt an AI, that I can create more advanced code and be able to troubleshoot things.
00:44:11
Speaker
And so this is how I create more advanced
00:44:16
Speaker
apps and tools that are beyond my coding abilities.
00:44:20
Speaker
But I primarily code in Python and that's how I kind of create these kind of agent tools.
00:44:26
Speaker
Thanks for sharing that.
00:44:27
Speaker
And just a quick question.
00:44:29
Speaker
When I talk with family, with young kids going into college, I would always encourage them: you know, learn to code.
00:44:37
Speaker
Now, if you are young and starting or if you're old like me and want to move forward, do you think that learning to code is the key or just learning more about how to prompt AI?
00:44:46
Speaker
Since I keep reading that AI, probably in the very near future, will be as good a coder as any human.
00:44:54
Speaker
Yeah, no, that's a great question.
00:44:56
Speaker
And I think I'd have to answer that in kind of a twofold way.
00:45:00
Speaker
One, should you learn to code for the practical applications?
00:45:05
Speaker
And two, should you learn to code because it can teach you how to think cleanly?
00:45:11
Speaker
For a young person, learning code really teaches you how to think logically in a way that may not have been there in your initial primary college education.
00:45:22
Speaker
It's really cleaned up my thinking.
00:45:24
Speaker
And now I think in terms of almost like Legos, where when I see a problem, I'm thinking of code blocks in my head that I would use to solve that problem.
00:45:35
Speaker
So for a young person coming up, 100%, I think, learn to code.
00:45:41
Speaker
you may not be able to use those skills directly, but just being able to think in a code type fashion will give you a leg up in a competitive, whatever competitive market you're going to be in.
00:45:53
Speaker
And as far as the pure benefit of your brain architecture changing and being able to think more clearly, I think that benefit is there no matter what the practical application.
00:46:06
Speaker
And then if you're older,
00:46:09
Speaker
and you're thinking about learning to code, yeah, I think it's great. But let's say you don't want to learn to code: the no-code tools are getting better and better. So the inertia, the blockades that existed to get into being an AI practitioner, are getting smaller, even in the past two years,
00:46:31
Speaker
where, for things I needed to code before, there are so many no-code tools now that you can get an AI app up and running in a few hours that's quite complicated.
00:46:45
Speaker
And it's really just kind of a plug and play thing.
00:46:47
Speaker
So as long as you actually know what things you want to put together, the more important thing is understanding the actual details of what connects together.
00:46:57
Speaker
And so if you're learning to code and you want to get into AI, I would be reading a little bit more about what is a transformer model, what is embedding, what is attention, what is a vector database, all these things.
00:47:13
Speaker
The more you understand it, the more you'll learn how to connect these things, almost in a Lego type of fashion, and create the things that you'd like to create.
00:47:25
Speaker
So as we move forward to closing, obviously, you did mention some of the challenges and cautions,
Risks and Limitations of AI
00:47:31
Speaker
You talked about hallucinations and trust.
00:47:35
Speaker
Could you just tell us, I mean, from your perspective right now, what are some of the biggest risks of using AI in a high-stakes environment like the ICU?
00:47:42
Speaker
Yeah, I don't think we're ready as of yet
00:47:47
Speaker
to just use the foundation model.
00:47:49
Speaker
But there are different approaches to doing this.
00:47:53
Speaker
So one, you can use an architecture called RAG, Retrieval-Augmented Generation, where the large language model is connected to the ground truth: the guidelines, the antibiogram of your hospital, the vitals, and the best methods to wean a ventilator.
00:48:16
Speaker
So it's connected to that.
00:48:18
Speaker
And the system prompt of the large language model instructs it to rely only on this external information. It still has its usual kind of decision-making, calculating, and prediction abilities, but it relies on this outside source of information to make decisions.
00:48:36
Speaker
That's been shown to reduce hallucination.
00:48:38
Speaker
It doesn't get rid of it, but it reduces hallucinations.
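The RAG pattern described above can be sketched in a few lines: retrieve trusted local documents, then ground the model's prompt in them. This is a toy illustration; the documents are invented examples, the keyword-overlap scoring stands in for real vector search, and a production system would pass the prompt to an actual LLM.

```python
# Toy sketch of Retrieval-Augmented Generation: rank a small trusted
# corpus against the query, then build a prompt grounded in what's retrieved.

docs = [
    "Hospital antibiogram: 92% of E. coli isolates susceptible to ceftriaxone.",
    "Ventilator weaning protocol: daily SBT when FiO2 <= 0.4 and PEEP <= 5.",
    "ARDS guideline: target tidal volume 4-8 mL/kg predicted body weight.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def grounded_prompt(query: str) -> str:
    """Build a system prompt that restricts the model to retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer ONLY from the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("what tidal volume should I target in ARDS?"))
```

The "only from the context" instruction is what reduces, though does not eliminate, hallucination: the model is pushed to cite the retrieved ground truth rather than free-associate from its training data.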
00:48:41
Speaker
And so I think one of the biggest risks is if a clinician were to use ChatGPT today for clinical decision making without these additional kind of fail-safes and guardrails, there could be issues and you could potentially apply things to your patient that may not even exist.
00:49:01
Speaker
It could be hallucinations.
00:49:03
Speaker
The other thing is just ICU research, right?
00:49:06
Speaker
So you creating...
00:49:10
Speaker
writing papers for ICU research and not fact-checking things and not fact-checking references.
00:49:16
Speaker
That can be a risk as well.
00:49:18
Speaker
So hallucinations still remain a big problem for large language models, but there's ways to mitigate and reduce this risk as well.
00:49:28
Speaker
And I think that that's where really I feel that a lot of it is right now that you still need, like you said, to have the human in the loop.
00:49:37
Speaker
And for high stakes environments, you need experts in that loop, right?
00:49:42
Speaker
I mean, and your expertise is important there.
00:49:44
Speaker
I think there's a great case of a lawyer who generated some information
00:49:51
Speaker
where he had basically a couple of cases generated by AI to support his argument.
00:49:59
Speaker
And then they were all hallucinated.
00:50:00
Speaker
None of them were real.
00:50:02
Speaker
Eventually they figured it out and he lost his license, right?
00:50:07
Speaker
And even though he didn't know they weren't real, I think the judge told him, look, I mean, it's your responsibility what you present.
00:50:13
Speaker
So he was disbarred.
00:50:14
Speaker
So obviously that you have to be careful, right?
00:50:17
Speaker
There has to be a human in the loop.
00:50:20
Speaker
What are you most excited about in the short term regarding this whole area and critical care?
Future AI Developments
00:50:28
Speaker
So can I ask for a detail about that?
00:50:33
Speaker
Do you mean the AI in general?
00:50:36
Speaker
Because in general, I would say the new reasoning models and things like that are coming out.
00:50:40
Speaker
Or most excited about applications in the ICU?
00:50:47
Speaker
The things that I'm most excited about in just AI in general, which will have applications in medicine: number one is the AI agents.
00:50:54
Speaker
Number two is the new reasoning model.
00:50:57
Speaker
So these reasoning models are incredible.
00:51:01
Speaker
And one of my best use cases for this, I think, going forward is that these reasoning models are able to
00:51:07
Speaker
use something called chain-of-thought reasoning, which almost mimics human-type thinking.
00:51:13
Speaker
Um, and it's able to really kind of put together a large amount of information,
00:51:21
Speaker
and think about it non-linearly and solve really complex problems.
00:51:27
Speaker
It has its issues still, but the outputs are incredible.
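Chain-of-thought prompting, as named above, largely comes down to how the request is phrased: the model is told to lay out intermediate reasoning before committing to an answer. A minimal, hypothetical sketch (the wording below is an assumption for illustration, not a quoted prompt from the episode):

```python
# Hypothetical sketch of a chain-of-thought style prompt: instruct the
# model to reason through the steps before stating a final answer.

def cot_prompt(question: str) -> str:
    """Wrap a question in a step-by-step reasoning instruction."""
    return (
        "Think step by step. First list the relevant physiology, then reason\n"
        "through each mechanism, and only then state your final answer.\n\n"
        f"Question: {question}"
    )

print(cot_prompt(
    "How would elevated intra-abdominal pressure affect IVC size on ultrasound?"
))
```

Dedicated reasoning models bake this behavior in during training, but the same idea can be approximated on a general model purely through prompting like this.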
00:51:32
Speaker
The ultrasound example I gave you earlier, of coming up with a non-invasive way, I used a reasoning model for that.
00:51:38
Speaker
And I told it to use all your knowledge about the basic science
00:51:43
Speaker
of how ultrasound transmission works and how intra-abdominal pressure works, and to put together how elevated intra-abdominal pressure would affect the size of the IVC through the ultrasound lens.
00:51:56
Speaker
And so this type of thinking, these reasoning models are incredible at.
00:52:00
Speaker
So I'm very, very excited about that.
00:52:03
Speaker
And taking these reasoning models and AI agents, I think their applications in the ICU are going to be incredible.
00:52:11
Speaker
So as the reasoning gets better and better, the outputs will get better and better and more accurate.
00:52:17
Speaker
And so I think the application of both of these in the ICU will be really, really interesting.
00:52:25
Speaker
What would you recommend or how would you encourage other clinicians to start their journey or become better AI practitioners?
Engaging with AI: Advice for Clinicians
00:52:36
Speaker
to practice, to just get in there, get your hands dirty.
00:52:42
Speaker
There are great courses on Coursera.
00:52:45
Speaker
Sergio, I wake up every morning and I'm very excited to wake up because we live in an era where you can learn anything without actually going to a college campus.
00:52:54
Speaker
And so there are websites like Coursera or Udemy.
00:53:01
Speaker
There's open source MIT courses.
00:53:02
Speaker
There's open source Harvard courses.
00:53:04
Speaker
There's open source Stanford courses on all of these topics.
00:53:08
Speaker
I suggest just getting in there. There's an AI in Healthcare course on Coursera, by Stanford, which is a really nice kind of primer course
00:53:21
Speaker
on things you need to know if you want to get into this field.
00:53:24
Speaker
Once you have that kind of primer, then you can actually just start practicing and trying to find these no-code tools.
00:53:32
Speaker
If you want to program, great, take a programming class or Python course.
00:53:36
Speaker
But if you don't want to learn how to program, there's so many no-code tools for which there's courses as well on Udemy.
00:53:46
Speaker
There's a tool called n8n, which is basically, you're kind of just connecting things together in a visual way to create AI applications.
00:53:57
Speaker
So there are courses on n8n.
00:53:59
Speaker
And so really, I would say just take a beginner's course on any of these websites, find these low-code, no-code tools, learn how to use one or two well, and just start practicing and tinkering with it.
00:54:12
Speaker
And I think the same applies, right, to your daily use outside of medicine.
00:54:20
Speaker
You keep playing and you start learning, right?
00:54:23
Speaker
So, for example, what used to be a prompt on Google, best restaurants in whatever city you're visiting, right, now it can become a much more detailed prompt for ChatGPT, and it gives you actual recommendations that are usually pretty good.
Personal Insights and Conclusion
00:54:42
Speaker
Well, I would like to close the podcast with a couple of questions that are unrelated to AI.
00:54:47
Speaker
And I hope that you do not use AI to answer these.
00:54:50
Speaker
Would that be okay?
00:54:53
Speaker
Number one is, and I know you talked about philosophy, but you talked about books.
00:54:59
Speaker
So I like to ask our guests: is there a book or books that have influenced them significantly, or a book that they have gifted very often to other people?
00:55:13
Speaker
One is Meditations by Marcus Aurelius.
00:55:18
Speaker
I love this book because, so, Marcus Aurelius was a Roman emperor, and he had his own diary. He practiced something called Stoic philosophy, and he basically just wrote in it.
00:55:30
Speaker
And this was never meant to be read by anyone.
00:55:32
Speaker
But eventually someone got their hands on it and they published it.
00:55:36
Speaker
And it's a really nice way to look at how this guy, who was the most powerful man in the world, approached just waking up and getting his mindset right to be able to function in the most virtuous, ethical way possible and to maintain his
00:55:56
Speaker
wellness and his mental health, as well as functioning well and creating the best Roman empire he could create.
00:56:03
Speaker
So I think a lot of the lessons in this book hold true today.
00:56:08
Speaker
Another one is Thinking, Fast and Slow.
00:56:11
Speaker
So this book might be one of the
00:56:13
Speaker
two or three books that changed how I view the world.
00:56:16
Speaker
So Thinking, Fast and Slow was written by these two now-Nobel-Prize-winning behavioral scientists and psychologists from Israel, Amos Tversky and Daniel Kahneman.
00:56:32
Speaker
It scientifically and experimentally showed the fallibility of the human mind, right?
00:56:39
Speaker
So a lot of economics research assumed a rational character, a rational human being.
00:56:45
Speaker
This kind of flipped it on its head and saying, well, you know what?
00:56:48
Speaker
Our decision-making actually isn't rational.
00:56:50
Speaker
We're not very good at statistical thinking.
00:56:52
Speaker
We're not very good at making money decisions.
00:56:55
Speaker
And so when I read this, it really, really changed the way
00:56:59
Speaker
I viewed decision making, I had more compassion for myself and I gave myself more slack.
00:57:05
Speaker
And it really directed
00:57:10
Speaker
my interests going forward.
00:57:11
Speaker
And I honestly, without this book, I probably wouldn't have picked up coding and AI and AI applications in medicine.
00:57:18
Speaker
So that book probably is one of the two or three books that changed the way I view the world.
00:57:24
Speaker
I think they're both phenomenal books and I would definitely put them in the show notes.
00:57:28
Speaker
So thanks for sharing that with us.
00:57:31
Speaker
The second question, Sherrod, is could you share something with us that you changed your mind about recently?
00:57:38
Speaker
Oh man, I, I think one big thing for me is that I am always trying to get better at something.
00:57:47
Speaker
Um, so for example, I I'm 41 years old, but my goal this year is to dunk a basketball.
00:57:54
Speaker
So I'm doing jump training, and my wife's just watching me.
00:57:58
Speaker
It's like, I'm on the couch doing split squats and hops and kettlebells and all that stuff.
00:58:07
Speaker
In the same way academically, I have intellectual ADD and I'm always trying to learn something new.
00:58:12
Speaker
But what I've found is that this really fed my anxiety.
00:58:17
Speaker
It wasn't great for my mental health because I always felt like I needed to be on the go.
00:58:21
Speaker
So the one thing I've changed my mind on in the past few years is that there's nothing wrong with stillness.
00:58:27
Speaker
And in fact, actual stillness, just sitting there and doing nothing and just being present,
00:58:34
Speaker
has made me more creative, and my output has actually improved as well.
00:58:41
Speaker
Not to mention improving the relationships with my wife, my family, my friends, and things like that.
00:58:46
Speaker
So I have a long way to go with this concept, but the concept of stillness is something that I've changed my mind about in the last few years.
00:58:55
Speaker
And the other thing you could do is just lower the rim.
00:59:01
Speaker
That's what I did.
00:59:02
Speaker
That's what I did to solve that itch anyways.
00:59:08
Speaker
So that's, I think that's what I'm going to do.
00:59:10
Speaker
Well, let me know how it goes.
00:59:12
Speaker
So the last question is what would you want every intensivist to know?
00:59:17
Speaker
It could be a thought or quote or a fact as we close.
00:59:22
Speaker
I think the thing I would want every intensivist to know, especially in regards to this oncoming
00:59:29
Speaker
AI wave, it's a tsunami, it's coming.
00:59:32
Speaker
And what I would recommend is just surf it, ride the wave.
00:59:37
Speaker
And for that, we're just going to have to be accepting of these changing workflows and these tools coming along.
00:59:48
Speaker
The understanding is that
00:59:50
Speaker
If you work with it, your patient care, your accuracy, your cognitive reserve, all these things could potentially get better.
00:59:58
Speaker
So I think that would be my parting message for intensivists on this topic.
01:00:04
Speaker
And I think this is a perfect place to stop.
01:00:06
Speaker
And, Sherrod, thank you so much for sharing your expertise, your time, and your enthusiasm.
01:00:11
Speaker
I think that you're right.
01:00:13
Speaker
I like what you said.
01:00:14
Speaker
So I guess our closing comment is jump in the water, start paddling, and ride that wave.
01:00:22
Speaker
Hope to have you back soon.
01:00:24
Speaker
I'm sure that in a couple of months, everything that we talked about will be changed.
01:00:27
Speaker
But we'll definitely have you back and maybe to talk about other clinical topics as well.
01:00:32
Speaker
Sherrod, thank you very much.
01:00:34
Speaker
Thank you, Sergio.
01:00:35
Speaker
Thank you for having me.
01:00:38
Speaker
Thank you for listening to Critical Matters, a sound podcast.
01:00:42
Speaker
Make sure to subscribe to Critical Matters on Apple or Google Podcasts and share with your network.
01:00:48
Speaker
Sound's transforming the way critical care is provided in hospitals across the country.
01:00:52
Speaker
To learn more, visit www.soundphysicians.com.