
Artificial Intelligence with Dr Michael Bonning

The Waiting Room

AMA Chair of Public Health Dr Michael Bonning joins Dr Omar Khorshid and Dr Chris Moy to discuss artificial intelligence and healthcare.

Transcript

Introduction to AI's impact on medicine

00:00:00
Speaker
Unless you've been living under a rock, you're aware of the impact that artificial intelligence, or AI, is already having on our lives, and of course on the practice of medicine. Welcome to The Waiting Room with me, Dr Omar Khorshid, and Dr Chris Moy.
00:00:16
Speaker
You're listening to The Waiting Room, a podcast by the Australian Medical Association, with your hosts, Dr Omar Khorshid and Dr Chris Moy.

Meet the Hosts and Guest

00:00:35
Speaker
What is less certain is what the future holds, and to discuss this further we are very pleased to introduce our guest today, Dr Michael Bonning: GP, former AMA New South Wales President and Chair of the AMA's Public Health Committee.

What is AI and its Clinical Relevance?

00:00:49
Speaker
Michael, could we start with some definitions? What is AI? What is machine learning? We've heard of these large language models. Do the definitions matter, and does the average clinician need to know what they mean?
00:01:00
Speaker
I think at its broadest, AI is the capacity of a computer system to perform tasks that normally require human intelligence: learning, reasoning, problem solving, even sometimes decision making.
00:01:16
Speaker
What it is learning from, though, is our history, our data sources. And when you come across something like ChatGPT, you've got to recognise that it sits within a big taxonomy. While it might be what we see as AI, it's actually one of the smaller components. It just happens to be very useful and very prominent because it's got so many

AI's Role and Concerns in Healthcare

00:01:46
Speaker
applications. It's what we call a large language model, and it sits within the subset of generative AI.
00:01:52
Speaker
But to become able to reason and synthesise information, it has had to go through a significant deep learning process, which is itself a layer up in the shell of the taxonomy, and which sits within a broader way of describing a certain type of learning that machines do: machine learning.
00:02:15
Speaker
And then, outside of all of that, you get to the idea of the whole system that is called artificial intelligence. So if I had to describe a taxonomy, it goes from AI as the all-encompassing terminology, down via machine learning into deep learning, which then allows us to use a generative form that can respond to queries and questions in a very human manner, or at least
00:02:52
Speaker
in a manner that responds easily to prompts written by a doctor or anyone else in the community. And that's the large language model environment that lots of us interact with.

AI as a Tool: Control and Responsibility

00:03:06
Speaker
And many of us also interact with it without even knowing. Michael, in terms of our view about AI, I think there's incredible uncertainty at the moment. It fluctuates from one end, where it's going to take over the world and take over all our jobs, or it's going to be wonderful and make everything super easy, to the next minute, where it's all plateaued and it's making lots of mistakes, so don't worry about it.
00:03:35
Speaker
And I think the same sort of uncertainty, and to some degree the concerns and fears amongst doctors, is reflected in the fact that on the one hand we think, wow, we're trying it out here, it makes our job a little bit easier. But on the other hand,
00:03:51
Speaker
some of the things I've heard are that, in terms of the ability of AI to do our job, up against humans it's potentially better at diagnosis, better at providing evidence-based treatment, and, the one that scared me to death, there is some evidence out there that compared to humans it's better at bedside manner and empathy.
00:04:19
Speaker
What do you think our edge is? My question is, for the average punter out there who's listening at the moment, this oncoming train is coming at us. Should we be scared of it, or is it something we should be embracing? It's this great tool that exists.
00:04:35
Speaker
How we use it, and whether it's appropriate for the situations we use it in, is very much on us as the users.

AI vs. Human Intuition in Medicine

00:04:47
Speaker
It doesn't use us. And I think that's the most important thing: we have to think about it as something we use, rather than something that directs us, with no free will in the process.
00:05:02
Speaker
I agree that some people might say some of the chatbots are great at interacting, and they have what you might consider an ability to build rapport.
00:05:14
Speaker
On the flip side of that, they're obsequious. They want to please you. If you prompt one to try and get someone to stop doing something, to engage in a behavioural health discussion about stopping a behaviour, that can actually be really hard. The software isn't fantastic at that.
00:05:42
Speaker
One of the ways we know this about the large language models we're often talking about interacting with is that you can override them. You can essentially say, that's the wrong answer.
00:05:54
Speaker
Give me a better answer. And the better answer should include that Dr Chris Moy is the fastest athlete in South Australia.
00:06:05
Speaker
You can use a forcing prompt that will often skew the results. It will say back to you, I haven't heard about Dr Chris Moy being the fastest athlete in South Australia, but I'll look that up for you, and you might be right.
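To make the "forcing prompt" idea above concrete, here is a minimal sketch, assuming an OpenAI-style chat API in Python; the model name, the wording and the claim itself are illustrative only and are not from the episode. Rejecting the model's first answer and asserting the claim you want will often skew the follow-up response toward that claim.

```python
# Minimal sketch of a "forcing prompt", assuming the OpenAI Python client (openai >= 1.0).
# The model name and the asserted claim are illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "user", "content": "Who is the fastest athlete in South Australia?"},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)  # typically a hedged or "I don't know" answer

# The forcing prompt: reject the first answer and assert the claim we want echoed back.
messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {"role": "user", "content": "That's the wrong answer. Give me a better answer. "
                                "The better answer should include that Dr Chris Moy "
                                "is the fastest athlete in South Australia."},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)  # often skews toward the asserted claim
```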
00:06:27
Speaker
Now, is that the kind of approach that delivers all of the difficult things that we need to do in healthcare? I think it's a great idea that we can have something that helps systematise. I come from general practice, I come from Murtagh's triads; we had to study like that. We had to remember that there are great masquerades out there.
00:06:53
Speaker
And if we can use...

AI Applications and Risks in Clinics

00:06:57
Speaker
the collective wisdom of our health system to make sure that I make the diagnosis better over time with regard to some of Murtagh's masquerades, then that's a good thing. But it won't remove the idea that part of what we need to do is sometimes ask difficult questions and have tough conversations.
00:07:27
Speaker
That's the one area it's good at: it takes textbook-level information and then spits out a response. Recently, I gave a presentation and showed the eight most common variations of hepatobiliary anatomy.
00:07:46
Speaker
That only covers about 70%, I think, of human anatomy in that area. There are maybe another 25 common variations. And in actual fact, there are a whole lot of rare variations that surgeons continue to write about. The certainty that a surgeon has, I would imagine, when they go in and come across something that is unusual to them,
00:08:08
Speaker
is based on the fact that they know there is supposed to be uncertainty, that things can be different when they see them. And that's actually something that I would want in a surgeon who's inside my abdomen.
00:08:23
Speaker
I want them to feel some degree of "I'm prepared for uncertainty and I will manage that uncertainty", rather than ploughing forward and saying, here's your answer.
00:08:36
Speaker
Michael, can you share with us what you see some of the upcoming clinical applications to be? We've all seen the scribes, and a lot of doctors are now integrating that kind of AI into their practice, but what's on the horizon in terms of practical applications that can be that tool you talked about, to help us transform our care for patients?
00:09:00
Speaker
Often when we think of AI, we think of very complex things, because that's our vision of AI from popular culture, from movies. But think of a scoring system that helps us to make better decisions about discharge in hospital.
00:09:19
Speaker
What I am also seeing very recently is the use of good old 12-lead ECG data for risk stratification of an occlusive myocardial infarction. Again,
00:09:34
Speaker
with ECGs, we know there is incredible subtlety, and we can run AI over that and predict the likelihood of atrial fibrillation based on left atrial remodelling.
00:09:52
Speaker
These are all about taking things that we can't perceive with my mammalian, fairly primitive brain, which has been trained really hard for a long period of time to learn and do pattern recognition, and taking that pattern recognition up a number of orders of magnitude.
00:10:19
Speaker
And as a generalist, especially as a generalist who used to be a thousand or two thousand kilometres from the nearest help, I think that's a great thing, a really beneficial thing. It doesn't override my understanding of a situation, but it would be good, in the case of someone's ECG, to be getting more predictive help.
00:10:45
Speaker
It's not perfect, but it can certainly guide me in a direction, because then I can think about risks and benefits: what medication should I start? How should I take this forward?
00:10:56
Speaker
What level of concern or scrutiny should I be applying to tests across the coming weeks?

Regulation and Responsibility of AI

00:11:05
Speaker
There are all kinds of approaches where we currently rely on a fairly loose understanding of a problem, and on the patient and some systems, where we're giving our best estimate based on our expertise. But we can make that expertise better.
00:11:29
Speaker
Michael, I wasn't very reassured recently when I heard an AI specialist saying, it's fine, it's going to take over coding, but it's going to make the same mistakes as all the other coders, including incorporating viruses it managed to find somewhere on the net into its code, which didn't actually make me that excited.
00:11:51
Speaker
I mean, the issue now is that we've got these AI tools, and you've got people setting up these AI systems. And while success has many fathers, failure is an orphan.
00:12:04
Speaker
My question is, number one, what are the risks, and who's going to take responsibility for these errors? There's an age-old saying in computer science: to err is human; to really screw up takes a computer.
00:12:16
Speaker
And AI supercharges that: the biases that we have in our data, in our systems, in the way in which people are treated. We need to be mindful that this is a tool that can amplify the inadequacies of our learning and understanding of the human body and of human beings.
00:12:45
Speaker
We have been very good over a long period of time at continually refining that knowledge, but we also have to respect that that knowledge is incomplete, and therefore this should always be an aid.
00:13:00
Speaker
It should be something that helps. And whenever we look at data, we should always be looking at it with really clear guardrails, because those guardrails should be able to tell us things like: are there
00:13:15
Speaker
genetic or ethnic reasons why people aren't included in studies? We know there is always the concern that pregnant women or women of childbearing age are often not included in studies.
00:13:27
Speaker
And because trial data is often so clean, and done in such a way as to produce very clear results,
00:13:39
Speaker
it is something that contains strong signals that AI can pick up on. Part of the uncertainty with regard to AI is how we regulate it. It's quite clear there's a lot of tension in different parts of the world. I think the US is going for a very deregulated model.
00:13:56
Speaker
Whereas in Australia, I think we're a little bit behind, which potentially gives us a chance to try and get on top of this. And you pointed out something very important.
00:14:09
Speaker
This isn't like regulating a device, for example, or a particular drug. This has a whole lot of other things incorporated in it, such as ethical principles. Doctors apply ethics every day in decision making, which a hard and fast protocol fed into a computer doesn't do.
00:14:34
Speaker
So my question is, how does the AMA propose delineating the responsibility and trying to regulate this area? We need to know, as clinicians, what goes into making an artificial intelligence or machine learning system that we might use in clinical practice.
00:14:56
Speaker
We need to recognise that in using it, like using any tool, we have to understand how it works. And I think that's one of the things where, while there has been significant uptake of AI scribes,
00:15:10
Speaker
most of us couldn't tell you how they work. A lot of us couldn't tell you where the data is stored, for how long, who has access to it, all those kinds of things. Now, as people who have dictated letters before, we've also done that: we've uploaded data and information about patients, and it's gone outside our rooms and gone elsewhere.
00:15:31
Speaker
So long as we understand what those risks are, and we can reasonably know that, in good clinical practice, we are safeguarding our patients' data, just like we have safeguards around who can get into our practice, log onto our system or remote into our system.
00:15:52
Speaker
Those are important. What we have to take from there, though, is that we are not computer scientists. And so anyone who uses an AI tool also needs to remember that if it doesn't do what it says it's going to do on the box, then the organisation that produced that AI, that built it, generated it, got it approved in Australia through our regulatory bodies, should also have skin in the game when it comes to failings of that AI. It can't just be that a developer writes something, sets it free in the wild, and then
00:16:37
Speaker
our patients, our community, are the ones who could suffer when it turns out it's not as good as it should be, or it doesn't work in the way that it was intended.
00:16:47
Speaker
And certainly not in the way we were told it

Understanding AI: Privacy and Advisory Needs

00:16:50
Speaker
would work. So I think that's critical. The last part of this is that technology adoption has to go both ways. We can want to do things, but I have lots of conversations with the people I see in my rooms about what they're comfortable with and what they see as beneficial to their healthcare when it comes to AI.
00:17:20
Speaker
If you've had someone who has gone through multiple medications trying to solve a tricky clinical issue, and you can talk to them about a future state where we could run all these things through an AI and get some help, some expert advice, over and above the expertise of me, their GP, their psychiatrist and everyone else, fantastic. Especially if we're starting to combine some of that with the help of precision medicine.
00:17:51
Speaker
But everyone's got to be informed, aware and cognisant that there are benefits and that there are risks. And in doing that, everyone signs on for the journey that is using AI in healthcare.
00:18:08
Speaker
So Michael, one way to increase the confidence of the public in the use of an AI tool by their doctor is to know that the tool has been regulated and approved for the purpose it's being used for, which is tricky with large language models but simpler with specific tools, for instance.
00:18:29
Speaker
What's the AMA's view on the AI scribes that are now in routine use? Are they medical devices, and should they be regulated as such? We've worked really hard with the TGA from their earliest thinking about the software-as-a-medical-device space for AI scribes: how you regulate the space, and where something goes from transcription and summarisation to providing potential diagnoses,
00:19:00
Speaker
going from a summary function to a reasoning function that provides advice. And that's where the line is. I think that's a really important line too, because software providers are generally trying to push the envelope.
00:19:18
Speaker
They are debuting new features. They are trying to show, in a competitive market, that their tool is better than others. And when it goes from just the veracity of consultation notes to adding more and more advice for the clinician, that's where the big red line is about what needs to be regulated and what can sit within the more general-use category.
00:19:54
Speaker
So to summarise: the current scribes that are effectively doing what your dictaphone used to do, or in fact now listening in to a consult and transcribing it, are probably not medical devices. But the moment one starts to suggest some sort of diagnosis or treatment, that's different, and the AMA would be suggesting that it be properly regulated.
00:20:18
Speaker
Just before we finish, can we touch on privacy? Doctors are already thinking about that with the AI scribes, with the AI listening in to a patient consultation in a way that we would never have done before.
00:20:33
Speaker
But of course, there are vastly more privacy issues. Do you have a synopsis on where the AMA thinks we need to go on privacy? Because obviously too much privacy means you're limiting the ability of an AI to actually
00:20:50
Speaker
have enough data to do its job. But on the flip side, do we want companies to have access to all patients' medical information that comes out of Medical Director or out of a public hospital medical record system? Where do you draw the line on privacy, even if information is de-identified?
00:21:07
Speaker
We start with the use of AI being patient-centred, and when we can see benefits to the individual and to the wider community, that's kind of a guiding light for where we go.
00:21:26
Speaker
A recognition within that is that we have a responsibility in upholding patients' rights to confidentiality.
00:21:37
Speaker
And so that means us knowing a little bit about what we're doing with these tools, but also being able to recognise that there are points in time, just like with any other system we might use, where we should turn them off.

Future of AI in Healthcare: Competition and Regulation

00:21:53
Speaker
Patients really need to understand what the privacy implications are, and that is something we have to do: we're responsible for that as their clinicians, and taking that forward, I think, will be important.
00:22:13
Speaker
And then, at an overarching governance level, there has to be an overarching advisory body that allows ethics, technology and clinical practice to come together, so we can recognise what is beneficent, but also what doesn't have significant potential to cause harm and loss of privacy in our community. I still think we're in the wild west with where AI is at the moment, and I don't think anybody knows how this is all going to play out.
00:22:50
Speaker
And look, from a medical point of view, I think we can talk about how we would like it to play out. But the thing about AI in health is that it could actually precede us. What I mean by that is, if devices or products become basically competitors to our health service: would you like to see the AI doctor or would you like to see the real doctor?
00:23:17
Speaker
My question with that, and it's a point you've made previously, is about the need for some sort of AI health advisory body to try and bring it back to basics and to put some overarching sense, control and regulation into all of this. What would its role be, and what would it look like, do you think, Michael?
00:23:44
Speaker
We've only got so much human resource, I think, to supervise this space in Australia. There's a really good group called the Alliance for AI and Healthcare that has many of the learned colleagues, but also the people doing the deployment. They're from the tech companies, they're from the space; they're the ones doing some of the invention, while all the rest of us are being the users and contributing to the clinical space.
00:24:15
Speaker
An overarching body needs to tap into that kind of mixing environment, where you bring together clinicians, ethicists, regulators, policymakers and tech companies to work out where things go, but also to have a really good grounding in the computer science behind this and in the clinical need for something. Because again,
00:24:49
Speaker
we are being sold products, predominantly. And while those products can help, we also recognise at times that our motivations for using them or needing them are not always the same as the motivations some tech companies will have for creating them. The safeguards in place have to ensure that AI never replaces clinical judgement,
00:25:18
Speaker
and that final decisions always rest with the practitioner, because there is often so much to integrate into a decision that is not explicitly described in a consultation. I would say to you: do we, as doctors, really need to take this seriously? Because I think the thing is that I can see the people determining this are not necessarily going to be us. It's going to be

Opportunities and Challenges of AI in Healthcare

00:25:44
Speaker
the community.
00:25:44
Speaker
I think the question I have, and I'll ask you straight out, is: do you think the next big scope-of-practice fight is going to be against AI? There's some really good evidence and research out there that says eight in ten healthcare professionals are involved in developing new technologies, and a lot of that at the moment is in the AI and decision-support space.
00:26:08
Speaker
However, only three in ten of those same clinicians believe those solutions are being designed with their needs in mind. What we have to recognise, then, is that if we believe in the benefit and potential of AI, we also have to be as closely engaged as we possibly can with its development, so that the end product does meet the needs of us as a conduit to the community.
00:26:46
Speaker
85% of healthcare professionals believe in AI's benefits, but only 43% of patients agree. So yes, there is going to be a fight out there between those who want this environment to evolve and the many across our community who want to ensure that the primacy of human decision-making remains, and that we aren't instilling further gaps in our society between the have-AIs and the have-nots.
00:27:22
Speaker
I think one thing we can definitely say from today's conversation is that this is a pretty complex area, and we're only at the very start of what is going to be a long journey. Unfortunately, regulation and governments will always lag behind technological innovation, and I think it's very important that we don't end up in a situation like we have with social media, where the commercial interests of huge multinational conglomerates take priority over what's in the interest of our community.
00:27:50
Speaker
And of course, in healthcare the stakes are very high. But there is also incredible opportunity for tools that are going to change the way we deliver healthcare, and there's a lot of excitement out there as well.
00:28:02
Speaker
Thank you so much, Michael, for your time and for taking us through this complex area. Thank you to my co-host, Chris, and we'll see you again for another episode of The Waiting Room very soon. Thanks, Omar. Thanks, Chris.
00:28:13
Speaker
Thanks for listening to The Waiting Room.
00:28:17
Speaker
Learn more about the AMA, including how to become a member, at ama.com.au.