
#19 Joshua Hatherley: When Your Doctor Uses AI—Should They Tell You?

AITEC Podcast

In this episode, we speak with Dr. Joshua Hatherley, a bioethicist at the University of Copenhagen, about his recent article, “Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?”

Dr. Hatherley challenges what has become a widely accepted view in bioethics: that patients must always be informed when clinicians use medical AI systems in diagnosis or treatment planning. We explore his critiques of four central arguments for the “disclosure thesis”—including risk, rights, materiality, and autonomy—and discuss why, in some cases, mandatory disclosure might do more harm than good.

For more info, visit ethicscircle.org.

Transcript

Ethical Implications of AI in Medicine

00:00:16
Speaker
Welcome back to the AITEC Podcast. Today we're joined by Joshua Hatherley, a bioethicist and philosopher of technology. This episode explores the following question: if a medical clinician uses machine learning or AI to help develop, say, a treatment plan, are they morally obligated to disclose that they used AI to their patients? Do they have to tell their patients that they used AI?
00:00:42
Speaker
Josh argues that they do not have such a moral obligation. So that's the setup, and I hope you enjoy the episode. Could you tell us about this idea of what you call the disclosure thesis? This is not your own thesis, but something you engage with. So can you tell us what the disclosure thesis is?
00:01:02
Speaker
Yeah, sure. Very basically, it's the thesis that clinicians have an ethical obligation to disclose when their judgments are influenced by the outputs of a medical machine learning system, an AI system.
00:01:19
Speaker
Or it can be stated inversely: clinicians do something ethically objectionable when they use AI to inform their clinical judgments without disclosing that to their patients.
00:01:37
Speaker
It's one thing to say disclosure is morally required, but it's different to say it's maximally ethical, right? Someone could think that disclosing you're using an ML system goes beyond duty, that it's good because it shows the highest respect for the patient's

Minimum Standards vs Ideal Disclosure

00:01:57
Speaker
autonomy. So you might think it's the best thing to do, that the most ethical doctor would disclose. You might think that, but that's different from thinking
00:02:07
Speaker
it's morally required, that you're at moral fault if you fail to disclose. And it's also different from thinking about prudence. Anyway, I just wanted to flag that the disclosure thesis is an ethical claim, which obviously you said, but I wanted to flag that for the...
00:02:26
Speaker
Yeah, that's exactly right. The aim of the paper is to establish minimum standards rather than to lay out the reasons why it would be good to disclose this information, and of course there are many reasons to think it would be good.
00:02:43
Speaker
But the question is: are clinicians doing something wrong? Are they failing in their ethical duties to patients when they fail to disclose that they've used a medical machine learning system to inform their judgments?
00:02:55
Speaker
Good. And so you disagree with the disclosure thesis, and we'll get into that in a moment. But Roberto, you were about to ask about the consensus view.
00:03:07
Speaker
Yeah. So my question is that you're saying this has become the consensus view, and I'm wondering, we're going to talk about your article in a second, but did you have any difficulty conveying this? Did you have a hard time getting publishers to understand or listen to this particular view? Because sometimes when a view becomes orthodox, it's a little challenging to challenge it.
00:03:38
Speaker
I wouldn't say I had any particular struggles. I don't think it's controversial enough, or maybe it is, but I suspect it wouldn't have gotten published in a medical journal, or I would have had more difficulty publishing it there, simply because medical journals have more impact on clinical practice than bioethics journals.
00:03:59
Speaker
So the stakes are higher, essentially. But with bioethics you can explore ethical claims, and even if they're wrong, they're worth exploring from a philosophical point of view. So the bioethics journals that I targeted for this particular article were relatively receptive to it; I didn't have any particular struggles. So, broadly speaking, why do you think the disclosure thesis has become
00:04:30
Speaker
the consensus view? Yeah, I've thought about that a little bit. I think partially it's because AI systems have generated all this hype in medicine and healthcare, so people are really worried about the implications, and it's often conceived of as this really revolutionary, transformative technology.
00:04:54
Speaker
A lot of emphasis has been placed on the risks, particularly with respect to AI outside of medicine. We hear a lot about biases and responsibility gaps and all kinds of challenges. To a large degree, these risks also apply in healthcare, but I think the implications are different in the healthcare setting, as I'm sure we'll talk about later.

Consensus on AI Disclosure

00:05:22
Speaker
But I think the reason this has become the consensus view is largely that people are really worried about what impact the technology is going to have on medicine.
00:05:34
Speaker
And so this strikes me as a kind of reaction to that worry. Okay, that's fair. And then there are four arguments you identify as being presented, or that could be presented, in favor of the disclosure thesis, the idea that there's this ethical obligation to disclose the use of ML. Do you want to introduce those four in broad strokes?

Arguments for Disclosure Thesis

00:06:01
Speaker
Yeah, for sure. So the risk-based argument is essentially the claim that medical AI systems pose certain risks to patients that are substantial or likely enough to warrant disclosure.
00:06:19
Speaker
The rights-based argument's central claim is that by using medical AI systems to inform their judgments without informing their patients, clinicians infringe on one or another of their patients' rights.
00:06:37
Speaker
The materiality-based argument essentially claims that using medical AI systems to inform one's clinical or professional judgments constitutes what is called material information, which is a legal term we can talk about a little later.
00:06:58
Speaker
And then the autonomy-based argument suggests that using medical AI systems to inform one's clinical judgments, again without informing patients, risks interfering with the patient's autonomy, their self-determination.
00:07:14
Speaker
So those are the very brief outlines of those four kinds of arguments. I'm going to try to summarize what I feel your responses to these arguments all have in common; you tell me how off I am. Because as you go through these arguments, you read them first and intuitively it's like, oh yeah, that sounds pretty good. And what you do throughout your article is separate the premises from the conclusion. You're basically saying, yes, all of these things are valid concerns, but
00:07:50
Speaker
disclosing this to patients isn't the solution, right? You don't get from this datum to that particular conclusion. So maybe that's my nutshell version of your argument.
00:08:02
Speaker
But did I get that broadly right? How would you improve on that? Which I know you can't. I mean, I definitely think that's one way you could frame it. Essentially, the conclusion doesn't necessarily follow from the premises.
00:08:16
Speaker
But I also think one of the main things that unifies the arguments I address throughout the paper is that they all exhibit some form of AI exceptionalism.
00:08:28
Speaker
We're treating AI systems, medical machine learning systems, as this radically new technology that requires us to throw out the principles and practices we've used in the past and change things radically to adapt to it.
00:08:47
Speaker
And to some extent I think we do need to change and adapt to medical AI systems to manage their ethical risks and concerns better.
00:08:59
Speaker
But I'm not sure that this is the right solution.

Risks of Mandatory AI Disclosure

00:09:03
Speaker
And they're not exceptional enough, in the right way, for something like the disclosure thesis to be appropriate.
00:09:12
Speaker
Right. And you actually go a step further, right? You say mandating disclosure could potentially harm patients. Yeah, I do make this claim. Part of the challenge here is that one of the claims I make in the article is that many of the arguments that are made, or could be made, to support the disclosure thesis essentially rely on
00:09:46
Speaker
assuming inappropriate uses of AI systems, or the implementation of AI systems before pretty critical risks have been addressed. And so the worry here is that mandating disclosure because of these avoidable risks that medical AI systems pose for patients
00:10:06
Speaker
could essentially be used to shift responsibility for any error or harm onto patients rather than the clinicians who use these systems or the developers who create them. It's essentially saying, well, we disclosed this information to you, you understood it, you accepted it, and so we can't be held responsible for the consequences.
00:10:30
Speaker
It's kind of like when a private firm has some safety training that, you know, kind of sucks, but either way it covers their butts, right? So if someone slips and falls: hey, you went through the training, you should have known, this is your fault. Did I get that right?
00:10:47
Speaker
Yeah, I think that's the idea. It's like a terms and conditions thing: no one reads the terms and conditions, you flick through and say yes, and then they say, well, you agreed to the terms and conditions, so we can't be held responsible for this.
00:10:58
Speaker
There's a worry that it's a way to evade accountability. Not that it necessarily is, but the worry is that it could be used that way.

AI Robustness: Comparing with Human Clinicians

00:11:09
Speaker
Okay, awesome. So that's great. I feel like we've gotten a high-level appreciation for
00:11:16
Speaker
what you're arguing in broad strokes. So maybe we can go more into the details. I was thinking we could go through each of the four arguments for the disclosure thesis and think about what your objections are to each. We could start with the risk-based argument.
00:11:39
Speaker
So you basically present a few ways in which someone might mount a risk-based argument, and the first one has to do with adversarial attacks.
00:11:52
Speaker
The idea is something like this: adversarial attacks are, basically, deliberate manipulations of the input data to an ML system such that the system makes errors. So adversarial attacks pose a severe risk to patient safety.
00:12:15
Speaker
And therefore, due to that risk, clinicians must disclose when an ML system has been used. So correct me if I'm wrong, is that the right way of describing the issue of adversarial attacks? And how would you respond? What's wrong in that line of reasoning?
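To make "deliberately manipulating the input data" concrete, here is a minimal sketch of one standard construction from the adversarial-examples literature, the fast gradient sign method (FGSM). It is not taken from Hatherley's article; the `model`, `image`, `true_label`, and `epsilon` names are illustrative assumptions for a hypothetical trained image classifier.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative only).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of a batched input `image`.

    The perturbation is tiny (bounded by `epsilon`) but points in the
    direction that maximally increases the model's loss, which is what
    can flip the predicted class without visibly changing the input.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation of this size is typically imperceptible to a human reader of the image, which is why the risk-based argument treats this vulnerability as a patient-safety concern.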
00:12:35
Speaker
Yeah. I mean, this is an actual argument that's been made by certain authors in the philosophical and bioethical literature. And I think to some extent part of the argument is true: machine learning systems are highly vulnerable to adversarial attacks.
00:12:55
Speaker
They can lead to misclassifications and force a system to make errors. The problem with the argument, I think, is that the motivation for cyber attackers to engage in adversarial attacks against medical machine learning systems just isn't there.
00:13:09
Speaker
I mean, the primary motivation for cyber attacks is financial, generally speaking, and particularly in medicine: how do we make money out of this thing? And by messing with the performance of a medical AI system, it's unclear how you would actually extract any cash from that kind of approach.
00:13:31
Speaker
The primary concern people have with the vulnerability of medical AI systems to adversarial attacks is not that they will interfere with the performance of the system and affect patient care, but rather that they'll be used to perpetrate medical insurance fraud.
00:13:50
Speaker
This is the main worry people have; it's not so much a threat, it seems, to patient health and safety. The second issue is that while medical adversarial examples are relatively easy to generate, surprisingly so, they are also very easy to detect. One study used a very simple detection classifier and was able to detect these kinds of attacks with something like 90 to 95 percent accuracy or above.
00:14:24
Speaker
So the claim I'm making is: okay, they pose some sort of risk, but the motivation to perpetrate the attacks isn't there, and they're easily detected. So if we're saying we should disclose the use of these systems because of the risk of adversarial attacks, this seems to set the threshold of risk for disclosure way too low.
00:14:49
Speaker
If we were to do that, then we'd have to disclose an enormous number of risks to patients, because those risks would pose a similar threat to their health and safety.
00:15:00
Speaker
But doing that in reality is unlikely to be beneficial, because patients won't really care and it's likely to cloud things and result in information overload, essentially interfering with the information that should be disclosed and that patients need to understand in order to make an informed decision about their care.
00:15:22
Speaker
Okay, that's great. So real quick: they're more likely to be used for insurance fraud, these adversarial attacks are actually pretty easy to detect, and they're less dangerous than other cyber threats like ransomware.
00:15:39
Speaker
Okay. So that's the first line the risk-based argument could take. But another one has to do with robustness. Someone might say machine learning systems face generalization problems; in other words, they struggle to perform well outside their training data.
00:16:02
Speaker
And they also have the problem of distributional shift: their accuracy declines when the data changes over time. And that means they can produce unsafe errors.
00:16:15
Speaker
And because they can produce unsafe errors, disclosure, again, is ethically required. That's the argument. So did I capture it? And how would you respond to that robustness worry?
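As a concrete aside, distributional shift is something deployment teams can monitor in practice. Here is a minimal sketch of one common approach, a two-sample statistical test comparing a feature's distribution at training time and in the clinic; the feature, numbers, and threshold are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of monitoring distributional shift (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

def shift_detected(train_values, live_values, alpha=0.01):
    """Flag a statistically significant difference between the distribution
    a model was trained on and the distribution it now sees in deployment."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, statistic

# Example with simulated data: e.g. patient ages at training time vs. an
# older population once the system is deployed at a different hospital.
rng = np.random.default_rng(0)
train = rng.normal(loc=60.0, scale=10.0, size=5000)
live = rng.normal(loc=68.0, scale=12.0, size=500)
print(shift_detected(train, live))  # flags the shift for this example
```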
00:16:31
Speaker
Yeah. I think the problem here is that the kind of argument people make to support this claim is: okay, AI systems are opaque and they lack robustness, and so we need to worry about whether they are used appropriately.
00:16:51
Speaker
And I think this is a risk to some extent, but I also think it's exaggerated. We have a variety of strategies that can be used to address these kinds of robustness challenges in medical machine learning systems, technical methods that can improve the robustness of these systems within particular narrow domains.
00:17:10
Speaker
We have explainability methods that can assist clinicians in improving the accuracy of their AI-informed judgments, essentially helping them determine when to rely on the system and when something may be going wrong, so they should lower how heavily they weight the system's outputs in their own clinical reasoning.

Bias in AI vs Human Clinicians

00:17:37
Speaker
So my sense is: okay, this is definitely a risk, but again it sets the threshold of risk for disclosure too low, because humans are also not particularly robust; they have challenges with diagnostic reasoning and diagnostic error.
00:17:55
Speaker
Yet disclosing to patients that clinicians lack robustness in these particular areas has not historically been mandated, required, or deemed ethically necessary. So there's a mismatch here between how we treat human beings and how we treat machines.
00:18:16
Speaker
And my sense is that we should be treating like cases alike. So we should treat AI systems the same way we treat human beings, unless there is something particular that warrants differential treatment, and it doesn't seem to me that that is the case here.
00:18:33
Speaker
I feel like there are a couple of things we can talk about there. Before we dive deep into those issues, and I think Sam has a follow-up too, I just want to clarify the two points that you made just now.
00:18:45
Speaker
And maybe you can even give us examples, because I think some people might know that sometimes you train an AI model for something and you think it works, but it's not doing the thing it's supposed to be doing. The famous example is that the army, or some researchers, trained a
00:19:07
Speaker
model to recognize camouflaged tanks. But all it really learned to do was distinguish between sunny days and cloudy days, because all the pictures of the camouflaged tanks were taken on cloudy days.
00:19:21
Speaker
And so people are freaking out about these things, right? And you mentioned there are methods for making sure that sort of problem isn't happening within the medical field in particular. So can you tell us about that? What sort of procedures make sure that you don't get a false positive for, say, lung cancer?
00:19:43
Speaker
Yeah, as far as I recall, the techniques don't eliminate that possibility, but they reduce it. So it's like, okay, we still have this risk, but we can reduce its likelihood. That's the kind of claim I'm making here. But I can't remember the specific techniques, though I remember mentioning them in the paper.
00:20:04
Speaker
So I'm sorry about that. No, that's okay. But it is basically something that's currently being addressed, right? So that's important. The second thing, though, is that you're saying humans are similarly opaque to ML models.
00:20:22
Speaker
And my first thought earlier was that basically we're very unreliable when we're recounting our own symptoms and all that.
00:20:33
Speaker
I'll give you a non-medical example. I had a plumber come in because there was a funny noise coming out of my wall. In retrospect I felt like such an idiot, because I'm saying, yeah, sometimes this window right here taps a little bit, and sometimes when I turn on this particular faucet and look at that wall... I don't know, I just said a bunch of random things, and as he was listening he was like, this has nothing to do
00:20:59
Speaker
with your problem. But as a patient, I feel like that's what I'm doing. Sometimes it's, I don't know, my knee, and I'm saying things that aren't really relevant. So I guess what I'm saying is that humans also are not robust. Is that why you're saying we should treat ML models like humans, because we're not robust in that way either?
00:21:22
Speaker
Yeah, the worry is holding machines to a higher standard than ourselves when it's unjustified to do so. And I think this is a case where we're doing that, because diagnostic error is a massive problem and causes a lot of harm to patients.
00:21:39
Speaker
So if we're saying, oh, we need to disclose that medical AI systems are not robust in particular scenarios and carry these risks, well, then we need to be doing that for clinicians in any kind of clinical consultation: I'm not particularly robust; clinicians in general aren't particularly robust.
00:21:58
Speaker
But then the worry is that if we do that for clinicians, we're radically undermining trust in medicine as an institution. So I think in this case the justification for treating medical AI systems differently from clinicians just isn't

AI and Human Ethical Standards

00:22:16
Speaker
there. It isn't strong enough to support that kind of thesis.
00:22:22
Speaker
Great. So moving on, another approach someone might take in terms of risk-based arguments is algorithmic bias. Someone might think ML systems amplify unfair treatment across race or gender or other categories; in other words, they're subject to algorithmic bias.
00:22:46
Speaker
And that algorithmic bias creates significant risks for patients, and therefore, again, clinicians must disclose when these systems are used. So how would you respond to that line of reasoning?
00:23:02
Speaker
I think there are two things. The first is that we have a similar kind of pushback as with the previous claim, which is that humans are also biased, yet we don't require clinicians to disclose their biases to patients, and we never have.
00:23:14
Speaker
But I think there's an additional point that also applies to the argument I mentioned previously... oh, I just had it.
00:23:26
Speaker
Well, does it maybe have to do with, I mean, I know you mentioned that risks from ML can be reduced with algorithmic auditing, which is systematic checking for bias.
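For concreteness, here is a minimal sketch of one basic form an algorithmic audit can take: comparing a model's error rates across patient groups. The group labels, toy data, and the `max_gap` threshold are illustrative assumptions, not anything from the article or a specific auditing standard.

```python
# Minimal sketch of a group-wise error-rate audit (illustrative only).
from collections import defaultdict

def audit_by_group(records, max_gap=0.05):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns per-group error rates, the largest gap between groups, and
    whether that gap exceeds the chosen tolerance."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    rates = {g: errors[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap

# Toy example: the audit flags a large error-rate gap between two groups.
data = [("group_a", 1, 1)] * 95 + [("group_a", 1, 0)] * 5 \
     + [("group_b", 1, 1)] * 80 + [("group_b", 1, 0)] * 20
print(audit_by_group(data))  # group_b's error rate is 4x group_a's
```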
00:23:38
Speaker
Yeah, that's also a point about the strategies for addressing these systems. And actually, that is similar to what I was going to say. The problem here seems to be not that the use of the system hasn't been disclosed to patients, but rather that the system hasn't been sufficiently de-biased in the first place. We shouldn't be implementing systems that are heavily biased in medical practice; that seems like a serious patient
00:24:04
Speaker
safety risk. And insofar as we do that, well, that's the problem. That's the reason we should object to it, not because patients haven't been informed that these systems may be heavily biased.
00:24:15
Speaker
So I think it just targets the wrong aspect of the use of the system, in my view. Yeah, that seems important to me. Sometimes it seems like when people argue for the disclosure thesis,
00:24:31
Speaker
their premises, if they were true, would actually justify something stronger: what actually follows is that you can't use an ML system at all.
00:24:41
Speaker
Exactly. Not that you have to disclose your usage of it. Right. So in certain cases it's like, well, if they're really subject to that level of bias, what that would justify, or rather entail, is that we should not use them at all.

Significant AI Risks: Disclosure or Prohibition?

00:25:02
Speaker
It wouldn't entail that we have to tell people when we use them, because we shouldn't be using them in the first place. I guess I did have one question. Are there any good quantitative comparisons between the bias of humans and the bias of AI models?
00:25:21
Speaker
Yeah, that's a really interesting question. I would like to see research that investigates that, but I haven't seen something that puts algorithmic and human biases on the same playing field and evaluates them against each other.
00:25:38
Speaker
As you were speaking, it struck me that, well, I'm more familiar with AI in education. There's evidence, of course, that human graders have bias, and there are also noisy judgments. If you change the order of the essays you're grading, say a good one, then a bad one, then a good one, it changes the grades that human graders overall give.
00:26:05
Speaker
And that kind of noise would be eliminated by AI models. But I guess there's a real need for a head-on comparison here. There are all kinds of tricks human graders can also use to limit their bias, like anonymous grading, or breaking essays up into parts, all these different things. So I think there's a real need for that kind of comparison.
00:26:38
Speaker
No, I agree with you. But would AI systems eliminate that kind of noise? Because from my understanding, the order in which you give training data to a system impacts the final state of the system in different ways.
00:26:59
Speaker
So if you have two systems that are identically initialized and you give them the same training data set but ordered differently, you'll get a different output. So you'll still have that kind of noise. Am I right in thinking that?
00:27:13
Speaker
That's a great question, and we're going in a different direction here. But yes, it seems like whenever you have generative AI grade things like that and you change the order, it does sometimes change the score.
00:27:25
Speaker
But if you use a purely mechanical algorithm that isn't based on generative AI, the noise does go away. So they're trying to fine-tune grading models now, and we'll see how that goes.
00:27:40
Speaker
Yeah, nice. Interesting, thanks. Sorry to derail. Yeah, and you know what? That's usually me, so I'm glad for once it was someone besides me. Oh, you're welcome.
00:27:52
Speaker
No, that's great. Okay, cool. So we've gone through the risk-based arguments for the disclosure thesis. Now we can look at the rights-based arguments. I guess the basic idea is that
00:28:06
Speaker
patients have a right to refuse medical machine learning in their care. And there are two ways you can think about this. One would be a weak version, where
00:28:20
Speaker
a patient has the right to refuse care when it's delivered entirely by AI or ML, where the only thing delivering it is a machine. But this is not really relevant yet, because ML in our time is used more as a decision aid rather than fully autonomously. Anyway.
00:28:44
Speaker
So that's one right to refuse. Another, stronger version of the right to refuse is that if the clinician uses AI at all, the person has a right to refuse the care.
00:28:57
Speaker
And this right is based on the idea that you have a right to act on rational concerns about the future.
00:29:10
Speaker
And so it might be rational for a patient to be worried that AI is going to replace doctors, and on the basis of that worry, you might think they have a right to refuse a given instance of care if it involves ML.
00:29:27
Speaker
At any rate, the idea is: therefore, clinicians need to disclose when they use ML, because if they don't, how are patients going to exercise their right to opt out or refuse the care?

Patient Rights to Refuse AI-influenced Care

00:29:52
Speaker
So anyway, that's my attempt to describe the rights-based argument. What's your response to that? How do you respond to that type of argument?
00:30:04
Speaker
Yeah, you're absolutely right. The weak version of the right to refuse doesn't really apply to this kind of situation, because AI systems don't do all of a patient's diagnostics and treatment planning. So we have to think about the strong version, which says that patients have a right to refuse any and all use of AI systems.
00:30:30
Speaker
And the basic argument is that if patients have this right to act on rational concerns about the future, then they have a strong right to refuse diagnostics and treatment planning by medical AI systems. So it relies on this right to act on rational concerns about the future.
00:30:47
Speaker
Again, this is an actual argument that's been made by people; it isn't one I've constructed for the sake of the article. And the problem I see with it is that the right to act on rational concerns about the future is simply too broad and too vague.
00:31:03
Speaker
So, for example, firstly, what is the scope of actions that patients may take with respect to these rational concerns? Should it just be limited to voting?
00:31:16
Speaker
Should it be some kind of democratic process? Should it include things like protest, or, taking it to its extreme, should it even include things like political violence?
00:31:28
Speaker
The actual scope of a right to act on rational concerns about the future is just underspecified. But I think more importantly, patients historically have not had a right to act on rational concerns about the future, or even the present implications, of new technologies, new healthcare systems, new healthcare initiatives.
00:31:54
Speaker
So we can think of things like managed care in the US, which patients, despite the many rational concerns they may have, for example
00:32:06
Speaker
austerity policies restricting the range of legitimate treatments covered by insurance, have no right to opt out of. And similar concerns could also be raised about the computer in the clinical consultation room. A variety of concerns have been raised about the impact on the quality of patient care and on doctor-patient relationships.
00:32:26
Speaker
And these are rational concerns that are supported in the philosophical and empirical literature: computers do have some kind of negative effect on how the clinical consultation proceeds between doctors and patients.
00:32:41
Speaker
Yet patients don't really have a right to opt out of having computers used in the examination room, or at least not a meaningful right to opt out where they are given the opportunity to refuse.
00:32:54
Speaker
It's just part of clinical practice now. And the idea here is not that we shouldn't be concerned about these things, or that there isn't necessarily a problem here, but that if we're going to make this kind of argument, it would require a radical revision to how we've traditionally understood what patients' rights are within the context of medical care.
00:33:17
Speaker
And it seems to me that that work simply isn't done by the proponents of this argument, and it's not clear to me that it could plausibly be done.
00:33:31
Speaker
Great.

Informed Consent and AI Usage

00:33:32
Speaker
Yeah. So it seems like, on the one hand, the rationale is too broad. If patients are allowed to refuse any practice they have a rational concern about, they could refuse a practice like managed care: I can just opt out of managed care, I can opt out of computers, out of digital health in general.
00:33:58
Speaker
And normally opting out, doesn't it also imply they would be provided an alternative? In the context of treatments and treatment options, yes.
00:34:13
Speaker
In the context of aspects that lie upstream, like how a clinician makes their decision, no. Roberto, do you have any questions about that? I mean, that's all I had about the rights-based argument. Did you have any thoughts? I'm ready for materiality. All right, let's get to materiality.
00:34:33
Speaker
Okay, so as I understand it, the argument is: look, clinicians have to disclose their use of ML systems because the information that they've used an ML system is material.
00:34:47
Speaker
In other words, it's relevant, it's significant to the patient.
00:34:53
Speaker
And what's the evidence? Many patients would likely change their consent decision if they knew an ML system was being used. So if the patient would change their mind based on knowing that an ML system was used by the clinician, then it seems the information that it was used must be material.
00:35:21
Speaker
And so we have to disclose, because if we don't, then their informed consent is undermined, and informed consent is obviously something we want to protect. So anyway, that's my attempt to roughly describe it. How would you respond to that sort of idea?
00:35:46
Speaker
So if I can just add to the argument as I put it out in the paper. Yeah, please. The claim here is that
00:35:58
Speaker
it relies on recent evidence that's come out in the human-computer interaction literature about the phenomenon of algorithmic aversion. Algorithmic aversion is when human users are less likely to accept the outputs of an algorithmic system compared to the judgments of a human being, even when the algorithmic system is more accurate and following its outputs would lead to better outcomes.
00:36:28
Speaker
And so some philosophers and researchers have argued that the phenomenon of algorithmic aversion suggests a hypothetical reasonable patient would likely change their decision if they were informed that a medical AI system was used to inform a doctor's judgment.
00:36:51
Speaker
So the argument is heavily dependent on this algorithmic aversion evidence. And the problem I see with this is that while algorithmic aversion has been observed when users are asked to rely entirely on algorithmically generated outputs, there's no evidence as of yet to suggest that people respond similarly to human beings whose judgments are merely informed by algorithmically generated outputs.
00:37:21
Speaker
In addition, there's also some counter-evidence here, because users can in some cases exhibit something called algorithmic appreciation, in which we essentially see the inverse response: they value algorithmically generated outputs more than human judgments.
00:37:39
Speaker
So the argument from algorithmic aversion just seems to me to be inconclusive; the empirical evidence isn't strong enough to defend the descriptive claim. But it's also worth mentioning that when it comes to the claim that most people would change their consent decisions on the basis of certain information, which is a descriptive claim, the normative claim that we should disclose this information doesn't necessarily follow from it.
00:38:09
Speaker
There's a gap between the descriptive and the normative. And while this kind of descriptive claim is useful when considering what we should treat as
00:38:20
Speaker
material to a patient's decision, it can't by itself establish that kind of normative claim. So, to go back to Roberto's point, we just have an invalid argument on our hands.
00:38:35
Speaker
Yeah, I was thinking you can probably even find single individuals, let's call them me, who on one day will be averse to the algorithmic prediction, and on another day, if I've just read some Daniel Kahneman or something, will say, yeah, give me the machine, I don't care what the human says. So yeah,
00:38:57
Speaker
that sort of undermines that premise for sure. Yeah, exactly. That's the main problem with using these kinds of empirical results to find out what's material: people's preferences are highly changeable, highly inconsistent, and not robust over time.
00:39:12
Speaker
So we can see these kinds of changes. And does that mean that what's material changes as people's opinions change? Well, legal scholars are likely to balk at that idea, and philosophers too, for that matter.
00:39:27
Speaker
Yeah, you listen to a Pink Floyd record and suddenly you're really anti-algorithm, you know? Anyway. You want to jump into the autonomy argument?
00:39:38
Speaker
Yeah, I think we do have an objection here that might be good to treat at this point. We've brought it up a couple of times: humans are weird and unreliable, and sometimes the same happens with AI, right? Sometimes it's a wrong answer. So we shouldn't have two standards for things that are so similar.
00:40:06
Speaker
But someone might say something like: hey, ML systems are simply categorically different from clinicians in this case.
00:40:16
Speaker
Clinicians have that special sauce of, I don't know, moral personhood, or whatever you want to say. And that makes it such that they wouldn't necessarily be in the same category as an ML model, or shouldn't be, epistemically.
00:40:31
Speaker
I guess the epistemic considerations shouldn't be in the same place. So let's home in on just that part, because it seems like some people, all the arguments aside, might say:

Epistemic Standards for AI vs Humans

00:40:47
Speaker
I'm okay, I'm willing to forgive a human making a mistake, but maybe I'm not willing to forgive an AI making a mistake. So is that just inconsistent, or tell us about that?
00:40:59
Speaker
Yeah, okay, so for me, I think where the argument falls down is that it's not clear what relevant aspect makes the AI system qualitatively distinct from human beings in a way that would require us to treat it like this. If we think of
00:41:17
Speaker
human clinicians getting input from their colleagues about what the patient's diagnosis is or what the most appropriate treatment is,
00:41:31
Speaker
it's not clear why medical AI systems are relevantly different from that kind of information in a way that would mean we should treat them differently.
00:41:41
Speaker
It's also not clear to me that medical AI systems are qualitatively distinct from other kinds of technical systems used to inform clinical judgments, like clinical decision support systems that have been used for decades without being disclosed to patients. What makes medical AI systems relevantly different from these systems to warrant special treatment?
00:42:06
Speaker
To me, it's not clear. We just had Stephen Kosslyn, emeritus at Harvard, on the show. And he was talking about how the place where AI models are just not where humans are is in the way that humans integrate emotion into things, right? There's just a deeper well there:
00:42:35
Speaker
if AI is currently based on deep neural networks, we have ultra-deep neural networks or something, right? So maybe that would be the qualitative difference: humans have this superpower called feelings, and even though we don't explicitly know why we feel some things, it's been fine-tuned for however long we've been alive, plus a couple of billion years of evolution.
00:42:59
Speaker
How do you feel about that? I mean, I'm certainly not trying to say that AI systems and human beings are identical. I think that's fundamentally untrue.
00:43:11
Speaker
My response is that it's just not clear why that difference is relevant, or why acknowledging that difference means we should disclose the use of AI systems to patients in this kind of context. It's just not clear to me where the connection is. Okay, human beings have this ability to incorporate emotion into their judgments and medical AI systems don't.
00:43:39
Speaker
Well, therefore, we should disclose the use of medical AI systems? There's just a gap there for me; I can't see what connects those two claims. I could see it being relevant if medical AI systems were used to automate the entire treatment process, the clinical decision-making process, and the shared decision-making process with patients, where some kind of emotional capacity is essential and needed to engage in shared decision-making with the patient.
00:44:10
Speaker
But that's just not what medical AI systems are being used for at the moment. They are tools, tools that doctors use to assist them in making certain diagnoses or predictions or recommendations and so on.
00:44:24
Speaker
So the relevance of that difference, I think, just isn't there for me.
00:44:31
Speaker
Yeah, I really like the way your article made me feel, speaking of feelings, because there's this intuition, right? With the arguments you presented from the different authors you cited, I understand the intuition they're trying to highlight, but then you're pointing out: yeah, and that doesn't get me there. Right, and that's why I framed it that way. And I think, you know,
00:45:02
Speaker
it's almost like an open question here. It's like, okay, so what's this missing argument? What's this argument that I can't really put into words? Yeah.
00:45:12
Speaker
Yeah, well, like you said, there are certain cases where the arguments are kind of non sequiturs, where the conclusion doesn't follow. But then other times it's like the premise,
00:45:24
Speaker
if it's correct, if there really is such extreme algorithmic bias, would justify something more than disclosure, namely that you wouldn't be allowed to use the system at all. So in some cases it seems like the problem is invalid arguments, non sequiturs.
00:45:42
Speaker
But other times it seems like not only would disclosure be required, but actually you wouldn't be allowed to use it in the first place. So in that sense disclosure wouldn't be the issue, because if the system were that unreliable, you wouldn't be allowed to use it. Yeah, the way I see it is that proponents of this thesis need to walk the line between suggesting that the use of medical AI systems generates significant problems, but not problems significant enough to make it unacceptable to use them in the first place.
00:46:12
Speaker
And actually hitting that sweet spot is really challenging, whatever intuitions we might have that, oh, this feels like something we should disclose. Well, yeah,
00:46:26
Speaker
arguing for that kind of conclusion is a whole different kettle of fish. Good, yeah, that's really helpful. Awesome. So maybe we can now turn to the autonomy argument. I'm probably going to botch this, but I'm going to try to describe it. It's based on the principle that patients have the right to rule themselves in a meaningful way.
00:46:52
Speaker
In other words, they have the right to make their own informed choices without too much interference. And from there the thought is something like: well, look, ML systems are a threat because they embed a lot of values; they have built-in ethical priorities. So,
00:47:14
Speaker
you could take examples like IBM Watson, which prioritizes lifespan over quality of life. The issue is that those built-in ethical priorities might clash with the patient's own values.
00:47:33
Speaker
And so if you don't disclose it, then the patient might be led into...
00:47:43
Speaker
I'm having trouble finishing that, but somehow the thought is that this is going to undermine their autonomy, their self-rule, by having their care be guided to some degree by these ML systems. And then the other direction, besides the embedded values, that you also bring up is opacity. ML systems are black boxes.
00:48:05
Speaker
Clinicians can't explain how they reach a diagnosis. So that undermines dialogue, and you might think that dialogue is really crucial to autonomy in a medical context.

AI Values and Patient Autonomy

00:48:22
Speaker
So those two reasons, embedded values and opacity, lead some people to think that disclosure, again, is required, because that's what's going to protect patient autonomy.
00:48:38
Speaker
So what do you think of all that, Josh? Yeah, there are two kinds of claims there. The first one is about embedded values. I think the problem there is the one we've talked about many times already: medical AI systems aren't the only things that contain embedded values. Human judgments in clinical medicine are renowned for containing ethical and moral values.
00:49:05
Speaker
You can't do a lot of clinical reasoning without some kind of moral judgment slipping in. So if we're going to treat medical AI systems as things whose use we need to disclose, along with the values they contain, well, then we need to do the same thing for human judgments. But we don't do that. Again, it requires a massive revision to what we think clinicians are ethically obligated to do in medical practice.
00:49:36
Speaker
Yet people are still making the suggestion. As for the second argument, that has to do with the idea of shared decision-making: patients and clinicians engage in a dialogue in which clinicians essentially help patients understand what their preferences and values are with respect to their medical treatment, giving them different options, explaining what kinds of risks and challenges are likely to arise from those options, and that kind of thing.
00:50:11
Speaker
But because medical AI systems are opaque, it's not really clear why they generate the outputs, recommendations, or diagnoses that they do. Well, that interferes with clinicians' capacity to answer what the authors of this particular argument call why-questions, like: why did the medical AI system diagnose me with this instead of that?
00:50:37
Speaker
Why is it recommending this treatment over that treatment? Opacity seems to interfere with a clinician's capacity to engage with that. The problem I see with that is that patients still have the option to evaluate or reject the treatment suggestions generated by medical AI systems if they don't align with their values and preferences, just as with human judgments.
00:51:03
Speaker
And the threats that opacity presents to shared decision-making are somewhat exaggerated. It's like, okay, we can't answer that question, but doctors can't answer every question, every why-question that patients give them.
00:51:16
Speaker
And they can answer a lot of questions about their own judgment that is informed by the AI system. It's like: okay, I thought this was the most appropriate option because the AI system generates very accurate outputs for your particular patient cohort.
00:51:31
Speaker
It's very well tuned for this particular condition; it doesn't hit some edge case where there's ambiguity it's likely to misrepresent. And here are all the reasons why I've given this recommendation that was informed by the AI system.
00:51:50
Speaker
We don't need to know exactly what the AI system does, because we don't know a lot about what happens in medicine anyway. Medical AI systems aren't the only things that are opaque. Famously, pharmaceuticals like lithium and acetaminophen have been historically opaque: no one really knows why they work, but we know that
00:52:08
Speaker
they work. Yet we still use these kinds of medications. So the threat that people think opacity in medical AI systems presents to shared decision-making just seems to be exaggerated.
00:52:24
Speaker
Yeah, I think we don't even understand why some antidepressant medications work, right? Yeah, precisely. Great. So maybe, just thinking about time,
00:52:36
Speaker
maybe this is just a last point I'd be curious about. We've brought up the issue that, well, if we buy into this line of reasoning, that would require massive revision to current medical practice.
00:52:51
Speaker
What if someone wants to just bite the bullet there and say: look, if fairness and honesty are the goal, then yes, requiring disclosure of ML use and clinician biases would indeed totally reshape medical practice, but maybe that's the reform we need.

Conclusion: Ethical Revisions in Medical Practice

00:53:09
Speaker
So this perspective might be: yeah, massive revision isn't a problem.
00:53:13
Speaker
It's just a moral necessity; patients deserve transparency about all influences, whether human or algorithmic. So how do you respond to that kind of person? It's like, yeah,
00:53:25
Speaker
what's wrong with massive revision? Revolution, let's go for it, kind of thing. My response is that it wouldn't benefit patients. It wouldn't benefit patients to disclose all that information to them and essentially overwhelm them with information they need to consider.
00:53:41
Speaker
It's a massive issue in medical practice, figuring out how to communicate information in a way that patients understand. Understanding is a massive issue when it comes to informed consent. We can disclose all the information we want, but we don't just want disclosure; we want understanding.
00:53:59
Speaker
And in order to actually get understanding, we need to disclose the information that is most relevant and communicate it in a way that is digestible for patients. We can't do that if we just disclose everything that seems like it could possibly be relevant. That's not going to benefit patients; it's not going to benefit their understanding.
00:54:18
Speaker
And it's not going to benefit medical practice. It seems likely to result in greater liability, or risks of doctors getting sued, because patients don't understand what's going on; they're just given this lump of information.
00:54:39
Speaker
Like the practice lawyers have where they just drown someone in paperwork and say, well, we gave you all the information, we disclosed everything. Great, but I don't understand any of it and I can't do anything with it.
00:54:51
Speaker
So how does that benefit me? That's how I see it. Awesome. Well, thanks so much for coming on, Josh. It's been a really awesome conversation, and I highly recommend the article to everyone listening. It's really clear, nicely structured, with clear argumentation and a very balanced, fair-minded tone. So thanks a lot for coming on, Josh.
00:55:12
Speaker
Thank you so much for having me. I really appreciate it. Yeah.
00:55:17
Speaker
And thanks for listening to the episode. I just wanted to quickly give the name of Josh's article so you can look it up and read it. It's called, Are Clinicians Ethically Obligated to Disclose Their Use of Medical Machine Learning Systems to Patients?
00:55:31
Speaker
And it was published in 2025 in the Journal of Medical Ethics. So you can look up the Journal of Medical Ethics and find his article. Thanks.