
#4 Giovanni Rubeis: Ethics of Medical AI

AI and Technology Ethics Podcast

Giovanni Rubeis is a professor and head of the Department of Biomedical Ethics and Healthcare Ethics at the Karl Landsteiner Private University in Vienna. He also has worked as an ethics consultant for various biotech companies. And he is the author of Ethics of Medical AI.

Some of the topics we discuss are the history of AI in healthcare, past failures of medical AI (such as IBM’s Watson Health), the prospect of having digital twins to enable better healthcare strategies, and what we lose when we think only in terms of measurable data—among many other topics. We hope you enjoy the conversation as much as we did.

Transcript

Introduction to Giovanni and AI in Healthcare

00:00:16
Speaker
Hello and welcome to the AI and Technology Ethics Podcast. This is Roberto, and today Sam and I are interviewing Giovanni Rubeis. Giovanni Rubeis is a professor and head of the Department of Biomedical Ethics and Healthcare Ethics at the Karl Landsteiner Private University in Vienna, Austria. He also has worked as an ethics consultant for various biotech companies, and he is the author of Ethics of Medical AI.
00:00:44
Speaker
Some of the topics that we discuss are the history of AI in healthcare, past failures of medical AI, such as IBM's Watson Health, the prospect of having digital twins to enable better healthcare strategies, and what we lose when we think only in terms of measurable data, among, of course, many other topics. We hope you enjoy the conversation as much as we did.

History and Limitations of Early Medical AI

00:01:23
Speaker
Alright, so Giovanni, your book is about the ethical problems that arise from applying AI to the field of medicine and healthcare. So to start off, could you kind of just give us some sense of the history of AI in medicine or medical AI, as well as the different ways in which AI is today being applied to the field of medicine?
00:01:43
Speaker
Okay, so as you might be aware, as you might know, AI is not really a new technology. Actually, it dates back to the 1950s.
00:01:59
Speaker
The first applications in medicine have been developed in the 60s, so there is this very famous ELIZA, which was a program developed in 1964 by Joseph Weizenbaum at MIT.
00:02:19
Speaker
This was basically a program that was used for psychology, psychiatry, psychotherapy. Today, we would probably call it a diagnostic
00:02:37
Speaker
program, so it would help people to conduct an interview with patients. So maybe, if you will, the very, very early version of a chatbot that could
00:02:52
Speaker
ask very simple questions. But there was a problem with these early MAI, as they are called, so, medical artificial intelligence; I will talk about the very concept, or the very term, artificial intelligence later.

Transition to Machine Learning in AI

00:03:09
Speaker
But these very early MAI systems had some serious flaws.
00:03:16
Speaker
They were so-called rule-based systems, which means that
00:03:25
Speaker
For a program like ELIZA that could ask you questions and then react to them, you had to program each step individually. There was no natural language processing, of course, so you had a standardized set of questions, but also a standardized set of answers.
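To make the rule-based point concrete, here is a minimal illustrative sketch in Python. It is not Weizenbaum's actual code; every pattern and canned response below is invented for the example. The point is that nothing is learned: the program can only match inputs against rules someone typed in by hand.

```python
import re

# Every rule is hand-written: a pattern plus a canned response template.
# The program "knows" only what has been explicitly programmed in.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*) hurts", re.I), "How long has your {0} been hurting?"),
    (re.compile(r"i am (.*)", re.I), "How does being {0} affect you?"),
]
FALLBACK = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the first matching canned response, else a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel tired all the time"))  # Why do you feel tired all the time?
print(respond("My back hurts"))              # How long has your back been hurting?
```

Adding any new behavior means hand-writing another rule, which is exactly the manpower problem described next.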
00:03:49
Speaker
So you had no leeway with your answers, so to speak. And everything this thing knew was pre-programmed. So of course, as you can imagine, this was very expensive because it takes a lot of manpower. You have to do a lot of step-by-step programming. And although
00:04:12
Speaker
the results might seem amazing for this very primitive technology, it was simply not efficient, and the handling of it was an issue as well. I mean, you had no visual user interface. You had no operating system as we know it today, like Windows or iOS, whatever. So you had to use these punch cards.
00:04:42
Speaker
And this means for medical practice, of course, these things were more or less useless, because you had to have some working knowledge of informatics to be able to use these things. And as you may know, we are not talking about devices as we have them today, but we're talking about a full room of stuff that was a computer, right? And it had
00:05:06
Speaker
maybe 1% of the computing power of the little things that we all have in our pockets today. So, compared to what we have today, it was very primitive technology, but there was
00:05:23
Speaker
at least there was already at that time some sense of, okay, there could be something there. Maybe this could come in handy later when these things are maybe further developed and that there is a certain potential there.
00:05:40
Speaker
Another problem was that these early systems mainly focused on what has been called toy problems. They operated mostly under lab conditions and nobody knew how they would operate in the wild where things are usually pretty messy, especially in clinical medicine and so on.
00:06:05
Speaker
Another aspect was that they had no probabilistic reasoning, as we have today with machine learning. So they could only do very primitive computing stuff. And all these reasons taken together led to the first AI winter in the late 70s, which of course also influenced MAI.
00:06:29
Speaker
And then in the 80s, there was some progress, but we had the second AI winter in the late 80s. And then in the 90s, a new paradigm arose that changed the game dramatically. And this is machine learning. So up to then, what we were just talking about, these rule-based systems and so on, they are also called symbolic AI. That means they use symbols as representations of an object.
00:06:58
Speaker
But this propositional relation between symbol and object has to be programmed. So for each object you have a symbol, and you have to program this connection. And the advantage of machine learning is that the system more or less figures this propositional relation out by itself.
00:07:18
Speaker
So all you have to do is tell the system which variables are important and how you should cluster them together. And that's it. So you need the variables and you need some
00:07:39
Speaker
method, some idea of classification, and the rest is up to the system itself. The system does it: it connects variables with each other, it finds out patterns, it figures out patterns between these variables, and it focuses on correlation.
00:08:00
Speaker
And this is the crux of machine learning. Machine learning is all about correlation between variables: how they belong together, how we can link them and build models from them in order to predict future outcomes. That's all the magic here. So, no causality. These systems tell us nothing about causality. They only tell us something about correlation. But they do this very, very well.
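As a rough illustration of that point, here is a small invented Python sketch: the "learning" amounts to measuring statistical association between variables and an outcome. The data and variable names are fabricated; real systems are far more elaborate, but the correlation-not-causation point is the same.

```python
import numpy as np

# Fabricated toy data: rows are patients, columns are measured variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g., age, blood pressure, cholesterol
# Outcome driven (by construction) by variables 1 and 2 plus noise.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200)) > 0

# "Learning" here is just quantifying association with the outcome.
for i, name in enumerate(["age", "blood_pressure", "cholesterol"]):
    r = np.corrcoef(X[:, i], y.astype(float))[0, 1]
    print(f"{name}: correlation with outcome = {r:+.2f}")

# A model built on these correlations can predict future outcomes,
# but it says nothing about why the variables and outcome co-occur.
```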
00:08:29
Speaker
AI alone is worth nothing without data, of course. The simplest way to think about this is to say that AI is the engine and data, especially big data, is the fuel that you need to power this engine.
00:08:48
Speaker
This is, of course, a very brief overview of the history of medical AI. You have things like the robotic turn as well, and so on and so forth. But basically, the important difference is symbolic AI versus this machine learning paradigm that we have today.
00:09:09
Speaker
We definitely want to get to the robotics, because I know that robotics are going to be used for nursing, right? I think you might have mentioned mental health, I don't remember. But we'll get to those. To summarize, though: beginning in the 50s, there was a rule-based kind of approach to AI, and it petered out. And now we're moving into a probabilistic, learning-from-data kind of approach.
00:09:38
Speaker
Should we turn to the future, Sam? Well, I mean, one question I'd have is, I'm just thinking about the most famous instances of medical AI, and I think one of them that jumps to my mind is IBM's Watson Health.
00:09:57
Speaker
I don't know if you have any thoughts about that, but basically, you know, it was sort of like a spectacular failure, right?

Challenges and Failures in Applying AI

00:10:04
Speaker
Because, so basically, IBM Watson, you know, the thing that it's most famous for is its great success on Jeopardy. So it beat, you know, two champions in 2011. And then I guess IBM was kind of like, okay,
00:10:19
Speaker
you know, if you want to make the big bucks, you've got to get into medicine. And so then they tried to create Watson Health. And I was reading an article where, you know, they ended up selling all the parts of it for basically a billion dollars, and on acquisitions alone they had spent like 5 billion. So it was basically a massive loss. And I'm kind of curious, maybe this is jumping ahead a little bit, but what do you think about people who are just really skeptical
00:10:47
Speaker
of the idea of medical AI, because they look at, you know, IBM's Watson Health, and they're just like, that was a super failure. So anyway, yeah, I'm just kind of curious, do you have any thoughts about that? Actually, this is a very good example, because Watson for Oncology especially was among the most promising, or maybe was the most promising, medical AI technology out there.
00:11:15
Speaker
I remember like papers from especially that time around 2016-17 where there was an explosion of papers where people applied Watson for Oncology. They fed it with patient stories.
00:11:34
Speaker
and made retrospective diagnoses with it, and found out, like, wow, in 98% of the cases Watson for Oncology decided as the tumor board actually did, but Watson for Oncology only takes several seconds to figure this out. Why can't we use it?
00:12:01
Speaker
And the problem was that they found out exactly the issue that I was talking about before. Some of these systems work really, really well under lab conditions.
00:12:17
Speaker
But they fail when faced with the complexity of real-life cases. And this is really an issue. And with Watson for Oncology, we had another issue as well. It turned out that, of course, it was developed and produced in the US.
00:12:36
Speaker
So there were reports, especially some from Korea, where people tried to adapt Watson for Oncology for the Korean health system, and it simply didn't work because it was so focused on the American experience. It starts with simple things like terminology.
00:13:00
Speaker
So you have to adapt your whole terminology to it, because it only works with very standardized things, and so on and so forth. So a spectacular thing but, as you said, also a spectacular failure. And of course, it is one of the reasons, and I hear this very often when I talk to clinicians, to doctors, about medical AI, they say, oh, but what about Watson for Oncology? I mean, there were these high hopes, and what has become of it?
00:13:30
Speaker
So yeah, it's an important thing, but not only to talk about that, but also to analyze why did it go wrong or what did go wrong.
00:13:47
Speaker
Right, right. Yeah, there are so many lessons to draw from it. I mean, another lesson I've heard about is kind of the issue of the data, the unstructured data. I mean, you talk about this in your book a lot, but basically, you know, a lot of medical information is in the form of, like, doctors' notes, or, yeah, basically, like, you know,
00:14:11
Speaker
yeah, jargon-heavy doctors' notes that they put into the system or that are just written down. And I think I've heard, yeah, that Watson had trouble really incorporating that data.
00:14:26
Speaker
And I've heard that upwards of 80% of health data is unstructured. Exactly. And this is what medical students learn early on. They have to adapt to that. They have to build strategies for coping with

AI's Transformative Impact on Medical Practices

00:14:44
Speaker
that. And Watson simply can't do that because it relies on this very structured and very standardized data. So yeah, because it lacks context.
00:14:55
Speaker
That's really the crucial issue here. Right, and that's a big emphasis of your book as well, is the importance of contextualizing data and thinking more broadly about the social political context related to data. Good, but let's kind of shift into
00:15:14
Speaker
kind of where you think medical AI will go, basically. Yeah, just how do you think AI will transform medicine and healthcare? I mean, you have this really useful way of framing it. You talk about three key impact areas: you talk about practices and activities; you talk about social relationships, how AI can transform social relationships; and then you talk about how AI can transform
00:15:39
Speaker
the environment of medicine and healthcare. So yeah, could you give us some kind of overview sense of how you think AI might transform these three? Yeah. I mean, since I am an ethicist, there are always two levels to an analysis like that. I can tell you what I think will happen, and then I can tell you what I think should happen.
00:16:03
Speaker
The normative aspect as well. But what I think will happen is, as you already said, there will be a transformation. This is out of the question. And the transformation will happen because the way doctors do things will change drastically. And this is due to the nature of AI.
00:16:30
Speaker
Because we are not dealing with a new tool here or an improved tool. AI is something that is beyond this passive tool.
00:16:51
Speaker
really; many AI applications, at least, can be considered artificial agents. So they do things, they decide things. And the most important thing is, in an epistemological sense, they change the way that doctors encounter the patient. Why? Because up to now, doctors are the ones who have to make sense of data, right? They are presented with
00:17:20
Speaker
data, and they have to develop heuristic practices, like strategies to sort out the wheat from the chaff, so to speak. They have to find out what is relevant data and what is irrelevant data, which is of course a selection process, and then make sense of this data. And
00:17:44
Speaker
their epistemic practices will change because using AI means that you are not presented primarily with patient data, but with a model built on patient data. So a lot of this selection has already been made before you even encounter the patient.
00:18:08
Speaker
And this is something that will fundamentally change medical practice because up to now medical practice relied heavily on the individual doctor encountering the individual patient, having all these data from lab results or whatever, and contextualizing it with the overall life situation of this person. So this is really the main job of doctors to do this.
00:18:35
Speaker
And if you're in a situation where this data is already turned into a model, and so many decisions have already been made, not least the decision which data is relevant and which is not, and what are the relevant variables to look for, then this changes the heuristic practices. And this is why I use the term smart data practices
00:19:00
Speaker
for describing this, what doctors will do in the future. And the same thing is true for social relationships, because we have a new agent here, an artificial agent, that is part of this therapeutic relationship somehow.
00:19:19
Speaker
We still haven't figured out who is responsible, who is to be held responsible when something goes wrong, for example. Can you really hold an AI responsible for anything, or is it the developers of this technology, or who is responsible, and so on?
00:19:37
Speaker
So this is the second impact area. And the third impact area is what I call environments, and this means you have these technologies, or these more data- and technology-focused practices, which transform work environments for doctors, nurses, therapists. But you also have a lot of these technologies, like mobile technologies or Internet of Things technologies, in your home
00:20:04
Speaker
in order for this whole thing to work properly, and this will of course transform the home environment of patients, and of people who aren't even patients but who use these things, for example health apps, to optimize their health. So this is why I came up with these three impact areas, and this is what will change fundamentally. The second level is:
00:20:24
Speaker
how should it transform medicine?

Ethical Design and Challenges of Medical AI

00:20:30
Speaker
And this is a totally different story because a lot of people, most prominently, probably Eric Topol, who is a very famous cardiologist,
00:20:41
Speaker
But he's not really famous for his achievements in cardiology, but more because he writes very successful books on AI and medicine. And he says that we will have this deep medicine because AI will transform medicine, and of course it will transform medicine for the better.
00:21:02
Speaker
Right, so everything will be better, doctors will have more time for their patients because a lot of these very time-consuming, repetitive, mechanical tasks can be done by machines, and everything will be just fine. And I don't really share this optimism. This is a solutionist
00:21:22
Speaker
view. Just because we implement new technology does not mean things will change for the better. If we want things to change for the better, we have to design technology with that purpose. So we have to make this an objective. And this is the difference between the normative level and the descriptive level.
00:21:46
Speaker
Yeah, it's so fascinating. We recently interviewed David Martens, who does AI ethics as well. And one of the things we kind of focused on for a second is the pre-processing stage of the AI modeling process. Just labeling the data and making sure that that is done in an ethical and efficient way makes all the difference.
00:22:14
Speaker
Okay, so we're gonna get into these three domains that you're talking about in a little bit, and you've already discussed some of the perils, and I think, you know, Sam and I share your concerns. Let's talk very briefly, though, about some promise, because we also want people to say, you know, these are important issues and I want this to work out, and so to take action in the right direction. So to that end, let's talk about digital twins. And I would like to go
00:22:43
Speaker
completely off the rails here. I'm going to use myself as an example, if that's okay. Giovanni, I know you don't know this, but I have trouble sleeping and I have high cholesterol. But I've never wanted to take statins because I've heard that they might further disrupt my sleeping. Now, given this example, how would having a digital twin help me out?
00:23:10
Speaker
Well, a digital twin is basically a virtual model of either an organ or a whole system, like a cardiovascular system, for example, or even in some very sophisticated technologies, the whole organism, right? Your whole body.
00:23:28
Speaker
So, it all depends on the data. The thing about digital twins, the first digital twins were developed by NASA, so they weren't really a medical thing.
00:23:48
Speaker
But the thing about digital twins is if you have data, let's say from lab results, like blood tests or whatever, and things that are in your patient history, and you combine this with data from your environment,
00:24:10
Speaker
and behavioral data, for example from smart wearables, so sensors that you wear on your body, mHealth technologies, Internet of Things technologies, which provide data that is ubiquitous and that you produce in your daily living, right, in your home environment and so on. Then you can take this data, integrate it with the lab results and all the other stuff and your genetic data and whatever,
00:24:40
Speaker
and build a virtual model. And the thing about this model is that it could be, at least, a real-time

Digital Twins and Personalized Healthcare

00:24:50
Speaker
representation of your body, right, because it's continuously fed with all this data. And it is also bidirectional, which means
00:25:01
Speaker
As a doctor, I could, for example, test a drug, not by giving you the drug and figuring out is this drug right for you, what is the right dose, but by simulating, giving you this drug on the digital twin.
00:25:19
Speaker
And then the system will simulate what will happen when I give you this drug. And so I can figure out what is the right treatment for you without even putting you through the ordeal of trying different drugs, trying different dosages and so on and so forth. So this is great stuff. And it could work on so many levels. It could also work as a risk prediction.
00:25:44
Speaker
Because if this model is constantly continuously fed with data, it could not only tell me what to do in a situation where you have an illness or something and I try to figure out what is the best way to treat you, but it could also help in predicting health outcomes.
00:26:04
Speaker
Even if you are healthy, the system could, for example, tell you, okay, your cholesterol is a bit high. And if you continue with your lifestyle, you may end up with a cardiovascular disease in 10 years time, 15 years time, whatever. And this could help to
00:26:23
Speaker
to figure out the risks and to recommend to you a lifestyle change or whatever. So this is really a fascinating and promising technology, and it could really help doctors figure out what is the best therapy for their patients. Because right now we have this problem: the doctor has your individual data, right?
00:26:50
Speaker
The doctor tries to treat you as an individual, but all the knowledge this doctor has stems from RCTs or systematic reviews dealing with tens of thousands of patients, right?
00:27:09
Speaker
So what these papers tell the doctor is, I don't know, drug XYZ helps the average patient by doing this and that. But this average patient is, of course, a statistical fiction, right? It's not real. And the doctor is faced with a real patient. And
00:27:36
Speaker
So, up until now, doctors have had to figure out how to bridge this gap between this average patient and the real person they have to treat. So it's almost like, oh, sorry, real quick, it's almost like right now, you know, typically the way it works is, I'm going to
00:28:00
Speaker
intervene and try to help you in terms of healthcare. I'm gonna try to help you by giving you something which would work if you're
00:28:11
Speaker
like an average person. If you're like the average, then this is going to go great. But if you're not, then we have a problem. It's exactly that. It's like you can shoot at something with a shotgun or with a sniper rifle. And the digital twin would be the sniper rifle. And in medical terms, we talk about precision medicine and personalized medicine.
00:28:37
Speaker
So, a diagnosis and treatment that focuses on your individual data and not the average data from tens of thousands of patients in drug trials, for example.
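To give a flavor of what "simulating a drug on the digital twin" could mean computationally, here is a deliberately oversimplified, hypothetical Python sketch: a one-compartment pharmacokinetic model whose parameters come from one individual's data instead of a population average. The model and every number here are illustrative assumptions, nowhere near a clinical-grade twin.

```python
import numpy as np

def simulate_concentration(dose_mg: float, weight_kg: float,
                           clearance_l_per_h: float,
                           hours: np.ndarray) -> np.ndarray:
    """Toy one-compartment model: C(t) = (dose / Vd) * exp(-k * t)."""
    vd_l = 0.6 * weight_kg           # crude assumption: Vd of about 0.6 L/kg
    k = clearance_l_per_h / vd_l     # elimination rate constant
    return (dose_mg / vd_l) * np.exp(-k * hours)

# Hypothetical individual parameters, imagined as estimated from the
# lab, wearable, and genetic data flowing into this person's twin.
hours = np.linspace(0, 24, 5)
for dose in (20.0, 40.0):
    conc = simulate_concentration(dose, weight_kg=80.0,
                                  clearance_l_per_h=5.0, hours=hours)
    print(f"dose {dose} mg:", np.round(conc, 2))
```

The design point is the one made above: the simulation runs on your parameters, not the average patient's, so different doses can be compared without putting the real person through trial and error.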

Role of mHealth and IoT in Data Collection

00:28:51
Speaker
Right. And just to really emphasize, I mean, you already brought this up, but can we just emphasize that role of mHealth and Internet of Things technology, right? Because it's like,
00:29:01
Speaker
The key here is that, you know, the reason we can decisively know whether, you know, what is it? What are you talking about, Roberto? What was the thing you're not sure you can take? It might further disrupt my sleep. Yeah. And the way we're going to know if it really would, based on the digital twin, is because we're going to know everything about Roberto specifically, because he's going to have, like, sensors on him, and he's
00:29:25
Speaker
or the idea is, like, we'll have a lot of very individualized data about Roberto in virtue of mHealth and the Internet of Things, right? And there's even, so I should say, just in the hopes that we get endorsed or funded by Fitbit, I wear the latest model, relatively. We're really delusional here, Gio. Sorry. Continue, Roberto.
00:29:48
Speaker
But I've seen, so this is how, you know, in the, you know, Insomniac community, there's even talk of like, there's like EEG smart caps that kind of measure your brainwaves and they can kind of give you a sleep profile and what's best for you individually. So I guess it would be integrating all of this data, all of the above and then some to build my digital twin. Yeah. Yeah, exactly. Exactly. And I mean, this is,
00:30:16
Speaker
In my view, this is an interesting technology for another reason because it can be used in so many different medical contexts. So it can be used by GPs.
00:30:28
Speaker
by general practitioners. And once you have this digital twin, it's very easy for your general practitioner to communicate with specialists, right? So they can transfer you to a specialist and don't have to do all this data transfer and stuff. They just can look at your digital twin and have all the data they need.
00:30:55
Speaker
So I think this is also from this perspective, like how different medical fields interact with each other. It would be an enormous step forward. Awesome. Maybe one quick follow-up on that too. Also, I was just reading some work by Alex John London, who is a philosopher at Carnegie Mellon, I think. But he deals with medical
00:31:25
Speaker
AI in medical context. And he talks about how, you know, there can even be bias in sensors, you know, like, there's so many places where there

Addressing Biases and Diversity in Medical AI

00:31:33
Speaker
can be bias. And, you know, he talks about pulse oximeters consistently overstating oxygen levels in the blood of Black patients. So, you know, it's interesting to think that there are even issues of bias at the level of the mHealth and Internet of Things devices picking up
00:31:55
Speaker
data about the individual. Sure, because in these technologies, as in every technology, there are some scripts, right? Something is inscribed in this technology in terms of what to look for and where to look for it, and these assumptions are standardized.
00:32:18
Speaker
They are built on the majority of experiences in a given community. When you see it that way, it's no surprise that marginalized groups or minority groups in a certain society
00:32:34
Speaker
that are simply not represented in most of the data we already have, like clinical studies, clinical trials. We know that ethnic minorities are underrepresented, and even women, who are not a minority in a society, come on, also women are underrepresented, and certain age groups as well. I mean, it's crazy to think about that, at least in a European context:
00:33:02
Speaker
The age group that by far is the group that needs the most drugs and the most medical treatment is the age group 60 and above. And this is exactly the age group that is underrepresented in drug trials and in all clinical studies. I mean, this is crazy, right? And we have this even now.
00:33:26
Speaker
And so, of course, using these technologies, sensor technologies and so on, these technologies are based on the knowledge that we have right now, which is the same knowledge that leads to these gaps in the trials and studies that we already have. So, yeah, it's no surprise that we have this bias built into these technologies already. Right.
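One concrete way such sensor bias can be surfaced is a subgroup audit: compare device readings against a trusted reference measurement separately for each demographic group. A minimal sketch with entirely fabricated numbers, purely to show the shape of the check:

```python
import numpy as np

# Fabricated example: device SpO2 readings vs. reference (arterial) SpO2.
# In a real audit these would come from paired clinical measurements.
paired = {
    "group_a": (np.array([97, 95, 96, 98]), np.array([96, 95, 96, 97])),
    "group_b": (np.array([97, 96, 98, 97]), np.array([93, 92, 95, 94])),
}

for group, (device, reference) in paired.items():
    bias = np.mean(device - reference)  # mean overstatement, percentage points
    print(f"{group}: mean device bias = {bias:+.2f} points")

# A consistently positive bias for one group would flag exactly the kind
# of hidden-hypoxemia problem reported for pulse oximeters.
```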
00:33:56
Speaker
Okay, great. So yeah, let's kind of dive more directly into some of the ethical issues. I mean, going back to that distinction you make between practices, relationships, environments, maybe we could start with practices. You know, one thing you talk about in your book is the practice of the doctor interpreting the patient. You know, they kind of have this, like,
00:34:22
Speaker
they have, like, a hermeneutical task: the doctor has to kind of interpret the facts about the patient and try to reach a diagnosis, trying to reach the proper treatment. So yeah, what kind of ethical concerns do you have about how
00:34:39
Speaker
AI will transform this practice, this practice of the doctor interpreting the patient. I very much like to think about this as reading the patient. And I think this is, at least from my experience in talking with medical practitioners, something that comes pretty close to the actual practice, which is a hermeneutical practice, as you said.
00:35:08
Speaker
And reading the patient means that, as I said before, you have these strategies for making sense of the data you have, like lab work coming back and you see, okay, the blood count is this and that, and you have all this other very quantifiable data. But reading the patient also means
00:35:34
Speaker
contextualizing this data with the patient as a person, because of course you can diagnose somebody with something and you can come up with a therapy, but
00:35:50
Speaker
if you read this patient, and if you know about the social background of this patient, you know that certain therapeutic measures won't work. Certain interventions simply won't work, because this patient
00:36:06
Speaker
is simply not up to it or this person does not have the support of a family which would be relevant here or this patient does not have the health literacy to cope with certain things and so on and so forth. And this is exactly the context that AI doesn't have. So the problem that I see here has been referred to as digital positivism.
00:36:37
Speaker
And digital positivism simply means a focus on quantifiable data and the belief that a model of a patient built from quantifiable data is something like a mirror image of this person. And this is all I need to know because everything I need to know is in the data. And this is simply not the case.
00:37:05
Speaker
I mean, we all know about the issue of positivism in science. And in science, I mean, we spent the better part of the 20th century to figure out how this works and how we can avoid a positivist reductionism.
00:37:24
Speaker
But when it comes to AI, suddenly we are all positivists. And this is simply because there is this mythological, epistemological power that surrounds the AI where we think, okay, this machine is so clever and so much smarter than we are.
00:37:41
Speaker
But the principle is the same. Positivism is something that you should use very carefully and only within certain limits, and you have to be aware of the epistemic limits of this kind of reasoning.
00:37:58
Speaker
A very, very simple example of this positivism, because we already talked about marginalized groups: people of color
00:38:15
Speaker
have a higher prevalence. I'm talking about the US context. I don't know about this in other countries, but this is data from the US. People of color have a higher prevalence of cardiovascular diseases, diabetes. And for quite some time, people tried to figure out what the reason for this is. And they looked for genetic differences.
00:38:39
Speaker
because we know that there are some diseases that are related to these genetic differences between people of color and Caucasians and Asian people and so on. For example, sickle cell anemia is something where people of color have a higher prevalence due to genetic reasons. So the same here: people looked at the genetics, until they found out, okay,
00:39:06
Speaker
the higher risk for cardiovascular diseases is not a genetic one, but is due to the social circumstances: poor housing, education, other socioeconomic factors like income, access to healthcare. Those can be stressors, right?
00:39:29
Speaker
Exactly. And they can cause a certain lifestyle that in turn can cause these cardiovascular problems and diabetes and so on. So this is a good example for a positivist versus a more contextualist, one could call it, approach.
00:39:49
Speaker
We shouldn't focus too much on quantifiable data. Quantifiable data is fine, of course, and we need it. But we should also always be aware that you have to take it with a pinch of salt, and be aware of the epistemic limits of AI when it comes to this data, because this data is limited: it just
00:40:12
Speaker
tells you this and that is so and so, but it doesn't tell you why this is so and so. And to figure out that, you need context.
00:40:22
Speaker
So I have a follow-up, but I have a quick comment as well. So one of the things that Sam and I are both fascinated by is this kind of credulity of people towards AI models in general, right? If the model says so, then it must be the case. And to kind of summarize what you've been saying is that, well, models only deal with that which can be quantifiable.
00:40:48
Speaker
And that is certainly not everything that's relevant about a person in general, but especially in a medical context. Yeah. And just real quick on that, Roberto, too. I'd be curious, Giovanni, I mean, you talk in your book about the issue of reductionism, but another side to this is also, isn't there this sort of ideology, you might say, or
00:41:13
Speaker
perspective among a lot of people in science that, well, the only things that could possibly be relevant to whether someone's susceptible to a cardiovascular problem are quantifiable factors, right? Like, I don't know, I'm just thinking, isn't there kind of an element here where people don't really think about how your health might be determined by social factors, in the sense of, like, you know, if someone
00:41:42
Speaker
is experiencing a lot of stress from, you know, being disrespected continually, or that sort of thing. I mean, that can impact your health. It's not just, like, a bottom-up
00:41:56
Speaker
thing, I guess. Is this kind of making sense? Do you see where I'm going? And the thing is, this is nothing new in medicine. We have known for decades how these things are connected and interconnected, how your psyche, your body, and your social circumstances,
00:42:14
Speaker
the social determinants, shape your health. So health is not a straightforward thing. It's not just somatic. It's not that easy to distinguish between, okay, there are psychological issues, some mental health problems, on the one hand, and on the other hand there are somatic issues, because
00:42:39
Speaker
A lot of them are interconnected and there's the social determinants that shape your health as well. So we know that in medicine, this is not something new, but as you said, credulity, I think is the right word to describe it. As soon as AI comes in, boom, we are forgetting about this. Just as we forget about the problems and challenges of positivism.
00:43:05
Speaker
and especially in a reductionist manner. So I do have a kind of a follow-up, and this one comes directly from some of my friends who are in healthcare currently, right? So I was asking them about, you know, I was telling them, I'm going to interview this professor, it's on
00:43:27
Speaker
AI in healthcare. And we kind of, you know, went over some of the main problems. And they kind of returned to a topic that they always complain to me about: you know, whenever we're all talking about our jobs, one thing that they complain about is that patients seem to be not very good at reporting their symptoms accurately.
00:43:52
Speaker
Christina, my wife, complains about me on that front all the time.

Contextualizing Data for Accurate Insights

00:43:57
Speaker
She's like, Sam, whenever I ask you what's going on, I could not get a good answer. Anyway, sorry. It's everything, right? Like when the symptoms started, and if you probe a little further, oh, actually it was two weeks ago and not two days ago. And so here's sort of the follow-up question that might maybe add some nuance there.
00:44:19
Speaker
Given that many patients are a bit unreliable on providing the relevant contextualized data, I know this is an empirical question, but do you think it's safer for AI to interpret objective data or for medical experts to interpret maybe poor data?
00:44:42
Speaker
Yeah, that's a tough one. I think this is actually one of the areas where AI could be used most efficiently, exactly for this kind of thing. Because if you have a smart wearable sensor or some IoT device, this, of course, will be much more reliable than a patient self-report. But the thing is,
00:45:11
Speaker
That said, it's only reliable when you are looking for the right variables to begin with. So you have to get this right. And the whole bias, which we were talking about before, has to be eliminated from the sensors and from the way the data is analyzed and processed.
00:45:40
Speaker
I'd say, of course, it's preferable to use this to get the objective data that you want to have; the quantifiable data is the best kind of data to analyze and process.
00:46:01
Speaker
What we need to do is to have this data in context and as part of the whole diagnosis and the whole process of figuring out the right therapy for a patient. This is the basis for it, of course, but it's not everything. So not everything in this whole process is about this quantifiable data, but it plays an important role, of course.
00:46:27
Speaker
And as long as this is not the end of the story, I think, of course, AI technologies, mHealth technologies, and so on, are far better than relying on patient self-reports. Right. Maybe we could dive in a little bit to that, you know, like you're talking about, yeah, if we managed to debias the data, if we managed to do that, you know, then it really could be enormously beneficial for achieving
00:46:57
Speaker
precise and personalized medicine. And we've already talked a little bit about the bias in the sensor data. There was another thing; I was looking at another example of bias,
00:47:12
Speaker
in genetics. And this is another thing I found in Alex London's work, I mean, you talk a little bit about this as well, where he's talking about how genetics is increasingly a guide to disease etiology and drug development.
00:47:27
Speaker
And he was mentioning there was a study in 2016 where the genetics database they looked at was overwhelmingly, you know, white and European. It was like only 3% of the participants in these genome-wide association studies were of African ancestry. Right. So basically, one thought is like, okay, yeah, what are your thoughts about, like, how likely are we actually going to achieve
00:47:57
Speaker
less biased data, you know? Or is it always going to be the case, I mean, is there really a genuine possibility that we will achieve equitable medical AI, rather than just medical AI that helps certain populations? Yeah. And then the other thing I want to kind of get into is, it seems like to ever achieve that, we would really have to go all in, I'm assuming, on mHealth and IoT for, like, everybody, right? And then
00:48:26
Speaker
that brings up a lot of interesting issues about surveillance and surveillance capitalism, which you discuss in your book. So I don't know if you want to just kind of touch on that and start getting into that direction of, like, all the scary dimensions of ubiquitous surveillance. Yeah. Yeah. But let me start with the bias issue. I think it's really
00:48:54
Speaker
It's really important to address this and to realize that, as you said, when we keep on developing technologies the way we do now, we will have technologies that are pretty awesome and that can really foster this precision medicine and personalized medicine, but not for everyone.
00:49:19
Speaker
So we have to be sure about that. And the thing is, how can we achieve an equitable development of AI? And it all depends, of course, on the datasets. So we need more diversified datasets as training data.
00:49:40
Speaker
I think this is the real issue at the moment because a lot of data scientists will tell you, well, this is easier said than done because where should this data come from? And high quality data is really hard to get.
00:50:00
Speaker
But I think if we want this to work in an equitable way and if we want to unleash the full potential of this technology, we simply have to do it. It will cost a tremendous amount of money. Yeah, it will be a huge effort, but I think it's worth it and we should do it because otherwise we have an elite technology for an elite group of people.
00:50:25
Speaker
and not something that is really useful for medicine, because medicine, of course, has to work for everyone. Right. Otherwise, it's not really useful. And of course, not ethical. Right. Yeah.
00:50:41
Speaker
I mean, I always have these debates. Austria is a very rural country; I think Austria is roughly the size of Maine or something, and we have a population of nine million.
00:50:59
Speaker
So, not that many big cities; it's a very rural country. When I talk about bias, people tend to say, well, we don't have that many Black people here anyway, so is this really a big deal? And this is, of course, nothing that we can work with in medicine. Medicine has to work for everyone, and of course there are people of color in Austria.
00:51:27
Speaker
Yeah. But you have this mindset, right? To say, okay, yeah, maybe it doesn't work that well for some people, but overall it's a good thing. And this is not acceptable either. I mean, it's an either-or situation: it has to work for everyone, or we simply cannot use it in medicine. Yeah.
00:51:47
Speaker
Should we dive into surveillance a bit? We've been talking this whole time about IoT already, Internet of Things applications, including, once again, Fitbit. I think it's twice per podcast and you got to drop it. Okay.

Balancing Big Data with Privacy Concerns

00:52:07
Speaker
But yeah, let's talk about ubiquitous data collection.
00:52:13
Speaker
Could you describe your perspective on this kind of medical surveillance, upsides, potential downsides, that kind of thing? I mean, the upsides are rather obvious, I think, because AI, of course, works best if it's fed with big data, which means large amounts of data, but also with high-quality data.
00:52:38
Speaker
Very important is this behavioral data and environmental data that you produce in your daily living, in your home environment, in your work environment, and so on. Because when you go to a hospital or to a GP's office, this is a lab condition.
00:52:56
Speaker
And if the GP does a blood draw, for example, what he or she gets is like a snapshot: this is your health condition at this very moment in time. And this environmental and behavioral data is longitudinal data. It's more reliable. You can build better models, especially predictive models, from it. So this is important.
00:53:22
Speaker
But the question is, where do we draw the line here? Should we use this for people, for example, with a chronic disease, in order to figure out what is the best therapeutic regimen? What are the potential risks for this person? Or should we use it for everyone? There are people out there who say, well, everyone should have a digital twin.
00:53:48
Speaker
from birth, right? There is a concept called guardian angel, which is, I think, self-explanatory, and which means that as soon as you're born, you get a digital twin, and you always wear your smart wearables, and maybe you have this IoT stuff in your home, so that there's always the best available data from you, your data, even if you're not actually ill. But
00:54:17
Speaker
it is relevant also for healthy people, because then you have the baseline data and you can use that in case you have an illness. And yeah, from a medical point of view, this would be great, right? But this is, I think, not a medical issue. It's a thing that we have to decide for ourselves. Is it really worth it? I mean, health is really an important good.
00:54:47
Speaker
But is it a good that is more important than privacy? And is it worth the downsides, the risks, the disadvantages that come with this permanent surveillance? And I mean, the most obvious, and this is a lesson from Foucault, the most obvious danger here is, of course, that this can
00:55:12
Speaker
be used for disciplining people, for dictating to people a certain routine and a certain lifestyle, not because it's better for them individually, but because it's simply more cost-effective for the health system. So it could be a marvelous instrument for achieving a certain health agenda, and helping companies sell stuff, of course.
00:55:42
Speaker
Right. Yeah, it's interesting to think about. Obviously, the most worrisome type of disciplining is when it's not even exactly for the well-being of the person. But it's interesting to think also about paternalism, where, you know, I'm trying to think of the name of this
00:56:05
Speaker
ethics philosopher who uses this. Oh, Stephen Darwall, he talks about like, you know, if you're the parent who's eating
00:56:17
Speaker
dinner with their child, you know, their daughter, and their daughter is like 30 years old. Even if your daughter should be eating broccoli, you cannot be yelling at them to eat broccoli, because even if it is in their best interest, yeah, it's still like a violation of their, sort of, I don't know, maybe autonomy or something. There's something disrespectful, yeah, about kind of
00:56:44
Speaker
being really aggressive about eating that broccoli, even if it's in their best interest. It's interesting to think about: even when, you know, a health intervention is in the person's best interest, there are sometimes when you still have to worry about it being
00:56:58
Speaker
disrespectful in a certain way. And this is exactly the case. I mean, at the moment, nobody can force somebody to agree to a medical treatment, even if their life depends on it. If the doctor tells me, if you don't agree to this and that therapy, you're going to die, and I say, okay, I'm fine with that, well,
00:57:22
Speaker
It's my life. It's my decision. It's the principle of autonomy. But where's autonomy when this kind of surveillance and self-surveillance and self-monitoring becomes mandatory?
00:57:36
Speaker
Not an option, but mandatory. I mean, from an economic or from a financial perspective, if you think in terms of health insurance and health systems, it would be very efficient to make this mandatory. But this is also an example of this positivist idea that
00:57:58
Speaker
you can be healthy, or can influence your health, through your behavior, by self-monitoring and so on. This totally ignores that we don't all start from the same position. Of course, it's always better to eat broccoli than burgers or whatever, but there are some people,
00:58:23
Speaker
Well, they could eat nothing but broccoli and it wouldn't really help them because they have some genetic factors or some social factors that are much worse than fat and salt and carbohydrates or whatever. So this is also an issue here. Yes, I can be responsible for my health up to a certain degree.
00:58:46
Speaker
but not wholesale, and not everyone can be responsible for his or her health in the same way, because of genetic differences, social differences, and so on. So the whole health agenda behind this is totally wrong in relying too much on this self-monitoring, self-management, and this kind of stuff. Awesome, yeah.
00:59:11
Speaker
Well, Giovanni, we're becoming mindful of the time here. But we did want to say, first of all, we haven't exhausted all the interesting topics that you talk about in your book. We didn't talk about empathetic AI; I thought that was a cool thing you bring up regarding mental health. We do want to pose to you just a final question. But is there anything that we haven't talked about that you think is really important from your book? Yeah, I think
00:59:41
Speaker
If I think of what is the most important thing I learned from writing this book, because you don't write a book in terms of, oh, I know stuff, let's write it down.
00:59:56
Speaker
It's a process, right? And I learned so many things that are really fascinating, and I wish I could have explored so many more things. But I think what is the most important thing that I learned, and this is maybe one of the most important messages of my book, is that this technology is there.

The Need for Ethical AI Development

01:00:20
Speaker
It will transform medicine completely.
01:00:24
Speaker
But we have a very narrow time window now where we can really influence this process, where we can shape the future of medicine. Because the technology will happen. It already happens.
01:00:44
Speaker
If you recall these two levels, this is what will be and this is what should be. The normative level is something that we could really realize now. So we could shape medical AI in a way that is ethical.
01:01:07
Speaker
But I don't really think that we have much time. So this is something that we have to do together. We have to connect medical experts with developers. We have to connect patient groups with software developers, system designers, and so on.
01:01:26
Speaker
in order to build technologies that are really focused on the needs and resources of people, and not dictated by companies, corporations, who just want to sell stuff.
01:01:44
Speaker
I mean, they can sell stuff. They can make profit. There's nothing wrong with that. But they can also make a hell of a lot of profit when they design something, produce something that really helps people. And I think this is a process that we have to shape. We can't just wait for technology to rain down on us, and everything will turn out great. This is the Eric Topol vision. So we have all this cool stuff. Let's implement it, and things will take a turn for the better.
01:02:13
Speaker
This is not going to happen. So we have to really actively shape this AI future. And I think the time window is very narrow. I'm not a big fan of predictions, but I'd say we have three to five years and after that, we have to deal with whatever is out there.
01:02:49
Speaker
Thanks everyone for tuning into the AI and Technology Ethics podcast. If you found the content interesting or important, please share it with your social networks. It would help us out a lot. The music you're listening to is by The Missing Shade of Blue, which is basically just me. We'll be back next month with a fresh new episode. Until then, be safe, my friends.