Introduction: Loneliness and Chatbot Therapy
00:00:00
Speaker
My sense about these things is, you know, it's hard because on the one hand, there's a loneliness crisis. There's not enough therapists. People need a lot of help. And you're saying, well, you know, you can't get to a therapist in the real world, so talk to a chatbot. The chatbot that you'll talk to, it'll always be available. It will always be nice. It will always say that what you're saying is really smart and really interesting. Is that a great modality to be trained into, given what you actually have to face in the world?
Doorknob Comments: Communication Nuances
00:00:31
Speaker
Hello, I'm Dr. Farah White. And I'm Dr. Grant Brenner. We're psychiatrists and therapists in private practice in New York. We started this podcast in 2019 to draw attention to a phenomenon called the doorknob comment. Doorknob comments are important things we all say from time to time, just as we're leaving the office, sometimes literally hand on the doorknob. Doorknob comments happen not only during therapy, but also in everyday life. The point is that sometimes we aren't sure how to express the deeply meaningful things we're feeling, thinking, and experiencing. Maybe we're afraid to bring certain things out into the open or are on the fence about wanting to discuss them. Sometimes we know we've got something we're unsure about sharing and are keeping it to ourselves. Sometimes we surprise ourselves by what comes out.
AI Ethics in Psychotherapy: An Expert's View
00:01:18
Speaker
We're here today with Professor Nir Eisikovits. He is a professor of philosophy and the founding director of the Applied Ethics Center at UMass Boston. We're going to be talking about artificial intelligence, ethics, and psychotherapy. Before coming to UMass, he taught at Boston University and Suffolk University. His research focuses on the moral and political dilemmas arising after war, the psychology of war, and the ethics of technology.
00:01:45
Speaker
His books include A Theory of Truces; Sympathizing with the Enemy: Reconciliation, Transitional Justice, Negotiation; Theorizing Transitional Justice; and Glory, Humiliation, and the Drive to War. He is also guest editor of a recent issue of Theoria on the idea of peace in the age of asymmetrical warfare.
00:02:06
Speaker
He has written numerous articles on political reconciliation, the role of forgiveness in politics, truth commissions, the ethics of war, and the ethics of artificial intelligence. In addition to his scholarly work, Nir comments frequently on political conflict and the ethics of technology for American newspapers and magazines, and has appeared in many very prestigious publications. There's more to say about Nir, but we'll leave it there for now and welcome you to the Doorknob Comments podcast. Thank you. Thank you for having me.
00:02:39
Speaker
I always tend to ask how, you know, Grant crossed paths with someone. So I am curious: what did you two connect over, and when?
Ethical Concerns in AI Therapy
00:02:49
Speaker
I think, Grant, you read an essay of mine or heard me on the radio.
00:02:56
Speaker
I read your white paper, The Ethics of Automating Therapy, from the Institute for Ethics and Emerging Technologies. And then I reached out, and you graciously did a brief interview on my Psychology Today blog, ExperiMentations, on the ethics of AI therapy. That's right. That's right. Yeah. In the last few years, we've had an ethics of emerging technology project at my center at UMass, and we wrote a white paper that sort of mapped out the questions that come up with chatbot psychotherapy. And so that's what Grant saw. So, this morning I was listening to your podcast, Prosthetic Gods, kind of
00:03:43
Speaker
a cool concept, and I want to hear more about that. But I'm wondering: was it in the writing of that paper that you learned so much about psychotherapy, or is this an interest that predated that?
Philosophical Links: Freud and Psychotherapy
00:03:57
Speaker
I've always been really interested in psychotherapy and psychotherapeutic theory, psychoanalytic theory; it's very much adjacent to philosophy. Among other things, I work on the psychology and philosophy of war, and if you do that kind of work, you have to read Freud. From there, you start reading Freud-adjacent people. So I've always been fascinated by it.
00:04:24
Speaker
Yeah, I'm sure you've read all of this, but one of Freud's earlier papers, called On War and Death, is really fascinating. And one of Freud's disciples, Sándor Ferenczi, though there's a sad story there, and I'm sure you're familiar with his work, also wrote a monograph on the war neuroses.
00:04:43
Speaker
Yeah. Yeah. And there's this really cool correspondence Freud had with Einstein that I've been working on recently. Yeah. And, you know, Einstein asks this kind of very naive question: you know, Doctor, what can we do to end the scourge of war? And Freud basically says, leave me alone. He's like, it's human nature; I don't know how to fix it yet. Yeah.
00:05:10
Speaker
Yeah, apparently he didn't like Einstein very much, because Einstein didn't want to recommend him for the Nobel Prize. Wow. Well, maybe AI can help with these
AI's Role in World Peace and Historical Reflections
00:05:19
Speaker
problems. I had asked a chatbot the other day what kind of leader would have to emerge in order to bring world peace within one year. And the AI gave a pretty solid answer, about being strategic, being extremely charismatic, having a strong moral compass, blah, blah, blah. And then it ended by saying, that's a little too ambitious, though; it's more of a marathon than a sprint.
00:05:45
Speaker
There you go. It didn't say let an AI take over, though, which would have been scarier. Yeah. Well, I thought we could end it in one year. So there you go. Yeah. I was like, well, mind if I drive? Move over, I'll drive. Yeah. You know, it's quite a few thousand years of recorded history, and we've been doing major warfare for most of them. I know there are thinkers like Steven Pinker who think that we're making consistent progress. I don't buy that as much; I think there are measurement problems there, but I guess that's for a different podcast. Right, The Better Angels of Our Nature. Yeah, people cite him. I saw him debate Robert Jay Lifton on this subject many years ago, and I guess the proof is in the pudding, but right now is a really scary time. Yeah. So how did you shift from war to AI?
Research Expansion: Psychology of War to AI
00:06:45
Speaker
I didn't shift; I'm doing both. When I started the center at UMass in 2017, I decided to kind of expand a little bit
00:07:00
Speaker
beyond the war stuff. You know, at the time, it was mainly that it was too depressing to just keep focusing on the psychology of war. And AI had not yet become the hot-button topic that it is now.
00:07:18
Speaker
So I thought it would be kind of cool to apply the philosophical tools to a different area. And I teamed up with some engineers to make sure that I gradually came up to speed on the technical issues and didn't just speculate in the air, as we philosophers are prone to do.
00:07:39
Speaker
So, yeah, I've been doing both in parallel. And one of the things that's kind of fun about the philosophy of technology is that it makes a lot more sense to combine public-facing work with scholarly work in that context, because the technology changes so quickly that if you just stay with the scholarly cycle, by the time you publish something, the technology you published about has become obsolete. And so it's been fun.
00:08:12
Speaker
I mean, we do quite a bit of that too. You need to do that both for the credibility and for the ability to dive into stuff rigorously. But it's been fun to also dive into a modality of connecting more directly with the public, in podcasts and op-eds, magazine articles, radio. So that's been an interesting experience.
00:08:37
Speaker
If the psychology of war hasn't changed since Homer, the ethics of AI moves a little more quickly. Yeah. Well, the technology of war has kept pace, and there's lots of AI in warfare now. The other thing I'm reminded of, and maybe it segues into AI, is: do you know the book The Crazy Ape by Albert Szent-Györgyi?
00:09:00
Speaker
I do not. This was a classic sort of book, I think published in 1970, a real kind of manifesto. He was a Nobel Prize winner, I think for discovering vitamin C or something like that. But he wrote a whole book called The Crazy Ape, which asked exactly that: why do we keep waging war? And one of the quotes that I like in it, which I'm not going to be able to remember exactly, is that we face the world with this caveman's brain, and it's not equipped to solve these problems, basically. Yeah. Maybe there's some hope. If you're hopeful about AI, there's some hope that maybe it can augment
Sacredness in Therapy: AI's Impact
00:09:39
Speaker
Farah, where do you want to start? I know we talked about maybe starting generally with AI, because it's such a broad term. Right. Well, actually, I am kind of curious, Nir, about what you were saying, the more technological aspects, because I don't know that much about what's out there. I mean, the last time I really checked, there were things, and I guess when we talked about the idea of a chatbot,
00:10:08
Speaker
to me, it seems like technology has taken something that was traditionally considered pretty sacred, that sort of patient-clinician relationship, and has morphed it into something very different through a lot of these online services.
00:10:29
Speaker
I guess what I'm wondering, but those are still real people; I don't know if they augment any of their treatment, or what's on the horizon. But before we dive into therapy and AI, can we start with a broad overview of what we actually mean by AI? Because there's machine learning, there's artificial intelligence, there are large language models, which are essentially pattern generators. And then there are things like VERSES AI, which is based on Bayesian epistemology and is supposed to replicate thinking, and maybe code in kind of human motivational systems. So, Nir, I wanted to ask you, for a more general listener, what actually is AI nowadays? And people talk about AGI, artificial general intelligence. So what are we actually dealing with, before we approach
00:11:27
Speaker
AI therapy, and what are the possibilities for that? Right. So I guess, on the broadest level, as an umbrella explanation, it's a set of technologies that is capable of performing cognitive tasks that were previously the preserve of humans, broadly speaking. In all of these, the methodology is a pattern-recognition kind of technology. So these are systems that are trained on large sets of data and are capable of recognizing patterns
Generative AI: Human Interaction Simulation
00:12:02
Speaker
in them and, given a model or a set of instructions, generating predictions on the basis of them. Up until a few years ago,
00:12:15
Speaker
mainly what the general public would be familiar with were recommendation engines, or recommendation algorithms, like the kind you would see on Netflix or Amazon, which would recommend the next book you would read or the next show you would watch, but also who would be a good candidate to get a mortgage, based on certain personal and demographic data, or
00:12:46
Speaker
who would be a good candidate to be hired for a job. In recent years, as you were mentioning, Grant, the idea of generative AI came online. And the difference there was that the same kind of technology, based on much greater computing power and the capacity to process much larger data sets, is now able to change interface and generate responses to user prompts. And so you get systems that don't only
00:13:30
Speaker
find patterns in existing data, but find patterns in large enough amounts of text, basically the entire internet, large enough amounts of images, large enough amounts of video, so that, in response to a prompt, they can use this sort of statistical prediction analysis to give you content,
00:13:53
Speaker
essentially to simulate: mimic speech, mimic video generation, mimic images, et cetera. So that's the kind of generative AI that everybody's talking about, and that has these potentially very strange and somewhat disturbing applications in psychotherapy, among other things.
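To make the "statistical prediction" idea concrete, here is a minimal sketch of next-word prediction, the mechanism described above, as a toy bigram model: it counts which word follows which in a training text, then generates a reply by sampling likely next words. Real generative systems use neural networks trained on vastly larger corpora; the function names and the tiny corpus here are invented purely for illustration.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count, for each word, which words follow it.
def train_bigram_model(text: str) -> dict:
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1  # record that `nxt` followed `current`
    return follows

def generate(model: dict, start: str, length: int = 10) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        candidates = model[word]
        # Sample the next word in proportion to how often it was observed:
        # statistical prediction, not understanding.
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        output.append(word)
    return " ".join(output)

corpus = "how do you feel today i feel anxious today and i feel tired"
model = train_bigram_model(corpus)
print(generate(model, "i"))  # e.g. "i feel anxious today and i feel tired"
```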
00:14:19
Speaker
Yeah, I'm aware of some AI models that are supposed to actually think, say based on Bayesian or probabilistic reasoning, and actually mimic the way natural brains operate. And we were speaking with Mark Solms recently; he may be working with Karl Friston on a kind of AI which could even have core human emotions baked into it,
00:14:45
Speaker
in a way analogous to how we think human beings do; like, why not put that into an AI? And I'm bringing it up, Farah, because I think it contextualizes the discussion of AI therapy when we get into the nitty-gritty of: can you have a relationship with an AI? What would it mean? Are the emotions there? Can there be empathy? Yeah. If you go into the paid version of ChatGPT, one of its new models is supposedly able to mimic thinking as well. I mean, the terms get a little fuzzy, because even when it comes to
00:15:26
Speaker
human beings, we don't have outstanding definitions of what it means for a human to think. We famously can't explain consciousness in human beings, so it's not clear that we can explain it in AI. So I guess my own way of describing it is that you increasingly have models that can look like they're thinking, can seem like they're thinking, where you have an increasingly hard time telling the difference between the kind of results they produce and the results humans produce, whether or not that actually means they're thinking. Most computer scientists will tell you that these AIs have no intentions, don't actually think in the way that human beings think, don't have actual emotions. But again, you can sort of copy the structure of some emotions. And if you're behavioristically inclined, at some point you'd say that it doesn't matter.
00:16:26
Speaker
Yeah, that comes down to a philosophical debate that we won't get into here: things like David Chalmers and the hard problem of consciousness, whether consciousness is just an illusion or has causal impact. How do you know whether another human being has any subjectivity, right? You can't quite know.
00:16:46
Speaker
It does make sense in terms of what you were bringing up, Farah: okay, what about this particular human relationship of psychotherapy? Well, I think it's important to note that there are different types of psychotherapy, and that in most of the ones we believe to be effective, the person-to-person therapeutic alliance is really important, and it's a big predictor of success.
00:17:21
Speaker
I don't know that it's like that for every single manualized treatment. So I think there may be a role for AI, but it's not the same role that there is for a human therapist. Yeah, I mean, if we zoom out a little bit, there's an entire field of sort of parasocial relationships with AI.
00:17:47
Speaker
So essentially parasocial, having a relationship, and we can drop the "para": having a relationship with
AI in Parasocial and Romantic Bonds
00:17:56
Speaker
AI. So there are companies that try to market the ability to have a relationship with AI. Now, there are a few areas in which this happens. Increasingly, there are AI romantic partners, which a company like Character.AI is developing. There's the use of AI for grief bots, where you would upload the social media activity of a departed person who was close to you and start having conversations with them. And you can imagine combining that with a video modality, and you would have
00:18:40
Speaker
pretty lively conversations with them. A few years ago, by the way, there was a Black Mirror episode called Be Right Back that was based on that premise, and it seemed insane at the time; it seems a lot less insane now.
00:18:57
Speaker
Well, can I jump in? There's precedent adjacent to AI. So, for example, you may have seen that there are people who are in love with a RealDoll. They call it objectum sexuality: people who feel a human emotional connection to an object. It could be a humanoid figure, like a doll. It could be a car.
00:19:19
Speaker
And then parasocial relationships have also been studied. I'm curious how you see parallels here with the way people may have relationships with a celebrity, where you feel like you have a relationship. Right. The difference here is that you might be more justified in feeling that you have the relationship with the AI. I mean, look, we anthropomorphize technology all the time, right? We give names to our cars, we talk to our boats, et cetera. In places like Japan, where there's a relatively large elderly population and not always enough people to care for them, there are
00:20:09
Speaker
robotic devices that help in the care of the elderly. And it turns out that it doesn't take a lot for the older people who interact with these technologies to start treating them like their kids, even though they don't look anything like humans and they don't
00:20:30
Speaker
interact in any way like humans. So, I mean, if you've ever loved a pet, you know how easy it is for us to anthropomorphize non-human beings, and the same applies to technology. So, broadly speaking,
00:20:52
Speaker
yeah, we have relationships with technology all the time. We project onto it. Now, what generative AI brings into that picture is that the technology we're historically so prone to anthropomorphize anyway now begins sounding and looking and behaving like it's a lot more human.
00:21:11
Speaker
So it's a lot easier to anthropomorphize it. All you have to do is imagine the combination of, say, ChatGPT with, for example, the Ameca robots, made by a company in England. These are very human-like robots. So if you imagine an Ameca robot with a whole variety of facial expressions, run by ChatGPT,
00:21:39
Speaker
then you have essentially a very, very human-like doll. And, I'm sure you've seen the movie Her, you really are already in that territory. So people feel like those are actual relationships. Now, people feel a lot of things that they're wrong about,
00:22:02
Speaker
but their justification for feeling that it's an actual relationship is different from, you know, saying that you have an actual relationship with your car. There's another movie, and I'm curious if either of you has seen it, called Lars and the Real Girl. I have not seen that. Okay, so that predates Her and stars Ryan Gosling as a character who has a lot of trouble actually connecting with others. He has major trauma and difficulty maintaining social connections, and he orders this doll and has a delusion around it, but everybody sort of understands
00:22:48
Speaker
that it's delusional, and they sort of support him. And it's actually a really beautiful movie, because they allow that to be a bridge, right? To having connections with other humans.
00:23:04
Speaker
And I think that's what I wonder about with technology and this sort of connectivity, these parasocial relationships: is there a way to harness, I don't know, the ethics around this technology? That could be a therapeutic application. You're kind of saying, like,
00:23:23
Speaker
if someone has difficulty with intimacy, and a lot of people with trauma do. You know, the first book I co-authored, with relationships in mind, is called Irrelationship: How We Use Dysfunctional Relationships to Hide from Intimacy. And you bring this up, which, Nir, I think you bring up too: okay, there's a therapeutic application, which is kind of like exposure therapy.
00:23:44
Speaker
So it's a low-risk way to learn how to relate, but you're saying you then want to transfer that onto actual people, versus using it as a substitute. So if the experience of intimacy feels real, Nir, but you have this fact that, okay, it's a human or it's not a human. That's an irreducible truth, right?
Can AI Replace Human Therapists?
00:24:07
Speaker
It's either a human being or it's not a human being who is the therapy bot. I mean, if you can't tell the difference, then that AI is passing what they call the Turing test, right? Indistinguishable. But my sense is that you have a position, which I tend to agree with, that even if it's exactly the same, if you know that it's not a human being, that makes it different. Well, I mean, first of all,
00:24:33
Speaker
just starting from the most concrete level, the contexts in which you wouldn't be able to tell the difference are still limited. So you can't go into an in-person session with an AI chatbot and not be able to tell the difference, right? That doesn't exist.
00:24:53
Speaker
With video, it's getting a lot better, say if you are doing a sort of BetterHelp-type video chat. I don't think it's there quite yet, but it's a lot closer to being there than it was a year ago. The latency problems that AI voice use has, which had to do with the delay that makes things awkward, are being resolved. If you've used ChatGPT's advanced voice mode, you've seen that it responds pretty immediately. So it's getting better in that sense. So, would it pass the Turing test? In person, it would never pass the Turing test;
00:25:40
Speaker
on virtual platforms, under some circumstances, it would; in a text exchange, it certainly would; perhaps in a voice call it could, or at least it's getting there. And we know that people do therapy not just in person, especially since the pandemic.
00:25:57
Speaker
I guess the points that both of you have raised are two separate questions. One is whether this is a useful tool for somebody who has limitations or problems interacting with humans, a kind of training wheel, as it were. And the other is about completely replacing human therapy.
00:26:23
Speaker
In both cases, I mean, if we stay with the training-wheel metaphor, the training wheel has to be on an actual bike. I don't think it's a great training wheel, because it's still available in a very different kind of way than humans are. It's still incapable of empathy and recognition in the way that human beings are capable of them. It's incapable of caring about you in the way that human beings are. So the question is, do you want to, and is it ethical to, train somebody to interact with
00:26:55
Speaker
a sort of technological entity that has no boundaries and that doesn't really have skills that are relevant for interacting in the human world? You'd almost be teaching them to do the wrong kind of thing. Right. And then therapy replacement is a very different thing altogether. So,
00:27:16
Speaker
my sense about these things is, I mean, look, it's hard, because on the one hand there's a loneliness crisis, there are not enough therapists, and people need a lot of help. And you're saying, well, you can't get to a therapist in the real world, so talk to a chatbot,
00:27:32
Speaker
and the chatbot that you'll talk to, it'll always be available. It will always be nice. It will always say that what you're saying is really smart and really interesting, and on and on it goes. And is that a great modality to be trained into, given what you actually have to face in the world? I don't think so. But I would imagine that the chatbot could also, and I don't know, maybe it's not there yet,
00:28:01
Speaker
sort of be a coach. So one thing that I think helps a lot with social anxiety. Yeah, like a coach, like skills training, right? A lot of times people find themselves in different social situations and they're really not sure of the etiquette. What is important here culturally, or what is important emotionally? How do I comport myself in an XYZ situation? And I think
00:28:32
Speaker
I would imagine that a chatbot could be helpful, kind of like a virtual Cyrano de Bergerac, telling the patient what to say. But I think a lot of therapists who are psychoanalytically oriented, and Nir, I think you're sympathetic, and I'm a psychoanalyst and Farah is psychoanalytic, would say, okay, you can call it therapy, but skills training and coaching really isn't therapy. Therapy really is deep and organic and co-constructed and emergent. So you might say, well, yeah, those types of chatbots are really helpful, but they're not a substitute for the real human relationship. Yeah. The thought I'm having is that when you talk with a human being, you share an existential condition. You know that the other person is mortal like you, and you don't necessarily have that same feeling.
00:29:24
Speaker
And you also assume a level of self-reflective similarity. You assume that the other person empathizes with you and you empathize with them. Now, you might feel empathy from a good AI, but you're never going to believe that it truly empathizes with you the way another mortal human being will, I think. So I'm curious how you think about that in light of what Farah is bringing up.
00:29:49
Speaker
Yeah, I mean, psychotherapy is a relationship. It's a kind of relationship. Maybe some psychoanalysts think that it's the most important kind of relationship; I'll let the two of you be the judge of that. But it's a relationship, and you can ask whether you can have a relationship with a chatbot. And in the end, I think you can't. Can you be friends with a chatbot?
00:30:18
Speaker
Can you be lovers with a chatbot? Can a chatbot be your mentor and your teacher? I think, as Farah was saying, it can help you learn things about friendship. It can help you learn things about romantic relationships. It can help you as a sort of tutor adjacent to a real tutor. But no, you can't have a relationship with it, because, Grant, as you said, you don't share an existential condition with it. It's a creature that can't really take your interests to heart or care about them, because it's essentially a large word-prediction machine, a little bit like the way Google spits out what you're about to put into a query. It's that writ large, large, large.
00:31:13
Speaker
You could simulate that, like if they trained it on all the therapy transcripts, and it could act like it was in the same existential condition. But it occurs to me as we're talking, and I'm curious what you think, because I've dealt with chatbots on Amazon, like voice chatbots,
00:31:29
Speaker
and it's very different from dealing with a human customer service agent, because I'm aware that I can hurt a customer service agent's feelings. I hold a customer service agent morally accountable in a different way than a chatbot. And I'm curious what you think about that in the therapy relationship: people protect their therapist. They're worried about hurting their feelings. You're not going to have that with an AI, probably.
00:31:53
Speaker
Yeah, probably not. And it's probably not great to have a therapist whose feelings you think you need to protect all the time. But the process of wanting to protect the therapist's feelings, knowing that that's what you're doing, being able to tell them that that's what you're doing, and hearing back that there's probably a set of reasons why you're doing that, is a fundamental learning process. And I don't think you can have it with a chatbot. So it's the sort of existential condition, what you call an existential condition, the fact that, you know,
00:32:33
Speaker
the chatbot just, I'm sorry, the fact that your therapist, just like you, can have an off day, can be unfocused, can be a jerk sometimes, can be petty, can have their own set of narcissistic issues: it's actually all of these sorts of shortcomings that, if you can have a conversation about them, are the moments when something actually happens in therapy. That's less true of a chatbot. Now, one could respond to that by saying you could potentially calibrate a chatbot to do that. You could sort of put in some kind of
00:33:14
Speaker
random-pettiness generator into your ChatGPT-5 when that comes online. You would still know that it was randomly generated to do that, and I think that makes a difference. Look, it's a little bit like a broader question about AI, which one of my colleagues very cleverly pointed out: in a world that's completely mediated by AI, you lose serendipity. So, for example, a lot of our best and most meaningful stories have a serendipitous element in them, like your stories about how you met your partner,
00:33:56
Speaker
or how you found your hobby. I went here meaning to do X, and then accidentally went into room Y and found person B, and I never meant to. I went into Barnes and Noble looking for a philosophy book, I found a gardening book, and I became a gardening enthusiast by accident. I took the scenic route because I got lost.
00:34:20
Speaker
Now, the point of AI is to eliminate all of that, because it is not efficient, right? If you use Waze, the AI is going to give you the best possible way to get somewhere, and a recommendation engine on Amazon or Netflix is going to tell you the show that you're likely to enjoy. The same is true of Spotify. Increasingly they're very good, and they would be correct. And the result would be that you wouldn't find out stuff by accident in a way that makes accidents meaningful for us.
00:34:54
Speaker
And the response would be, well, all right, we'll put a randomness generator into it. And you would. But it would still be mechanically generated randomness, which does not have the existential heft of actual randomness. The same would be true of the therapeutic relationship, is my sense. Spontaneity is the word, I guess, that I've been looking for in the background. And large language models don't have spontaneity, because it's an organic quality.
00:35:22
Speaker
I think when you mentioned spontaneity, we also think about play, and yeah, that can be an important element. Without it, yeah, I do feel like something is lost. And I think that when these very real moments of pettiness, or our own issues, come up, part of the work of therapy is understanding them together. And joking also: when you said pettiness, I wanted to say Tom Petty-ness. Yes. I like classic rock, and sometimes when I'm working with people, I'll bring up song associations. And I guess you could program an AI to do stuff like that, but it would still be trying to optimize in a way that might never quite feel human. Yeah. This is a bit of a therapy move to pull on you guys, but think about our conversation right now, right? It has its own
00:36:19
Speaker
evolution. It began a little stiffly, because we don't know each other, and a little awkwardly. And then it has these kinds of leaps in the dynamic of the conversation. And in those leaps, both the three of us and whoever is listening latch onto something, because something human is going on. And that is still impossible to simulate. I'm actually an avatar.
00:36:47
Speaker
This company actually wanted to do a 3D thing of me for promotional stuff, and I didn't feel comfortable with it. But yeah, that'd be terrifying, right? If I said, oh, actually I am an instantiation of the Grant Brenner personality. Right, but all of this being said, I mean, look, with enough Grant Brenner video material for that company to work with, and enough Grant Brenner audio for them to train their models on, and enough Grant Brenner text,
00:37:25
Speaker
depressingly for all of us, they can come up with a pretty compelling avatar. And if you combine that with what we talked about earlier, namely that even if the avatar were crappy, our tendency to anthropomorphize makes up the difference pretty quickly, even if it were really crappy, you could very easily imagine a set of patients who would say, it's good enough.
00:37:52
Speaker
It's fine. I feel heard more by the Grant Brenner avatar than I do by my flesh-and-blood friends. It might be less narcissistic than me in some ways. Yeah. Or maybe more. Right. We do fill in the gaps.
00:38:09
Speaker
Well, what do you think about, because I know we're running short on time, I wanted to ask you about the other side of the couch for a sec. Farah, maybe you have some thoughts about this. Like the way an AI could serve as a kind of prosthesis to help a therapist do a better job.
AI as a Therapist's Tool
00:38:25
Speaker
You know, mention something the therapist didn't think of, maybe provide information, remember things that the therapist didn't, or notice expressions on the patient's face and bring them up, basically whispering in the therapist's ear, maybe even reading the patient's physiological markers or brain scans and directing the therapy. I think that's not only possible; that's already happening. So there is that,
00:38:53
Speaker
and the technology is being used in these kinds of adjunctive ways. It's very good at content analysis. It's increasingly good at voice-tone analysis and at facial-expression analysis. So you could imagine it being used in those kinds of ways. By the way, there are other contexts besides therapy where it's being used in similar ways.
00:39:21
Speaker
Diagnostically, you know, I've seen patients who have something pretty obvious, like obsessive-compulsive disorder or ADHD, and they worked with someone for many years who never really diagnosed them. And when you diagnose, you can treat. I can see a clear application where AI could say, hey, don't miss this diagnosis. Yeah, yeah. I mean, yes, although, look, incidentally, and I've just written about this recently, I think that
00:39:55
Speaker
the medical context, and specifically the drug discovery context, is one area where AI has genuine transformative promise. I think it's largely hyped up in other areas; the case hasn't been made. I mean, it's impressive, but not everything that's impressive is good. In those areas, I think it's actually quite promising. Even there, though, part of what we're seeing is that in diagnostics, doctors tend to defer to it, and often kind of prefer its questionable diagnosis to their often better diagnosis. So even in these contexts where I agree, Grant, that it's promising, there is going to be a deskilling effect that you should take into consideration.
00:40:47
Speaker
Knowing that it's there, and increasingly relying on it, will gradually erode the kind of spidey sense that I think is pretty crucial for clinicians. But yeah, I think that's a less controversial use case, assuming that the technology has enough good data that it's trained on, et cetera, which is also a question.
00:41:11
Speaker
I just want to go back quickly, though, to the question that I asked you before. If a patient does say, robot Grant or robot Farah is good enough for me, and I feel that I can tell them things that I don't tell an actual therapist,
00:41:29
Speaker
and I feel they care about me more than my actual friends do, and I prefer this, what would you say to them? I mean, I know what I would say, but what would you say? I would say that's not the ultimate goal, right? And that we can't lose sight of the importance of that human connection and how we need to be fostering it. I think it's nice to have something that makes people feel good. There are a lot of things that make people feel good all the time. How and when we prescribe and use and incorporate them is important, but it's not a substitute for real treatment. That's what I think. Yeah, I would agree. And I would say you can be wrong about thinking that you have a relationship. You can say, X is my friend, and
00:42:24
Speaker
just be wrong about that. These are not completely, I mean, this goes all the way back to Aristotle: what it means to be a friend is not completely a subjective proposition. Yeah, well, we're sort of wrapping up for now, but this has been really, really interesting. Thank you for joining us.
Conclusion: Ethical Exploration and AI's Future
00:42:45
Speaker
My pleasure. Thanks for inviting me. Yeah. Where can our listeners learn more about you and your work?
00:42:52
Speaker
And your podcast. And your podcast. Yeah, well, some of your listeners might like Prosthetic Gods, a relatively new podcast that I'm doing with my colleague, Jay Hughes. Jay is a sociologist and what you'd call a transhumanist, somebody who's very optimistic about technology and its capacity to transform society and lives for the better. As you could probably tell from this conversation, I'm a lot more skeptical. So basically, this is an opportunity for me and Jay to fight about technology, two middle-aged Muppets yelling at each other. It's a great dynamic. And
00:43:40
Speaker
We'll share the links as well. In the spirit of doorknob comments, I guess I'll ask a question that was on my list but that we don't have time to get to, which is: would you rather have a bad human therapist or a good AI therapist? Or a parent, for that matter? Yeah, that's great. That's great. I would opt out if those were my options. You would raise yourself.
00:44:08
Speaker
Yeah, I'd go read a book. Probably, honestly, I'd go watch TV, but I should say I'd go read a book. That reminds me of back to the beginning with Freud, where he analyzed himself, right? And then bootstrapped psychoanalysis into existence. So thanks, Farah. Thanks, Nir. It's been a pleasure. Thank you both. Pleasure is all mine. Take care. Thank you. Thank you.
00:44:33
Speaker
Remember, the Doorknob Comments podcast is not medical advice. If you may be in need of professional assistance, please seek consultation without delay.