Can Ethical AI Democratize Therapy and Higher Quality Care?

S4 E11 · Bare Knuckles and Brass Tacks

Clinical psychologist Dr. Sarah Adler joins the show this week to talk about why “AI therapy” doesn’t exist, and why she’s still bullish on what AI can help therapists achieve.

Dr. Adler is a clinical psychologist and CEO of Wave. She's building AI tools for mental healthcare, which makes her position all the more striking: what's being sold as "AI therapy" right now is dangerous.

Chatbots are optimized to keep conversations going. Therapy is designed to build skills within bounded timeframes. Engagement is not therapy. Instead, Dr. Adler sees AI as a powerful recommendation engine and measurement tool, not as a therapist.

George K and George A talk to Dr. Adler about what Ethical AI looks like, the model architecture for personalized care, who bears responsibility and liability, and more.

The goal isn't replacing human therapists. It's precision routing—matching people to the right care pathway at the right time. But proving this works requires years of rigorous study. Controlled trials, multiple populations, long-term tracking. That research hasn't been done.

Dr. Adler also provides considerations and litmus tests you can use to discern snake oil from real care.

Mental healthcare needs innovation. But you cannot move fast and break things when it comes to human lives.

Mentioned:

A Theory of Zoom Fatigue

Kashmir Hill’s detailed reporting on Adam Raine’s death and the part played by ChatGPT 

(Warning: detailed discussion of suicide)

Colorado parents sue Character AI over daughter's suicide

Sewell Setzer's parents sue Character AI

Deloitte to pay money back after being caught using AI in $440,000 report

Transcript

Introduction and Legitimacy of AI Therapy

00:00:00
Speaker
first of all, is there clinician leadership in the company? Investigate, especially Gen Z. Like, you all are the Yelp, you are the double-click generation. You're researching these products. Go look. Who is funding it? What else have they funded?
00:00:13
Speaker
Who is leading the company? Is there a C-level clinician on board? And what kind of research are they doing? Will they show you their outcomes data?
00:00:24
Speaker
Or are they just moving fast and breaking things? And I do think, right, again, the argument is like, Look, if we want to make real change, we have to move fast and break things. But human lives are a different story. So be very, very, very mindful about anything that's calling itself therapy, AI therapy. AI therapy does not exist.
00:00:51
Speaker
Yo, yo, this is Bare Knuckles and Brass Tacks, the tech podcast about humans. I'm George K. And I'm George A. Today, at long last, we have Dr. Sarah Adler, clinical psychologist from Stanford and also CEO and founder of Wave, which is an AI-enabled therapy platform. I'm going to be very careful with my words because, as she says in the episode, there's no such thing as AI therapy.
00:01:20
Speaker
We could not wait to have her on the show. The topic could not be more pressing and Sarah delivers. This was an incredible interview. So glad we made it happen.

Skepticism and Ethical Standards in AI Therapy

00:01:32
Speaker
Yeah, I mean, look, we came into this not really knowing what to expect. We were excited, but you and I are both very much on the same page that there have to be a lot of guardrails around any AI product that has psychological impacts on humans.
00:01:44
Speaker
And, you know, we came out swinging and she caught us real good. And I really took pleasure in her ethical approach to how she's building her business.
00:01:57
Speaker
The fact that she considers herself a clinician first and foremost, and that's what she holds herself to, that's the standard. And I think this is just a really inspiring and refreshing approach from someone who's using AI to found and build new technology that's not just purely for the sake of their own profit.
00:02:15
Speaker
And, shit, I don't know, it just gave me hope, because what we've seen in this market is just profiteers who are coming up with jank ideas, and this is actually real.
00:02:26
Speaker
Yeah. So listen in. We get a little nerdy on the model architecture, but that's what we're here for. Let's turn it over to Dr. Sarah Adler.

Risks and Ethical Concerns in AI Therapy

00:02:40
Speaker
Dr. Sarah Adler, welcome to the show. Thank you so much for having me. I can't wait to talk. Absolutely. This has been a long time coming. We've been very excited to have you here to talk about therapy, the role of technology.
00:02:56
Speaker
We're going to talk about AI because that's in the zeitgeist. And um yeah, so why don't we just start in the most obvious place, the current state, right? We live in a world where we have people saying things like, I use ChatGPT as a therapist.
00:03:14
Speaker
I am highly skeptical of LLMs in that regard, but we'll get into that later. So we have new terms emerging: AI psychosis.
00:03:25
Speaker
We have, unfortunately, a lot of headlines about a young girl whose name escapes me. We also have, most recently, Adam Raine. All people who were in distress, suicidal ideation, using chatbots as some kind of
00:03:43
Speaker
I don't know, talking companion or something. So I guess I want to just set that as the scene and then, like, let's just sort of see where you stand vis-a-vis that. And then we'll get into where you think maybe there's some nuance and some difference.
00:04:02
Speaker
Oh, yeah. So I think we are still in nascent, early, early, early days in terms of understanding how LLMs work and how they can be effective or not effective in therapy. First of all, I am very, very, very skeptical. I'm not anti-technology in terms of how it can support evidence-based therapy.
00:04:24
Speaker
But in terms of the chatbots that are out there today, I'm also a huge skeptic. And I do not actually believe that we are anywhere near where we need to be in terms of regulation, in terms of safety, in terms of: is this healthcare versus is this a direct-to-consumer product? So I think we're in super, super early, scary, scary days where technology is kind of advancing at hyperspeed. There are a lot of things that it says it can do that it can't do.
00:04:54
Speaker
And as always, anytime we sort of reach this intersection between technology and healthcare, we're kind of in a void and in a scary place.

Human Interaction vs. AI in Therapy

00:05:06
Speaker
Yeah, I think that makes sense. And also, for the benefit of our audience, right, you are a clinical psychologist. You are also at Stanford, which is, you know, in Silicon Valley. So you're right there at the nexus of these two things. And we'll get into it. I mean, I think also there's a lot to be said for, I don't know, let's call it what it is: the snake oil salesmen, the people who are literally just touting capabilities. There's a simulacrum of human conversation, therefore it must be a therapist? Like, no evidence to that claim, right?
00:05:44
Speaker
None. No evidence to that claim. And even more scarily, when we see that LLMs, especially like ChatGPT, are designed to keep the conversation going, which is, by the way, as a clinical psychologist, not how I was trained. My job was actually to do evidence-informed practice within a 50-minute period and then let the person take skills and generalize them outside of my therapy room.
00:06:06
Speaker
When you sort of ask, what is the objective of ChatGPT when it starts to get into a conversation? The objective is not the objective of the user. We see things like sycophancy. We see things like keeping the conversation going. There are lots of LLMs out there that are literally designed to engage, engage, engage, engage.
00:06:31
Speaker
Engagement is not inherently therapeutic. And there's a huge difference between keeping someone going, keeping someone talking, and actually having the ability to guide them towards their own goals and values.
00:06:47
Speaker
That's really interesting too. And I kind of like the whole approach. And first of all, I do have to say I was pleasantly surprised by what your approach is, because it is a nuanced way of thinking. And I believe I was perhaps mistaken, because you are in the space. You are in the technology space. You are trying to help utilize this stuff. And I think, you know, when you see that, you're like, oh, well, this person's trying to make money off this too on some level.
00:07:14
Speaker
And so... It's tough because you exist in a place where a lot of profiteering happens based on taking advantage of real human problems. And this is what's led to a lot of toxic things. Like, a lot of what social media has damaged in society is based on simple ideas of trying to make certain things efficient for people socially.
00:07:36
Speaker
So, I mean, that to me kind of leads to more of an empathy paradox. And, you know, for full disclosure, I'm nowhere near as qualified as you, but I do have an undergrad in psych. So at least I've read something in my life.
00:07:49
Speaker
I have to say, you know, therapy is fundamentally about human connection, right? And, you know, post-army, I went to it as well. And it's really about, in my opinion, the attunement between two people.
00:08:00
Speaker
Like, I know that with my therapist, who's been my therapist for almost 10 years, he's the guy. I met a couple; he was the guy that I trusted. And that kind of chemistry sticks with you.
00:08:12
Speaker
Even if an AI model can simulate that empathy, it doesn't actually feel it. So from a clinical outcomes perspective, how can we trust in an algorithm to deliver something as relational and intuitive as genuine compassion?
00:08:28
Speaker
Yeah. I think we can't right now. We're way too early to have that happen. But I'm going to throw something out there to think about it a little bit differently. It's like, what actually happens to you on a neurophysiological, neurochemical level when you are sitting with another person and experiencing that compassion?
00:08:50
Speaker
Understanding what is going on in your brain when you are sitting with someone who is reflecting, who is empathizing, who is listening, who is hearing you. There's a down-regulation that happens. There's a lot of really good research on this in the PTSD literature. You said you were in the army, or you were in service. Thank you for your service.
00:09:08
Speaker
And what we get a lot of understanding of in trauma research is that when you actually have another person sitting next to you, mirroring you, there's a down-regulation that happens. There can also be an up-regulation that happens in some cases.
00:09:24
Speaker
But the question that you're asking, I think is really important is can you get that from a machine? Can you actually get the real neurochemical change sitting and talking to a machine that you can with a human being?
00:09:36
Speaker
And the answer is, I don't think we know. Maybe. We might be able to get there. And we might not. We are certainly not necessarily there now. And so

Wave's Approach to Ethical AI

00:09:46
Speaker
I get that anyone who has been in real talk therapy with a person sitting next to them might argue you can't get that same neurochemical interaction that you get talking to a person.
00:09:56
Speaker
I think maybe we can. I just think we're pretty far away from understanding, labeling, measuring. And that actually is fundamentally the biggest problem in the therapy world, whether it be therapy with a bot, therapy with a piece of technology, therapy over video, or therapy with a human: we don't measure.
00:10:13
Speaker
We don't measure what is actually happening in the room. We don't measure outcomes. We don't measure. And to me, that's the biggest problem: you've taken this black box, crappy system where you could put a totally unqualified therapist in front of you and have it not work for you.
00:10:30
Speaker
But because there isn't a demand to measure outcomes and to measure what's actually going on in the interaction, we don't even have the tools to start thinking about whether or not a machine is better than a human. Because not all humans are great either.
00:10:41
Speaker
Let's be really honest. And when it comes to that measurement, the studies you were citing, you were saying that the studies on those emotional regulation mechanisms, that's a specific neurochemical study, right? It's not done on the regular. It's like, I am setting up this experiment in order to answer this question, right? So very small studies. Okay.
00:11:06
Speaker
I'm talking about what happens to human beings on a neurochemical level when they're actually in the room being attuned to. That attunement that George A. was talking about does something in the brain.
00:11:18
Speaker
It does something to allow you to feel seen, to feel heard. That actually has an impact on your cortisol levels, on your stress levels. It makes you more receptive to being able to facilitate change.
00:11:30
Speaker
Now, can a chatbot do that in the same way? We don't know. I would say no right now, but ultimately we're not measuring, we're not studying, we're not being rigorous the way we would be with a cancer or a diabetes treatment. It's ridiculous. So just to follow up on that, can we then also say that the rapid rise in loneliness in society, especially Western society actually,
00:11:59
Speaker
Could that be a reason why so many people are turning to LLMs instead of actually doing all the things that human beings used to traditionally do before even needing to go to a therapist, which usually just involves going outside, touching grass and making friends?
00:12:15
Speaker
Is it the loneliness? Community. Community, nature. I mean, yes, but I would 100% agree with you on the responsibility of social media and the polarization that we get in terms of the eco, ego chambers that we... echo chambers, my daughter corrected me. She's like, it's not an eco chamber, it's an echo chamber.
00:12:38
Speaker
That's different. It's an echo chamber that we kind of sit in where we're getting fed back to ourselves. And actually it's the same problem, right? In terms of sycophancy. We're getting the same information, our own views reflected back to us over and over in social media, which actually creates silos, which creates isolationist perspectives.
00:12:56
Speaker
The same thing happens on LLMs when you chat with them. The LLM is constructed to keep you going, keep you engaged, which is exactly what social media does, right? It's the same boat. So does it actually help you touch grass? Does it help you go outside?
00:13:10
Speaker
Have conversations like these, where we're mind-expanding and connecting through differing opinions? No. And it's actually a very, very similar problem to social media, right? It does the same thing to you. It keeps you in your silo. It keeps you thinking the way you're thinking. It doesn't challenge your ideas unless you specifically ask it to.
00:13:29
Speaker
I saw a hack, and I haven't tried this out myself, but I saw a hack for ChatGPT where it says, if you actually really want good advice, not therapy, that's the other thing.
00:13:40
Speaker
Therapy is not advice, okay? So if you want good advice from ChatGPT, ask it to give you advice on how to tell a friend something, because apparently that disconnects it from trying to keep you engaged.
00:13:55
Speaker
It changes the objective. It changes the idea. And so it will be less ass-kissy. It will tell you less of what you want to hear versus potentially being more helpful. Now, I haven't tried this myself.
00:14:06
Speaker
And also, again, advice is really different from therapy. Yeah, I really like that you brought up this neurochemical response, because I think, one, a lot of us are knowledge workers. We sit in front of screens all day and we're just sort of out of our bodies, and we forget that the hardware in our heads, which is about 100,000 years old, is designed for face-to-face interaction, right? I always joke that at the end of the day, I can actually pay closer attention on traditional phone calls than through a day full of Zoom calls, because, as the technology critic L.M. Sacasas put it at the beginning of the pandemic, I cannot look you in the eye
00:14:51
Speaker
and see your facial expression at the same time, right? I have to look directly into the camera like this to look you in the eye, but then I can't see your face. So I'm not getting any of the things that my brain has evolved to get.
00:15:03
Speaker
And also, I love that you brought up this point about what is the optimization? The optimization is for engagement versus the time-bound part of therapy, right? And I think um the other...
00:15:15
Speaker
teenager who tragically died, Sewell Setzer, who was interacting with Character AI, which was very much this fabulous conflagration that is designed to draw you into something and keep you there for as long as possible.
00:15:31
Speaker
So I want to turn the conversation to the technology company that you have also founded to address some of these issues, WaveLife, and not necessarily to give WaveLife the commercial, but what really stood out to us is that you talk about ethical AI with respect to this. So now that we've kind of set some parameters around this discussion and you've talked about the rigor and the need for measurement, I am keen to understand, like, how you are approaching it.
00:16:04
Speaker
What does this ethical AI system look like? And then we can turn the conversation there.
00:16:11
Speaker
Yeah. So first of all, we think there is massive, massive potential um for AI in the future in terms of a product layer, so to speak, that can be built on LLMs, that can refine and train.
00:16:24
Speaker
We believe that fundamentally, and we think we're just scratching the surface in terms of getting there. Wave uses AI, first of all, currently always with a human in the loop.
00:16:35
Speaker
And so you've heard that term: there's always a human being who is monitoring the AI, checking the AI, ensuring that the AI agrees with clinical judgment.
00:16:47
Speaker
And so what that allows us to do is train systems better and more ethically by saying we're still so early on and we are developing these massive data sets to be able to do really fucking cool things.
00:16:59
Speaker
with the AI eventually in terms of personalization, in terms of potentially um giving you an experience that might be um not better than a therapist, but not all people need therapy at all times. So there's real massive potential there.
00:17:16
Speaker
But in our grounding of these models, in our building of these models, we are using real data, real clinicians, and um real transcripts.

Data-Driven Individualized Care Pathways

00:17:26
Speaker
to actually create something that has to have at least 99% agreement with a human being before it ever touches a user.
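To make the "99% agreement" gate concrete, here is a minimal, purely illustrative sketch in Python. This is not Wave's code; the schema, field names, and the way the threshold is applied are assumptions drawn only from the description above.

```python
from dataclasses import dataclass

@dataclass
class ReviewedSuggestion:
    """One AI-generated care suggestion plus the reviewing clinician's call (hypothetical schema)."""
    suggestion_id: str
    ai_pathway: str          # e.g. "psychoeducation", "coaching", "psychotherapy"
    clinician_pathway: str   # what the human reviewer would have recommended

def agreement_rate(reviews: list[ReviewedSuggestion]) -> float:
    """Fraction of reviewed suggestions where the AI matched the clinician."""
    if not reviews:
        return 0.0
    matches = sum(r.ai_pathway == r.clinician_pathway for r in reviews)
    return matches / len(reviews)

def can_release(reviews: list[ReviewedSuggestion], threshold: float = 0.99) -> bool:
    """The gate described in the episode: nothing reaches a user unless
    clinician agreement on the reviewed output meets the threshold."""
    return agreement_rate(reviews) >= threshold
```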
00:17:33
Speaker
And so when we're talking about ethical AI, we're saying, yes, we're building the foundational layer to create something that could actually be a lot more efficient and effective for lots more people. And I'll tell you why I think that's really important.
00:17:47
Speaker
Um, Right now, we have sort of a one-size-fits-all model for psychotherapy. You have a problem. You need to see a therapist or a psychiatrist, right? Like that's pretty much the gold standard is that your doctor, if you go to your primary care physician, they're going to say, go see a therapist, go see a psychiatrist. First of all, therapists and psychiatrists are two very, very different things and do very different things.
00:18:07
Speaker
Not everyone actually understands that. So we're talking about a very undereducated consumer, which is a huge problem and a huge bummer. We fundamentally believe at Wave that in order to equalize, to democratize, access to mental health care, we need to understand, on a precision level, who belongs in what care pathway at what time. So George, you might need, which George am I talking to? George K, you might need skills, skill building, meditation. You might need advice, answers to questions, a specific spot solution.
00:18:39
Speaker
Whereas George A might actually need individual psychotherapy to process a traumatic event or something else. So people need different levels of care at different times. And we need to be measuring constantly to understand how to direct the right people to the right evidence base at the right time.
00:18:57
Speaker
And ultimately, that's what we do at WAVE. We've always done that at WAVE from the very beginning. Before Gen AI existed or when it was just starting off, we knew we were this stepped care model that could give psychoeducation to people who needed it, could give health and wellness coaching to those who needed it, could step people up to a psychiatrist or a psychotherapist when they needed it.
00:19:17
Speaker
But understanding who needs what, when, is fundamentally at the heart of what we do. Now, Gen AI has kind of been the super juice, the green juice, as I would say, to that, because it's allowed us to understand those precision pathways much, much, much faster.
00:19:33
Speaker
Because we can use people's language. We can use people's conversation. We can use the data that they give us to feed into those algorithms and to make suggestions about this is the content you need or this is the care pathway that you need.
00:19:47
Speaker
But we combine that with 15 years of research about understanding what evidence-based pathway should get matched to the right person. um Okay, so I'm going to nerd out for a second here.
00:20:02
Speaker
Please. I'm going right after you. Go ahead. Yeah. So this is where I get nerdy, folks. Okay. So it sounds like, before generative models came out, if I'm understanding you correctly, what you were doing with machine learning was trying to determine what are the signals that would help,
00:20:23
Speaker
I guess classifiers decide like, if you see this signal, the evidence suggests this is the personalized pathway for this person, right?
00:20:34
Speaker
And you said the green juice being Gen AI, instead of using, let's say, for example, some kind of numerical assessment or something else that could be basically just mathed,
00:20:47
Speaker
Because Gen AI models, at least, let me, sorry, let me be very specific: transformer models gave you the power to understand context in natural language.
00:20:58
Speaker
So you sort of stepped up the natural language processing element. So you're like, oh, I'm hearing cues in this last therapy call. And it's helping guide, like, it's not just a recommending engine, it's a classifier saying, this is the most likely next step. But the end state is just more personalization.
00:21:18
Speaker
Right. Like, as you said, I may be at a point in my life where I do need the one-on-one psychotherapy. I kind of get through that hard time, and then maybe it ladders down to something else or different.
00:21:30
Speaker
So am I understanding that correctly at the technical level? That's exactly right. And I would say that we actually do, we call it a recommendation engine, and we've always called it a recommendation engine. Before Gen AI existed and was in the zeitgeist, we called it a recommendation engine. And we used natural language processing. We used, I mean, AI, right?
00:21:49
Speaker
Like back-in-the-day machine learning, regression. So I call it sexy AI versus unsexy AI. We were all in on the unsexy AI. We used machine learning. We used natural language processing.
00:22:00
Speaker
We took the evidence base, but we were so limited by the mathiness, right? By the constraints. And yes, LLMs have allowed us to, actually, interestingly, we don't just use LLMs. We still use all of that 15 years of research.
00:22:16
Speaker
And combine it. It strikes me that you would have to chain together a lot of different models. Yes. And my CTO and I actually had, because he's a hardcore technologist, you might have had a more interesting conversation with him because he's very, very, very bullish on all of this.
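As a rough illustration of what "chaining together a lot of different models" could look like, here is a toy Python sketch that combines a structured assessment score with a language-derived risk flag and routes to a care pathway. The feature names, thresholds, and pathway labels are invented for the example; this is not Wave's recommendation engine.

```python
from dataclasses import dataclass

@dataclass
class IntakeSignals:
    """Hypothetical inputs to one routing step."""
    phq9_score: int            # structured depression screen, 0-27
    risk_language: bool        # flag from an upstream language model or keyword screen
    prefers_self_guided: bool  # stated preference from intake

def route_care_pathway(signals: IntakeSignals) -> str:
    """Toy precision-routing rules: highest-acuity signals first, then
    step down to lighter-touch options. A real system would learn these
    boundaries from data and keep a clinician in the loop."""
    if signals.risk_language or signals.phq9_score >= 20:
        return "escalate_to_clinician"
    if signals.phq9_score >= 10:
        return "individual_psychotherapy"
    if signals.prefers_self_guided:
        return "psychoeducation_content"
    return "health_and_wellness_coaching"

# Example: moderate symptoms, no risk language -> psychotherapy suggestion,
# which would still be reviewed by a human before reaching the user.
print(route_care_pathway(IntakeSignals(phq9_score=14, risk_language=False, prefers_self_guided=False)))
```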
00:22:32
Speaker
But he's great too. He and I had these massive arguments about a year ago, when this stuff all started coming out, about how we were going to use it. And he was like, we don't need the last 15 years of research. We don't need all of that stuff because LLMs are going to be able to do it by themselves. And I was like, no, they're not. And it turns out, once in a while I was right, on this one at least: you need that human expert.

Ethical AI Development and Liability Issues

00:22:54
Speaker
We're starting to see this start. If you look at the job announcements right there, OpenAI is trying to hire thousands and thousands of psychotherapists right now to do that labeling, to do that expert
00:23:08
Speaker
modeling, to do the retraining of their models in a way that they can drive safety. You need that human layer at this point because the models are learning from social media. The models are learning from the internet. The models are learning from all the written material out there.
00:23:22
Speaker
Transcripts of human interactions are incredibly hard to find and are a huge commodity. We have them because we've always recorded them. We've always recorded. And because we measure everything, we know that when you come in for eight sessions and we see a downward trajectory in your symptoms, it's been effective for you.
00:23:43
Speaker
We know why. We have those data and can feed them back into the model and train it. Now, unless we have 99% agreement with a human being like me, we're not using that to push anything out to the public and to our users.
00:24:00
Speaker
And that's what I mean when I say ethical AI. Now, eventually, I do actually see a world where our models get good enough so that the dependence on the human in the loop decreases and the people who can be treated with content, with technology, with interaction probably expands and the reliance on the human being becomes less, which is awesome in some ways because, again, the reason my company is called Wave is that we look at mental health as a sine wave.
00:24:29
Speaker
I can't. No, I mean, it's funny you should bring that up. Literally before this recording... I always describe my own depressive episodes as a wave. Like, I always talk about how the swell can kind of come out of nowhere after months of balance and calm.
00:24:45
Speaker
And I know I have a lot of people in my life who I think are trying to, like, solve my depression. And I have also learned over the years that sometimes I just have to ride the wave. Right? It's a thing that's to be managed. It's not like a switch where suddenly I'm not depressed anymore. So I totally understand the wave metaphor for sure.
00:25:07
Speaker
And to be quite honest, you know, I'll just make an editorial comment about your process, which I think hopefully all your listeners who struggle with depression can learn from. That acceptance piece, of knowing that this will pass and that this is a wave, is possibly one of the most dramatic things you can do for yourself. When we get stuck in that mindset of, I can't get out of this, I'm stuck, this is horrible, this is horrible... being able to take that step back and say, this is just a wave right now, I'm down here,
00:25:37
Speaker
but soon I'm going to come out of it, is an incredibly powerful tool. Nothing to do with AI. But let's say you're starting to notice that you're going down toward the trough of the wave and you're hitting that depressive episode, and we actually have biometric data to support that. You have an Oura Ring or something where we're starting to see it.
00:25:59
Speaker
We can actually then reach out to you with our recommendation and be like, hey, we're noticing some stuff. Here's some content for you. Yes, surfaced by the AI, but actually reviewed by your human helper to make sure that it's accurate and accessible for you.
00:26:16
Speaker
Maybe that helps you. Maybe that keeps you from going down as deep. And so that's where we really see the power of technology and the power of AI: to super-help. Or we say, hey, do a check-in, like, tell us what's going on. And you speak into your phone and you tell us, hey, there was a trigger, this is what's going on, these are the physiological symptoms I'm noticing.
00:26:36
Speaker
And we say, great, you know, we let your coach know and we're gonna send you some content that potentially helps. If it does, great, be on your way. If it doesn't, there's a human being if needed. And that's the kind of stepped precision care that we think AI can actually be incredibly helpful for.
00:26:52
Speaker
Because it allows you to extend the capacity of that human being by matching you onto those markers.
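Here is a minimal sketch of the stepped, human-reviewed check-in loop described above: signal in, AI-surfaced content, human approval, fallback to a person. The signal names, content IDs, and mapping are hypothetical and exist only to illustrate the shape of the flow, not any actual product code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    content_id: str
    rationale: str
    approved_by_human: bool = False

def surface_recommendation(signal: str) -> Recommendation:
    """Stand-in for the AI step: map an observed marker to content.
    The mapping is invented purely for illustration."""
    catalog = {
        "poor_sleep_trend": Recommendation("sleep_hygiene_module", "wearable shows declining sleep"),
        "low_mood_checkin": Recommendation("behavioral_activation_module", "self-reported low mood"),
    }
    return catalog.get(signal, Recommendation("grounding_module", "no specific marker matched"))

def human_review(rec: Recommendation, coach_approves: bool) -> Optional[Recommendation]:
    """Human-in-the-loop step: the coach either releases the suggestion,
    or blocks it and the case goes to a person instead."""
    if coach_approves:
        rec.approved_by_human = True
        return rec
    return None  # fall back to direct human outreach

released = human_review(surface_recommendation("poor_sleep_trend"), coach_approves=True)
print(released)
```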
00:27:03
Speaker
Hey listeners, we hope you're enjoying the start of season four with our new angle of attack, looking outside just cyber to technology's broader human impacts. If there's a burning topic you think we should address, let us know.
00:27:16
Speaker
Is the AI hype really a bubble about to burst? What's with romance scams? Or maybe you're thinking about the impact on your kids, or have questions about what the future job market looks like for them.
00:27:28
Speaker
Let us know what you'd like us to cover. Email us at contact at bareknucklespod.com. And now back to the interview.
00:27:40
Speaker
Yes, I think very much like the ideal state being that like AI is very good at spotting signs of lung cancer, for example, earlier than your average radiologist.
00:27:51
Speaker
This doesn't take away the need for oncology. It just gets you to a care plan better and faster, right? Rather than like, let me wait until it's stage three before, you know, detected by human eyes.
00:28:03
Speaker
Which is really different. Yeah. Than pretending you're a therapist on a chatbot, like saying, oh, I can help you, teenager who's suicidal. Let me help you by demonstrating the right rope to hang yourself from. Right. Like, that's not ethical.
00:28:19
Speaker
That's not cool. That is not safe. Oh, my God. Yes. Sorry to get graphic on you. But again, we're having an adult, serious conversation. So actually, thank you for that. It's a good example of a real thing that could happen.
00:28:32
Speaker
That did happen. Yeah, and it did happen. Oh, yeah, that's right. Sorry. Jesus Christ, that's a derailing comment. I'm sorry. No, no, no. When it really happens, it's a shocking thing. I mean, we're folks with empathy on here, and that deserves it.
00:28:50
Speaker
You know, there was a lot to unpack there. And to kind of take that digression for a sec, my conclusion from using this stuff, and having to spend the last two-plus years figuring out how to securely implement it in technology at work at a social media platform, is actually...
00:29:07
Speaker
The one conclusion that I have after all these pitches and all this marketing is that AI, even agentic AI, whatever, put it in whatever program you want or whatever server you want.
00:29:22
Speaker
AI in its current state, first of all, is still based on learned language, learned data. It requires a core set of data to actually train on before it provides your results.
00:29:33
Speaker
What AI does really well is transactional questions, where if you provide it a context, the scenario, and you're trying to figure out an output based on variables that you explain to it, it'll provide you that.
00:29:47
Speaker
So if you're going to use it in business, if you're going to use it in life, that's the best way to use it, because that's the most accurate way to use it, in my opinion. What I really have concluded, though, is that there's one thing AI can't defeat in terms of human beings, and why I think this whole workforce-replacing thing is going to backfire on these companies: they've invested in all this AI and all these data centers and all this power generation and fired a bunch of good people.
00:30:13
Speaker
And then the AI can't actually replace the capability. Because at the end of the day, the one thing we as human beings have over the AI is the power of our imaginations. Right? The power to listen and visualize a scenario and actually have that attuned sort of connection. That's the one thing I will state on that.
00:30:34
Speaker
And kind of the other part of it is, you know, we have to look a lot, like you talked about, at where there's potential for consequences if the right guardrails aren't in place. And I think guardrails are a really important conversation if we're going to talk about AI assisting human therapists in delivering more accurate, better therapy.
00:30:53
Speaker
Right. So, you know, we have to think, and full disclosure to the audience, being the nerdy, detail-oriented CISO I am, I definitely did look at your company's privacy policy and terms of service beforehand and ran my own risk assessment on it.
00:31:07
Speaker
And there was a whole line of questions that, I think, is more for if she ever wants me to come consult; it's not meant for being on air. But I was trying to understand, OK, what is this thing that you built and where are the actual risks with it?
00:31:21
Speaker
And I think at the end of the day, I have to ask, you know, for example, about the bad advice piece. If an AI system gives harmful therapeutic advice, say it misinterprets a crisis statement, we have to look at the issue of liability and accountability.
00:31:36
Speaker
Who is ultimately accountable? Is it the developer, the therapist overseeing it, or the institution deploying it? Like, how do we operationalize accountability in such a diffuse system?
00:31:50
Speaker
Yeah, I think that's a great question. I am super open to reevaluating. We're actually currently in the process of reevaluating our data sharing and liability. And what you're pointing out is so important, because we are in a really, really gray area. I will tell you, in terms of who is morally and ethically responsible: all three of those entities. Who is legally responsible? I don't know, because the law is still super gray about that. And so there's going to be case law to figure that out. But I will tell you, if something happens to one of my patients, I will get sued, the company will get sued, and the developer will get sued. So all three of those parties will get sued. So the question of who's actually liable versus what the law actually says, that's a whole other podcast on healthcare law and the gray area there.
00:32:45
Speaker
If you ask my attorneys, they'll be like, yeah we're in a massive gray area right now. But you know who is going to be the first company to figure that out or defend against that is going to be OpenAI, right? We've already seen that with Anthropic and OpenAI and the lawsuits that have happened with these cases that we've been talking about. Who is ultimately responsible?
00:33:03
Speaker
But I'll tell you, everyone is going to get sued. I will also tell you that that's part of the cost of doing business. I am a clinical psychologist. I care deeply, deeply about every single user on my platform. I know that in order to bring this kind of product to market, I have to take venture dollars to do it. And I have to promise them a return on their investment. So there's a really huge tension between making money and paying back my investors, and the actual clinical work I'm doing.
00:33:33
Speaker
Now, the reason I came out of academia is because we pretend that there is no investor there, but really there are financial disincentives in academia as well that push you towards decisions.
00:33:43
Speaker
But it's slow. And I wanted to help more people faster. So I am constantly, constantly asking myself those questions. Like, we're not move-fast-and-break-things. We're not Meta.
00:33:54
Speaker
This is not building the airplane and flying it while you build it. I think Reid Hoffman said that, right, when he was building LinkedIn, about startups. With healthcare, you have to be methodical. You have to do things with intention, and you have to document the hell out of it to make sure that when, and it's not an if, it's a when, you are dragged into court because some adverse event happened, you can say, honestly, we did the best we could
00:34:20
Speaker
with the knowledge that we have. So I agree with you. There's a real bind there. And I'm sure our privacy policy and our data policy need to be updated, because I don't think we've updated them since before Gen AI happened.
00:34:35
Speaker
And so you're pointing out something that's real. Yeah. At the same time, if I were still doing my research in academia, we wouldn't see these levels of being able to help people in the same way for 10 to 15 years.
00:34:50
Speaker
And maybe that's OK. Health care is inherently, inherently slow. But then what that allows is bad actors to come to market much faster than me and to saturate the marketplace with things that don't work.
00:35:02
Speaker
So I don't know. I believe in what I'm doing. I will say, first of all, George and I talk to CEOs every week, and we see all of this in all the headlines. And that is so refreshing from a founder and CEO of an organization that's venture-backed. So you do have a responsibility to make money.
00:35:23
Speaker
But that was one of the most honest and genuine responses to a somewhat critical question about the core concept and risks of your product. So I really am thankful for how honest you are about it. Like, again, I don't know your company. We're not here necessarily to straight-up promote it.
00:35:40
Speaker
I'm dealing with the integrity of the individual in front of me, who is the most representative person for this technology and for this company. Well, because you're the CEO, you're the face of it.
00:35:51
Speaker
So I wish, I hope, more people listen to this and see it as an example of what good technology leadership looks like in the startup space, because I think we need more honesty.
00:36:07
Speaker
And I think listening to someone like you being as honest as, hey, I acknowledge there's a risk, I acknowledge that this is going to happen. Mm-hmm. It gives me some degree of comfort that we're not just dealing with psychopaths who are trying to wheedle our money out of us, who don't care if we die and just want to go off to their island and hang out in the Caymans. So thank you for at least giving us an honest response.
00:36:31
Speaker
And I do hope that you guys do figure it out. But George, I'm blown away at just, like... there's a real founder who's, like, not psychotic. We found one. I'm just kidding. Founders.
00:36:41
Speaker
Just

Clinician Leadership and Data-Driven Care

00:36:42
Speaker
kidding. I will say one thing, though: I'm a clinician first. I'm a clinical psychologist. My license is on the line, right? And we don't have a Hippocratic Oath in clinical psychology, but we do have an ethics code.
00:36:58
Speaker
I'm doing this because, and it sounds super hokey, I want to increase and democratize high-quality care, because most high-quality care goes to people who look like me: white, high-SES women and men.
00:37:14
Speaker
And the reality is there are so many people out there who need high-quality, evidence-based care, and technology is a way to do that. And so I'm trying to embrace that. And yeah, the easiest, fastest way to do that is to take venture dollars so that we can get to market quickly. So it's real. It's a real tension. I also want to say I think there needs to be more clinician leadership, because, as I love to say, it is really, really, really hard to teach the business folks the clinical side; it is really easy for us to learn the business side. The financial aspects are super simple and super easy.
00:37:54
Speaker
They can't learn the clinical side. So more clinician founders, more clinician leaders, more clinicians at the table, and I think you'll hear a level of integrity that's real. I love that you said you have a license, right? And this is one of the things that I bring up all the time with the chatbot community: no, there's a reason you have a medical board, and there's a reason not everyone can just hang a shingle and be like, come in for therapy, $5 a session, right?
00:38:21
Speaker
Because we as a society have recognized that there is a level of care that is required. And also if we didn't take that level of care, there would be,
00:38:33
Speaker
great harm. I mean, everything from the same reason you have a realtor's license, you know, to house inspections. So there's that note. The thing that I wanted to return to is, at the beginning of the conversation, you talked about this rigor of measurement: we have studies that have looked at neurochemical response and stuff, but they tend to be highly academic, maybe with smaller sample sizes, and not a feedback loop that's in the treatment.
00:39:01
Speaker
Now, in the previous answer, you talked about biometrics, the Oura Ring, stuff like that. Can you talk a little bit about the role that that plays? Because our roots are in cyber, and we're a naturally skeptical bunch. It just strikes me as a lot of data coming into a system. And I understand your intentions are good, but I just want to also understand where that measurement is taking place.
00:39:25
Speaker
Yeah. So right now we have a patent pending, and I do see ourselves as a data-driven care company in the long run. So I think that's real, but again, it has to be done methodically, intentionally, with rigor, with scientific rigor behind it. So in the future, I see a world where we're connected to whatever data you want to give us. The same way that, if you look at the Apple Health app right now, you have an opt-in. You can choose to link your Oura Ring to your iPhone in this way. You can choose to link your bed or whatever layer of technology you're using.
00:39:59
Speaker
And ultimately, I would love to be able to harness that data if you, George K., are happy to give it to me, and if you understand the risks involved in giving it to me. Not just the risks in terms of data breach, right? That's a whole different basket. But in terms of how we're going to use the data and how it's going to enter the feedback loop. But again, until we're at that 99 percent
00:40:20
Speaker
of agreement between the clinician and the technology, you're never going to see that. But do we see a world where we can harness all of that data to really personalize care in an effective way? Absolutely.
00:40:32
Speaker
But I think the initial vision for this came from sitting as a postdoctoral fellow at Stanford doing a care delivery model fellowship. And we stumbled upon the algorithm that drove, I think it's eHarmony, which, I don't even know if they still exist. It's an old matchmaking service, like early Tinder, but for relationships, not just for sex.
00:40:54
Speaker
And I think what was really fascinating to me, as a concept that drove us, is that the more you sat there and answered questions, the better your outcomes would be in terms of a match. So you could do the cursory thing, I'll answer 10 questions and get matched with a whole bunch of people.
00:41:08
Speaker
But the more information you gave the system, the better the match would become. And ultimately, it was voluntary. It was volitional. It was your choice, and you understood the risks and the benefits.
00:41:21
Speaker
And ultimately, that was sort of the impetus for all of this: the more data that you give the system, the better we can personalize your care. And I also want to be really clear, it's not static data either. Yeah. It's not just the data you give me today. It's the data you give me next week, because you're in a different place on the wave next week, right? So it's a little bit dynamic. It's very dynamic.
00:41:44
Speaker
In terms of what we're doing today, which was your question, we do self-report data. We do journaling data. We do, if you want to hook up to it, different kinds of mood logs. So we do semi-structured data, we do unstructured data, we do language data. However you want to interact with the app, we will be taking your information and feeding it into the algorithm to make personalized recommendations.
00:42:09
Speaker
But more importantly, feeding it to the human who's in charge of you, who's in charge of your case, so that that human can make sure that all the technological data, the recommendation, is actually working properly.
00:42:20
Speaker
So as much data as you'll give us, we'll be happy to take. But we want you to know that, kind of similarly to eHarmony, the more you give us, the more personalized care we can give you. But it's totally up to you.
00:42:32
Speaker
And it's ultimately your decision in terms of what you're comfortable with, which, by the way, is identical to how I would behave in a therapy room. Mm-hmm. Right, right. The more information you give me that you're comfortable giving me, the more my algorithm and my brain can conceptualize and use evidence-based care to personalize CBT for you, to personalize ACT for you, to personalize the intervention that I'm going to give you. So it really is trying to model not just, as George A pointed out, what the LLMs are learning from,
00:43:04
Speaker
the body of text they're trained on, but also a therapist's brain and a coach's interactions, real interactions that drive results

Biometric Data and Privacy Concerns

00:43:13
Speaker
and outcomes. Now, when you say data to people like George and me, the red flag that goes up is, like,
00:43:23
Speaker
You know, I was an early adopter of BetterHelp, for example, because I just couldn't find somebody locally. I'm sorry. Yes. And then. Yeah. Bummer. And then to find out, same shit, different day, they just started selling it off to advertisers. Like, my heart was broken. I was like, God damn it. You know?
00:43:41
Speaker
And so, but I understand there are financial incentives. I understand that. Anyway, I guess I want to ask: with the amount of data that you may be ingesting, and data as a requirement for modeling, where do you see guardrails that you can put in place such that there isn't a financial incentive to, you know, just make it part of the attention economy again?
00:44:06
Speaker
Well, there is. There is a financial incentive, but we're not a direct-to-consumer product. And BetterHelp is. I know a lot about BetterHelp. I know the CEO and founder, I know what he's doing now. He's doing a new company called Strawberry.me, which is a coaching company that is entirely outside of healthcare, because he doesn't want to be burdened by the guardrails of healthcare. And so he's doing something different.
00:44:31
Speaker
So I know him; his kids went to school with my kids. It's the small Silicon Valley world. It's a choice that you make. He's a direct-to-consumer guy. The guy who founded Slingshot is the founder of Casper Mattresses. These are direct-to-consumer people.
00:44:46
Speaker
I'm a healthcare provider. So I ultimately believe in selling to insurance companies and selling to employers. And I'm telling you, I will never sell your data. Yeah. Are the financial incentives there to do it?
00:45:00
Speaker
Yes. There's plenty of money to be made for my investors through other sources. And that's just not a playbook that we would ever care about. In terms of the guardrails on that, again, my whole team believes in that. It's about hiring people who are not incented in that way. We're healthcare people.
00:45:20
Speaker
And healthcare, again, there are horribly perverse financial incentives in healthcare as well, don't get me wrong. They're a different beast, but it's not about selling your data. Great. Thank you. We'd shut down before we sold your data. That doesn't help anyone. Yeah. And I think, so for me, actually, it's funny because you brought up accessibility, and that was kind of something I wanted to ask you about as well.
00:45:41
Speaker
Knowing that, you know, I think a big thing with getting therapy is the financial pitfalls and costs. You know, in Canada, we're a little bit better off because we have public healthcare, but consecutive conservative governments love defunding that, and that's a separate conversation. But, you know, we do have options, and care is available.
00:46:06
Speaker
A lot of folks, if they're not on a private health plan, and especially considering some of the issues you guys are facing right now with your Medicaid practices... I think, you know, it comes down to this: even if AI can make therapy more accessible, because I think people want therapy and they can't afford it, or they don't know how to find it, or they don't know how to pick a doctor or pick a clinician.
00:46:27
Speaker
So they just go to the AI because it's private. It's what they know. Or they think that it's private; they don't know any better. But, you know, even if this AI revolution can make it more accessible for people, are we then at risk of normalizing the replacement of a human therapist rather than augmentation?
00:46:48
Speaker
Like, in other words, if insurers or clinicians find AI cheaper, what's to stop them from sidelining human therapists altogether? Because profit is king. And I would argue that the day an AI therapist is better than me, they should.

Measurement Culture and AI's Potential

00:47:08
Speaker
Like, I want to be really clear, but this takes rigorous measurement. And I am a believer in that. If that day comes where a bot can do what I can do better, faster, cheaper, put me out of a job.
00:47:19
Speaker
Like 100%. Do I believe that that day is anytime soon? No. But I'm not an anti-technologist in terms of saying that like, hey, if it does better than I can do, do it.
00:47:30
Speaker
Right now, should there be a human in the loop making sure that that system works? Yes, always. But will that potentially reduce the need for human capital? And I'm sorry to say human capital, because that's not very empathetic. Would that reduce the need for human therapists, or would it allow us to... talk about the Canadian healthcare system, like, oh my gosh, so, so inspiring in some ways.
00:47:51
Speaker
Right. In terms of allowing me to work at the very top of my license, right? To make the decisions that only I can make, that an AI can't make, right? Stepping up to care in that capacity. So I would say, normalize? I don't care about normalizing it or not normalizing it. I care: are you feeling better?
00:48:11
Speaker
And I care, like, are you getting better? Are you feeling better? Are you living the life you want according to the values that you have? But ultimately, if a bot can do better than I can, why shouldn't it?
00:48:24
Speaker
As long as there's a human being making sure that it's working properly. And so if we go through the scientific method of surveys and testing and actually seeing what the results are, based on structured clinical programs where a certain set of patients are going to self-articulate what the results are, which I think we are at least five or 10 years away from.
00:48:46
Speaker
We really can't have that conversation yet, right? Because you need to have the metrics and the variety of multiple studies, taking multiple groups of patients from different, like you said, racial and economic categories, and seeing if we can find a way to make this effective, right? Because again, replacing the human is possible, but there's the actual science and nuanced statistical study that has to occur to make it happen. And this is where I fear our rush for profiteering won't let this happen.
00:49:17
Speaker
There are years of research that still need to happen. And I'm not you; I'm nowhere near as qualified as you. But from the little bit I know about dealing with scientific research, yeah, we are years away from being able to pull these studies off.
00:49:30
Speaker
But here's the thing. So here's the thing. The second thing I did after incorporating my business was to get an external IRB, an independent review board, so that every single thing that I could do could be tested and studied.
00:49:43
Speaker
We built our technology infrastructure on measurement-based care, so that for every single thing we are measuring, we're giving information not only to you as the user, but also to the provider. We're collecting data. It is a culture of measurement that we have developed, created, instilled, and built into the backbone from day one.
00:50:01
Speaker
And we are also publishing, because you can do both at one time. Now, it does mean that you move towards marketability more slowly. I have a very good friend who has a company started at the same time as mine, who has raised $150 million in a totally different space, who is providing care, providing care, providing care, and is not doing research on it. It has allowed her to grow much more quickly.
00:50:23
Speaker
But I fundamentally believe that in this space, in mental health, you have to be measuring and publishing as you go along. We actually have a peer-reviewed publication on a controlled study that we did just on our users. We had them opt in. We compensated them for their time. And we have an outcomes study that's going to be published. It's actually coming out in three weeks, I think you'll see it,
00:50:44
Speaker
based on our efficacy. But it isn't standard practice in mental health at all to measure any outcomes. Like BetterHelp: they're not measuring their outcomes, right? No one knows.
00:50:56
Speaker
Because it takes an incredible amount of implementation science and culture to measure those sorts of outcomes and then report on them. It can absolutely be done, but you have to build it from day one. And that's a very hard thing to do. I spent...
00:51:10
Speaker
seven years implementing measurement-based care at Stanford. When I started this project at Stanford, 11% of clinicians at Stanford were measuring any sort of outcomes whatsoever, okay?
00:51:25
Speaker
It took me six and a half years to get that number up to 88%. That's not because the technology didn't exist. It's because it's really, really hard to get providers to actually buy into measurement without demonstrating its value.
00:51:37
Speaker
So when I started this company, I built in measurement-based care. It's a huge differentiator, a huge part of what we do. So we can show, because I fundamentally believe, you can ask my team, they're sick of me saying this, that if I can't show that what I'm doing is working in a scientifically rigorous way, I should not be doing it.
00:51:54
Speaker
I mean, what a novel concept. Yeah.
00:51:59
Speaker
But it's crazy. Yes, I mean, and I mean, every time I read an article about somebody using a chatbot for therapy or something, I feel like I'm taking crazy pills.
00:52:10
Speaker
Like, or that they're just... I mean, our last episode before this one was, you know, are we going to solve cancer or are we just going to double down on the attention economy? Because they came out promising LLMs would help climate research and do all these things, and they're like, you know what?
00:52:25
Speaker
Here are some weird-ass video memes you can generate on the fly. Cool. Like, thanks. Thanks for that. Well, and I think it also goes to George A's point: all these people who are firing their employees who are subject matter experts need them back, because the reality is, in order to create technological tools that are advanced enough to drive the outcomes, you have to have human beings who know what they're doing.
00:52:51
Speaker
You cannot rely on the LLMs to just have learned that themselves. You have to have really good prompt engineering. Right. You actually have to have someone like me who's telling the LLM, or training the model, to think like I think.
00:53:05
Speaker
Again, that's back to why OpenAI is now trying to hire. Yeah, you have to have extremely qualified reinforcement learning from human feedback, because your humans are dealing with a data set that's extremely rarefied.
00:53:18
Speaker
Yeah, shout out to Deloitte Australia, which is refunding the government after issuing them a consultancy report laced with generative AI errors and made-up shit. So, as we, man,
00:53:31
Speaker
Sarah, we could go for days, but we can't. So I want to turn the question now back to our audience. What questions would you tell our listeners to ask when they are being sort of promised pie in the sky, AI therapy stuff? Like what can you arm them with as kind of a litmus test of like real or not real? Like what would you have them...
00:53:55
Speaker
I mean, anything that I say is going to get... and I have no power over the attention economy, so I'll do my best. But first of all, is there clinician leadership in the company? Investigate, especially Gen Z. Like, you all are the Yelp, you are the double-click generation. You're researching these products. Go look.
00:54:14
Speaker
Who is funding it? What else have they funded? Who is leading the company? Is there a C-level clinician on board? And, I mean, really, it doesn't matter; you can put a C-level clinician on board without listening to a word they say.
00:54:29
Speaker
What kind of research are they doing? Will they show you their outcomes data? Or are they just moving fast and breaking things? And I do think, right, again, the argument is like, Look, if we want to make real change, we have to move fast and break things. But human lives are a different story. So yeah, don't move fast and break humans.
00:54:47
Speaker
Right. And although, you know, yeah, don't move fast and break humans. Exactly. We're not just things. And I think also: be very, very, very mindful about anything that's calling itself therapy, AI therapy. AI therapy does not exist.
00:55:04
Speaker
We've seen really awesome legislation in a couple of states saying you can't say that. Yeah. Which is great. And hopefully we'll move towards more of that, but there is no such thing as AI therapy.
00:55:19
Speaker
It doesn't exist right now. There are bots; the really good ones aren't going to call themselves therapy. They'll say who they are. They'll say what they're doing, to George's point, right? It's like, how transparent are they being in terms of explaining where they are and what they're doing? And is there integrity there?
00:55:34
Speaker
Again, I'm screaming into the void, but I don't know. Hopefully it's helpful. Well, Dr. Sarah Adler, thank you so much for joining us and for your time, especially, I mean, you're actually calling from tomorrow.
00:55:48
Speaker
So thank you for doing that time zone. Talk about the future. Yeah, talk about doing that time zone tango. So thank you very much. I mean, this was an incredible conversation. We're very excited to kick it off. And also, I'm sure we will continue it as the technology evolves.
00:56:04
Speaker
Awesome. Thanks so much for having me. You guys are great.
00:56:11
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:56:24
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week, but until then, stay real.
00:56:38
Speaker
We're not here to, like, say gotcha. This is a very intellectually open-minded place. And I really, I assume both of us, really do encourage people with different ideas, because I think the audience learns when there's a genuine conversation.
00:56:51
Speaker
I also come from a family of litigators, and I'm a state champion debater. So bring it. Let's do it. I love it. Love it. All right, here we go.