
Why Tori Westerhoff says we should talk to strangers

Hanselminutes with Scott Hanselman

Tori Westerhoff joins Scott to explore the intersection of AI, human psychology, and personal growth. As people increasingly use LLMs for introspection and decision-making, Tori argues that we're missing the diversity of thought that comes from community, particularly from random encounters with strangers. She reveals her own practice: a daily noon reminder to talk to strangers. "If you sycophant yourself, you're never going to grow," she explains. The conversation delves into how LLMs can create echo chambers of thought, and why the randomness of human connection, even just someone on the same bus, helps us update our mental frames and break out of programmed decision-making paradigms.

Check out https://textcontrol.com for industry-leading document editing and PDF processing SDKs for .NET developers

Transcript
00:00:00
Speaker
There's so much rich neuroscience research around how humans interact in communities that it is just kind of an undercurrent of how most people think and behave. It's funny that you mentioned the strangers thing, because I actually have a reminder on my phone to talk to strangers every day at noon.
00:00:21
Speaker
Dude, that's so good. Now, for so long... But my theory on this, and I actually think it parlays really closely to what we're talking about and the introspection tool of an LLM, is that you need diversity of thought.
00:00:37
Speaker
If you sycophant yourself, you're never going to grow. You're never going to find the new solution on the things that you're stuck on. And you also are going to be biased with in-group, out-group, like all of these classic human things, and you're not going to search for the really, really different thinker unless you're being incredibly intentional.
00:00:59
Speaker
Like really specifically not looking for things that seem familiar and comfortable and good. And talking to strangers is exactly that: random people. Hey, friends, you probably knew that TextControl is a powerful library for document editing and PDF generation. But did you also know that they're a strong supporter of the developer community? It's part of their mission to build and support a strong community by being present, by listening to users, and by sharing knowledge at conferences across Europe and the United States. If you're heading to a conference soon,
00:01:31
Speaker
maybe check if TextControl will be there. Stop by and say hi. You'll find their full conference calendar at textcontrol.com. That's T-E-X-T, control.com.
00:01:44
Speaker
Hi, I'm Scott Hanselman. This is another episode of Hanselminutes. Today, I'm chatting with Tori Westerhoff. She's a principal AI security researcher with the Red Team at Microsoft. She's got a Wharton MBA and a background in neuroscience from Yale. How are you?
00:01:58
Speaker
I'm doing great. I'm stoked to chat. So we got into it in a random meeting yesterday, and it was super interesting. And sometimes you have meetings at work, and you're just like, oh, we've got to keep this meeting going, but we've got to hit record.
00:02:09
Speaker
Then you're talking about work stuff. And so I said, I want to have Tori on the podcast. I have not talked to a lot of neuroscientists lately, especially not ones whose job it is to be adversarial towards an AI.
00:02:23
Speaker
And there's almost a pun there, because everyone's adversarial towards AI and how they feel about it and how they're dealing with the moment that we're in right now. And I feel some kind of way. Are you positive or negative or indifferent?
00:02:38
Speaker
I think I'm strategic. I think, because a lot of what Microsoft's AI Red Team does is to look at things from a misuse and also an adversarial space.
00:02:54
Speaker
We don't just test, like, hey, I am an adversary. We also test: I am a well-meaning person who's using this in a way that veers harmful.
00:03:08
Speaker
So I think I'm really intentional about how I let AI impact my personal life. And I try to like really fit the solution to the problem.
00:03:20
Speaker
So it's not that I'm negative or positive, but I think I try to be really smart about the scope creep that AI can have on how you interact with the world, how you think about yourself, how you think about others.
00:03:36
Speaker
The part about how you think about yourself: I grew up in the 70s and 80s, and in our middle school and elementary school psychology classes, when they were telling you how the brain works, they would say things like, you've got these tapes that are running in your brain. Like, the tapes are running in the background. We would imagine cassette tapes spinning in our brains. And they would say, make sure that these tapes have good information, that they're saying good things. It was one of those, like, is your internal monologue negative towards yourself or positive towards yourself?
00:04:06
Speaker
And we kind of grew up in a time that said that introspection was a fundamental part of being a healthy and thoughtful person. And I'm always taking a pause and going, how did I feel about how that meeting went? And how did I feel about that interaction with that person? And what can I do to do better?
00:04:26
Speaker
And it may be generational, but I start to wonder, if those tapes either don't exist or exist differently with the new generation, as they may find them in infinite scrolling.
00:04:37
Speaker
What I have internalized, they have externalized. Right. I do think that there's a lot of self-concept that can start to bleed into how you're interacting with platforms, right? Especially at those super formative ages. And we've seen a lot of studies about how doom scrolling can impact folks' ideas of themselves, right? Because it's all baselines.
00:05:05
Speaker
So if you have a different perception of how the world is, how you perceive yourself in it is going to change. What I think is really interesting about AI is that we're seeing in more and more published studies that people are using AI as the introspective tool.
00:05:26
Speaker
But now there is a system that's part of it. Instead of you having your conversation with yourself or someone in your life, or pretending to have a conversation with a role model, right? Like, what would [insert person] do?
00:05:44
Speaker
You're interacting with an LLM that actually in a lot of cases is primed to have your information as context, right? So it's not necessarily the worst introspection tool at all, but it is very different than us having that mental back and forth and assessment, I think.
00:06:07
Speaker
Because the assessment could potentially come from the outside. Yeah, and biases and context and randomization and all of those different things. Right. That's interesting because I, again, I'm going to use my generation as a foil for the purposes of the conversation. I'm not fixated on it, but I'm using it for this conversation.
00:06:26
Speaker
I have had moments where I would meet a stranger on a bus and have, like, a total random old person give you some thoughtful piece of advice, right? Some magical oracle that you bump into, and you don't know if they exist or not, but they told you some piece of advice. Or an auntie or, you know, a treasured elder in your life. And then I might talk to myself in the mirror or, you know, mumble to myself on a walk.
00:06:50
Speaker
And I use the analogy that when you're talking to an LLM, you are largely talking to yourself in the mirror, but you are also talking to some world computer that has the weights and balances and biases of whatever corporation decided to feed into that.
00:07:04
Speaker
It makes me sad that my kids may not have that moment to talk to a random stranger on the bus and get advice, because they might think the LLM has better advice.
00:07:15
Speaker
But that might be age speaking. I actually don't think it's age speaking. And I think that all of the science says that we're community-based creatures.
00:07:28
Speaker
Okay, that's not a theory. That's like a pretty well understood thing. Exactly. That's how evolution got us there. So touch grass is not an insult? I don't think so at all. And you also see there's so much rich neuroscience research around how humans interact in communities that it is just kind of an undercurrent of how most people think and behave. It's funny that you mentioned the strangers thing. I actually have a reminder on my phone to talk to strangers every day at noon.
00:08:01
Speaker
That's so good. Now, for so long... But my theory on this, and I actually think it parlays really closely to what we're talking about in the introspection tool of an LLM, is that you need diversity of thought.
00:08:17
Speaker
If you sycophant yourself, you're never going to grow. You're never going to find the new solution on the things that you're stuck on. You also are going to be biased with in-group, out-group, like all of these classic human things, and you're not going to search for the really, really different thinker unless you're being incredibly intentional.
00:08:39
Speaker
Like really specifically not looking for things that seem familiar and comfortable and good. And talking to strangers is exactly that. Random people on the bus: the only thing you have in common is, theoretically, the bus.
00:08:54
Speaker
Depending on the bus line, that's not a lot. And that actually disrupts your pattern of thinking. It gives you new information. It helps you update decision making. It helps you update frames. That's kind of how...
00:09:08
Speaker
concepts become new. And when those concepts become new and you're really in a period where they're impressionable, that's actually how your decision-making starts to change. So in the context of that LLM, where, hey, maybe there is a bias towards a particular type of decision-making, or a really particular, perhaps neutral frame on how people should be or how they should think about themselves or others, you're not breaking out of that one programmed, trained decision-making paradigm.
00:09:43
Speaker
And that's actually, I think, a really interesting element of growing up with an LLM in tow, or even just being an adult with an LLM in tow and replacing the randomness of a community with it.
00:09:56
Speaker
Yeah. I have had people say that when I give my kind of unc-level advice, like go out and talk to people, shake their hands, look them in the eye, they'll say things like, oh, my ADHD could never, or my social anxiety could never.
00:10:12
Speaker
And now I'm realizing that on TikTok and on Instagram, and I'm very active on both and I enjoy it very much, I have tuned them to be joyful for myself. So my algorithm is working.
00:10:22
Speaker
I am surprised that there seem to be two camps. There are those who doom scroll and say, oh, my anxiety could never. And then there are those for whom simply going out on the street and talking to strangers in a public square is a kind of content. Whether it's the guy who knows 20 languages and wants to meet random people and surprise them by speaking their language, or it's like, here's how you rizz up the opposite sex, or whatever. There's all these different people who are like, oh my goodness, here's how you can be extroverted as well.
00:10:53
Speaker
And they seem to think that there's two states: there's either rotting or there's being an extrovert. And honestly, when I tell my kids or young people in my life, like, you know, you can just talk to people at Walmart. It's not a problem.
00:11:06
Speaker
They're like, no, no, my social battery. I can only do that once a week. There's all these weird limitations that we seem to have put on ourselves. And we've given them language to say, like, no, no, I'm broken. My brain doesn't work that way, to be social.
00:11:20
Speaker
So does the science support that? Or did we break all these kids?
00:11:27
Speaker
I think the simplest take on it is that it's a muscle. And as an extroverted introvert, I really felt that, where I think if you'd met me pre-COVID, I was talking to strangers all the time. No need for an alarm for it.
00:11:46
Speaker
Right. I think... I love people. The whole kind of premise of my obsession about life is that I just find humans fascinating, right? Anyone I'm going to understand, I'm going to go and grab. And I think that was an interesting experiment for me, because I really was very comfortable being in, like, an introvert-like setting. And I lost the muscle, and I felt it. When I went back into the world, I really did feel it was more effortful. But you did something about that. You noticed it, though. And that feedback loop itself is significant to call out.
00:12:27
Speaker
And I think I also had, going back to, it's important to have experiences and almost, like, Bayesian processing of different social interactions.
00:12:39
Speaker
Like, I had a roadmap. I had done it before. I could go back to that. My brain knew it. It wasn't as good, and I had different feelings about it. But I had trained methods.
00:12:54
Speaker
If you don't train the methods, then you're really starting from your comfort level or your habit. Yeah. So I think there's something to that in some instances, like practicing in a non-social setting versus a social setting, which is such a rich neurological stimuli setting.
00:13:13
Speaker
You're getting so many more cues interacting with a person than you are with an LLM. You're encoding so much more and really helping yourself learn in a different way than if you were to practice that, or you were to say, hey, LLM, how am I going to make friends? Like, how am I going to go talk to that guy in the square?
00:13:33
Speaker
And I think that's a part of learning that will be difficult to really intentionally attend to. While you were saying that, something popped into my head, which is, you know, one of my superpowers is bad analogies. There's a machine at the gym called the Smith machine, and it's basically squats on rails. And the idea is that it keeps the squat, you know, everything's on rails, and you're moving the weight in a single plane.
00:14:03
Speaker
And people will say, well, it's really great because you can target your quads. But then other people are like, well, no, it doesn't have all of the heaviness and the awkwardness of the weight and all the other little tiny muscles. So when you were saying you're talking to a human and all these additional pieces of input are coming in, it's like, you're absolutely right.
00:14:18
Speaker
Micro expressions and all the six senses, or seven or eight, depending on how many senses you decide we have. But like, all the senses, while simply yapping at an LLM is, you know, barely one.
00:14:33
Speaker
It just clicked for me. And it's just like, wow, do I want to hold the weight and all the little tiny muscles that keep it under control, or do I want to kind of artificially move the weight on rails? And you're going to build a differently shaped persona, or body, if you think about it like that.
00:14:51
Speaker
I love that analogy. Big analogy fan. I'm going to ride with it. The stability muscles that you get from the off-the-rails training actually get you different things in different scenarios.
00:15:09
Speaker
Yeah, yeah. Free weights. That's also pretty key in how humans make connections across things. You need to have cues, right? You need to have a signal in your brain to eventually move into an action or decision.
00:15:27
Speaker
Now, if the cue is just text, or abstractly understanding a scenario, that could get you somewhere. But say it's a micro expression and you're picking it up not just in talking with the guy in the square, but with your parents, in an interview.
00:15:43
Speaker
In all of these different scenarios that you didn't apply the maybe thought exercise of working with the LLM on, that's actually kind of how you get dynamic skills.
00:15:59
Speaker
So I think that's kind of an important element to take note of. Because you also need to sample all of the diversity that you're going to interact with. You're not just going to be interacting with different models. You're going to be interacting with different people. So that's a great analogy, in my opinion.
00:16:16
Speaker
I worry sometimes that social media has caused labels, unnecessary labels, to spread faster than they necessarily need to. Because Gen Z, Gen Alpha, whatever the current gen is, they love a label, because everyone wants to get a diagnosis. And if you have a diagnosis, then you understand it. Then you're like, okay, I can do the thing.
00:16:39
Speaker
That's how I used to think about it. You get a diagnosis: now I know what's wrong with me. But now we stop at the "and now I know what's wrong with me." We don't actually fix it. The reason that I'm saying that is that when someone says, well, I have this flavor of that thing, and that's what prevents me from...
00:16:54
Speaker
talking to people and not being awkward. And they're thinking it might be biological, or serotonin, or something in their brain. And it could be just that we broke them during COVID, or we broke them during their formative years, or the social media broke them. It makes me wonder if anyone's done any research about who has clinically got a problem that they need to work on versus who has simply adopted a label and made it a thing.
00:17:17
Speaker
It's kind of like when you think you're lactose intolerant and you discover that your mom just didn't like milk and you were totally fine. You could have drunk milk your whole life. I don't know, like, whatever the lactose equivalent of Munchausen by proxy is.
00:17:29
Speaker
Right. So I think there is a lot of robust research to figure out what is going on neurologically when folks have this experience. Yeah.
00:17:41
Speaker
Generally, my take on this is that brains are machines too. So if someone's feeling that way, it's likely the same exact undercurrent of a neurological signal or system.
00:17:54
Speaker
And so I think that there's truth in that. What I think is a good thing, and we actually think about this a ton in red teaming, which is kind of like a hard fork: a really good way I contextualize the label and what that means is that every single one of those labels is a massive spectrum.
00:18:15
Speaker
And then it's overlaid on all of the context that someone's individual scenario is bringing. And that's actually what is making your brain fire. Not this label that has like a really specific suite of Googleable top five behaviors, for example.
00:18:33
Speaker
And it reminded me of red teaming, because we're really pushing how we red team really complex sociotechnical scenarios. And this is something we think about at work a lot: how are we layering context so we're not actually just defining people, or folks who could be interacting with AI, with, like, the top three Google search results of something?
00:18:57
Speaker
Because LLMs, humans, whatever scenario you're working with, everyone's going to be a really, really complex recipe of those ingredients.
00:19:09
Speaker
And it's going to taste different. It's going to feel different. So, yeah, I think it's more complex, and probably quite truthful and honest in all of those experiences.
00:19:24
Speaker
But there is great research about generally how it's happening to folks, and, like, systematically, what does it look like in the brain when we get these types of inputs. Yeah, because, I mean, you can tell that I'm trying hard to speak correctly and explicitly while offering the option that I'm wrong.
00:19:42
Speaker
And I want to be respectful of people whose lived experience is that XYZ is a thing that they have. And it sucks because, you know, I'm a type 1 diabetic, and I always hate it when people are like, oh, can you eat that? Or, are you sure? Oh, my friend cured that with cinnamon. Like, there's always, oh, I have a thing. Oh, well, that's not really a thing. No one wants to feel like that. But at the same time, if I tell someone, you should go out and touch grass, and you should talk to people, and you should put a reminder on your phone that says talk to strangers, they might say, well, I can't do that. You know, I can't talk to strangers. I'm prevented from doing that by some -ism that I have. But I still think it remains good advice if you believe in the root issue, which is that we are community-oriented social beings whose optimal state
00:20:25
Speaker
is out in the market or in the town square, and not in our beds with the pillow over our heads. And I think it actually goes back to that cassette idea, right? The spirit of the cassette playing in your mind is that whatever gets put into your brain machine is what you're working with. That's how you're training it.
00:20:47
Speaker
Those are all the patterns that will result in decisions and actions and stuff that makes up your life. And there is a universe where you choose to say, okay, the cassette tape is long-form LLM chatting.
00:21:03
Speaker
But if you zoom out from that, you would never want to listen to the same CD for your entire life. Or cassette.
00:21:14
Speaker
Or MP3. Or whatever version of music listening we are on now. If you think about the research on dementia and getting people to play games and do the Wordle and try new things... Time...
00:21:28
Speaker
Well, the days are long, but the years are short. And you'll find when you get older that days fly by, and you think, why are these days flying by? It's because they're the same day.
00:21:41
Speaker
And the brain JPEGs them. It compresses them all into kind of just a mediocre blue sky, because nothing really happened this week. So forcing yourself to do something and, like, un-JPEG your life is an important part of being present. So, like, my parents are in their 80s, and I'm always trying to get them out of their, you know... maximize the time so that their brains are still operating, as opposed to listening to Creedence Clearwater Revival for the same...
00:22:12
Speaker
Yeah. My dad, I don't think the CDs left the car. I actually think I have that on vinyl. So, if you're going to get stuck on one CD, I mean, Creedence is definitely the move. But I'm still... the point is, try new things. Yeah, exactly. And I'm thinking that when it relates to AI, there are some ways, going back to one of your first questions, like I said, I use it strategically.
00:22:40
Speaker
I think there is a skill to using technology as a tool, and that is probably going to need to be the solution for the Gen Alpha native AI user.
00:22:55
Speaker
And that's for getting things done. But it's also for individual relationships with AI and how it bleeds into your life. So there's a universe, if someone is in that spot where they're like, I do not want to go to the square.
00:23:10
Speaker
Using LLMs as the tool to get you to the lived experience you want, that actually is a proactive use of an LLM that is really now starting to be studied.
00:23:28
Speaker
But it's intentional, though. It doesn't happen accidentally. Like, I've got this little robot next to me, and my mom was here earlier, and she thinks the robot's adorable, and it has expressions and stuff. And there was a great article in The New York Times a couple of weeks ago about a shut-in older person who got one of the robots. And, you know, they're huge in Japan, the companion robots. And really, it's just somebody to talk to, because people are lonely.
00:23:49
Speaker
And, like, the loneliness epidemic is a thing. But I feel like those are all allowable, at least by me, if I'm the arbiter of this thing, because they're intentional. What I worry about is unintended use. And I was on some call, some third-party call, a couple of days ago, and someone was saying that they were looking forward to LLMs being like the movie Her.
00:24:12
Speaker
And I, like, winced, because I have no poker face on a Teams call. And I was just like, that's not what we want at all. And I was wondering to myself, is this person intentionally looking for an AI girlfriend, or was that an accident and they fell into that? Because I would think that would be not an optimal scenario.
00:24:31
Speaker
Yeah. The psychosocial impacts of AI are so interesting. And our team actually really does look into what the impacts of that are in our work.
00:24:45
Speaker
The interesting thing on, hey, I need this or want this relationship... Mm-hmm.
00:24:54
Speaker
is that, going back to that, everyone's their own unique recipe. I do feel like that sometimes can help people. But I think the awareness of how LLMs interact with and replace, or use, the same exact neurological systems is maybe the part that I always ground myself in.
00:25:19
Speaker
So if you're getting to the place where it's activating that same idea of, like, oh, this is my LLM best friend and this is also my best friend, I think having a really hard look at it, being able to peel yourself away and say, is that actually how I want my brain to think of this, is how I manage it, right? And building on the tools to do that, I do think, is going to be the frontier of how AI starts getting integrated in relationship building, coaching, education, folks whose brains are forming, so that the awareness is not just coming from, hey, I studied a lot of neuroscience, so I think about this constantly, and instead comes from: this is how you use the tool. This is how you should use it to get you to where you want to go. Yeah.
00:26:11
Speaker
At the root of all of that, though, is intention. And, like, I didn't let my kids have a phone until they were 13, you know, and maybe that should have been longer. That seems to be one of these generally settled-upon things: 13-ish is a good time to give a kid a phone. And I think we've all, you know, given a two-year-old an iPad to shut them up.
00:26:30
Speaker
And then we've also all been at Chili's and seen a two-year-old on an iPad and said, oh, look at those horrible people, they gave that kid an iPad to shut him up. You know, the Overton window of what is reasonable seems to be shifting. And I'm trying not to be judgmental, but I want to give people the tools, like you said, so that they can be intentional
00:26:53
Speaker
and make the choices. But that requires cognition. And it feels like there's almost a new generation of the subcognitive, where people are simply reacting rather than being the teller of their own story.
00:27:11
Speaker
And there's, you know, certainly capitalism and all the different things that cause the hamster wheel to have to spin that make that happen. But maybe it's that Maslow's hierarchy of needs doesn't let people have the space to talk about these things like we are. We are blessed to have the half an hour that we have to be able to do that. But if I was working two jobs, I might not have this luxury.
00:27:32
Speaker
I think that's so real. Yeah, I think about this a lot in how LLMs are used for decision making. Again, that's kind of where my research was. And the instance that I see most often is that LLMs are just really, really prompted to give options to select.
00:27:56
Speaker
Oh, yeah. Multiple choice is always an easier test. Exactly. And when you're busy and stressed and managing a ton of things, that's actually the most human instinct. Like, give me something to pick from.
00:28:09
Speaker
Give me two to pick from. What I think about a lot is that there's a ton of research around how we get decision fatigue throughout days. We get decision fatigue when we have a ton of choices. And humans get bad at that.
00:28:25
Speaker
Say, for example, you have a ton of options. And thinking about how LLMs just route very closely to, hey, select instead of assess: you end up missing, back to our conversation about talking in the square versus talking to an LLM, a ton of information that would get you to your decision quicker, with more surety, if you thought through the assessment to get the options versus selecting blindly from options. Yeah.
00:28:53
Speaker
Because it helps you figure out how your values interact with that decision. And that's actually a pretty well-studied decision framework. So maybe that's just one example of how I see these tools flooding our systems and interacting with us, changing the muscles that we're working.
00:29:17
Speaker
Yeah. No, I'm feeling better about my Smith machine analogy, though, because the stabilizer muscles... like, you know, when you have no more reps in you, you can switch to a Smith machine and you can probably get a couple more reps out.
00:29:30
Speaker
And it also makes me think about my buddy Alex Falcone, who's a comedian, who has this great bit, 'cause I've been married 25 years, this is year 26, where he talks about how, being married that long, your entire relationship boils down to just: what are we eating tonight? Yeah.
00:29:45
Speaker
And like, if you're in a relationship for long enough, ultimately, you know, we're going to be married 50 years and we'll be like, oh, what do we want? I don't know. What do you want to eat? I don't know. And I'll say to my wife, do you want to go to Noodles, Noodles and Company? And she's like, no, I don't want this.
00:29:59
Speaker
The one thing that she gets at Noodles. And I'm like, well, there's like 50 things at Noodles. There's like 10,000 different options. No, no, no. One restaurant, one choice. And I'm realizing now that that's decision fatigue. It's just like, I don't know what I want to freaking eat. And I'll say, what about Indian? Nah, not Indian.
00:30:15
Speaker
Well, then what do you want to eat? You have to have the spoons to make these decisions. And when you don't have the spoons, then, yeah, just ask the LLM, spin the wheel. And I think that's a great example of how LLMs can help.
00:30:35
Speaker
Because you may only think of Indian and noodles. LLMs are trained to think of more. So that's a positive interjection, I think, because it does get you out of potentially a bias of the initial set, maybe.
00:30:52
Speaker
But it still doesn't get you the answer that you want, because you haven't said, okay, do I need something warm? Do I need comfort? Do I need vegetables? Yeah, there's no context. And the assessment part tends to be where we get, a, the most insights about ourselves, but, b, a better outcome.
00:31:11
Speaker
And I do think that's something to really pay attention to. And I think about it a lot in red teaming and how I translate it. Because it's a core difference from what I think my generation was, which was the big data generation, obsessed with assessment and analysis.
00:31:30
Speaker
And now that big data assessment lives in an LLM. Yeah. And you just get the exact output. A really, really different decision-making paradigm.
00:31:43
Speaker
Yeah. There's so many directions. We're basically out of time, but there's so many directions I'd take this conversation. Because when you talked about the assessment generation: my cardiologist, because I'm at the age where I have to get my heart checked on a regular basis, says that he has an absolute hate relationship with the Apple Watch.
00:31:59
Speaker
And I said, why? And he says, because it's a one-lead EKG or ECG. It's one lead. It doesn't have like seven leads. It's on one part of your body, and it'll tell you something, and it could be wrong. And yeah, you hear the story, it saved a guy's life. He had AFib and he went in and it saved his life. But he's like, for every one guy like that, there are a thousand that come in and they're like, my resting heart rate is too high, my Oura ring said this and this and that and the other thing. And it's just like, just listen to your body.
00:32:29
Speaker
So I worry about the externalizing of signals that has us looking for validation from computers and machines and measurement systems that are imperfect. When, to your original point when we started this thing, it's like, well, I thought to myself, I don't talk to enough people. So I've made a Post-it note that says, talk to more people.
00:32:47
Speaker
That's the input. That's the tool. The feedback loop that I want is the one where I think about myself, think about what's going on with me, listen to the signals I have available, and make a decision. I don't know if I always need to go and validate that with Apple Health or ChatGPT.
00:33:02
Speaker
Yeah, I think that's really it. And it was refined over years and years of having to do it without the tools.
00:33:13
Speaker
And there it is. There's the rub right there. So then here I am telling all the kids, well, you really need to drive stick shift. And they're like, but Ubers are so fun. I love to Uber.
00:33:24
Speaker
Right? And that's the analogy that people who listen to this podcast are sick of hearing from me. It's like, old man shakes fist at cloud: do it the old-fashioned way. Because if you don't, then everyone's going to forget. And then it's the end of WALL-E and we're all just running around getting tacos delivered directly into our mouths.
00:33:42
Speaker
And then somebody, you know, comes and moves my jaw for me because chewing is too hard.
00:33:48
Speaker
Sigh.
00:33:51
Speaker
Yeah. I think it's an interesting place or inflection point to think through, because I also fundamentally believe that tech and tools like this do help humans get to places that they want to go. Like, humans are so good at tool use.
00:34:14
Speaker
That's another thing. Like, we are community animals and we love a tool. But the trade-offs, I think, aren't necessarily highlighted in the way that they used to be for tools.
00:34:31
Speaker
So I don't think that the feedback loop on what you're missing or how it's changing is as evident as we'd want it to be.
00:34:42
Speaker
And I also wonder, the technology moves so fast, and the ability to do these things is honestly quite new. Right? I don't think that the uses that we're talking about were relevant two years ago.
00:34:56
Speaker
Oh, they weren't relevant like last summer. Exactly. Yeah. So tracking how folks are going to interact with it, and what they replace, is going to be a big part of the science.
00:35:10
Speaker
Understanding how the evolution of AI starts to almost co-create the reality of folks who really invest in it, that's the human-computer interaction bit that I think is getting really quite interesting now that capabilities are so advanced.
00:35:31
Speaker
And it becomes a sociotechnical problem, rather than just a society problem or just a technical problem. Because at this point, the capabilities are really meaningful, and they're very good, right? We'd be having a different conversation if LLM chatbots gave horrible advice that didn't make any sense, because people just wouldn't use it. It wouldn't be a good tool, right? So that's an element that is going to be really interesting to watch.
00:35:59
Speaker
Yeah. And the thing is that, with the normal curve, there is going to be bad advice. There has been bad advice, bad advice with bad outcomes. Have we decided that this power tool is safe and should be unleashed? It's too late. It's happened.
00:36:13
Speaker
But I'm glad that folks like you are studying it. And I appreciate you taking the time to chat with me today. It was a joy. Thanks for chatting with me. We've been chatting with Tori Westerhoff. She's a principal AI security researcher and a member of the AI Red Team at Microsoft.
00:36:28
Speaker
This has been another episode of Hanselminutes, and we'll see you again next week.