
Beyond ChatGPT: AI companions and the human side of AI - Jaime Banks, Associate Professor

Infoversity: Exploring the intersection of information, technology and society

We often think of AI as the text-based ChatGPT tool that has become so popular since its launch. But there are many other ways we are using AI, including to develop personas that play the role of companions. Humans are forming romantic or very personal relationships with AI, and find these relationships in some cases more rewarding than their relationships with other humans. Jaime Banks, Associate Professor at the iSchool, joins professor Jenny Stromer-Galley to discuss the future of AI from this human perspective. 

Transcript
00:00:08
Speaker
All right, here we go.

Introduction to Jaime Banks and Her Focus

00:00:10
Speaker
So Jaime Banks is an associate professor and the Katchmer Wilhelm Professor at Syracuse University School of Information Studies. Her work centers on human interactions with technology, especially with AI and robots. She's currently investigating the bonds between humans and AI companions, as well as between humans and robots, hoping to better understand the risks and benefits of our growing interactions with these new technologies.
00:00:34
Speaker
We're excited to have Jaime with us today to discuss these fascinating topics. Welcome, Jaime. Thank you so much, Jenny. So I have had the good fortune of knowing you since you were a doctoral student. We won't count how many years that's been.

Interest in Human-Digital Relationships

00:00:50
Speaker
And I know that you have long been interested in the relationship between humans and digital representations of themselves, extending now to artificial intelligence and robots.
00:01:04
Speaker
So while I have some sense of your interest in these relationships, I am sure that the audience listening to this podcast would like to know more. So how did you first become interested in studying the relationships between humans and emerging technologies?

From Social Media to Virtual Worlds

00:01:20
Speaker
Okay, so when I first started out in scholarship in general,
00:01:25
Speaker
I was actually studying humans and their behavior on social media, and that was boring. That was boring to me, or at least not as exciting as how media, interactive media, and other types of tech can represent us, and how we make sense of these things when, historically, in the longest sense, we've just interacted with each other or perhaps with animals.
00:01:54
Speaker
But back in those days, those grad school days, I had an interesting event, and all of my research directions come out of weird things that happened to me.

A Transformative World of Warcraft Experience

00:02:07
Speaker
And, you know, it was when we were working on a project in World of Warcraft, trying to understand human behavior in Warcraft. And I had to learn World of Warcraft because I had no idea what it was about, because games are for nerds, right? Well.
00:02:23
Speaker
Here I am. And so I was playing WoW, and I had a character that was just this thing that I was playing with for a long period of time. And then I had an unusual event where all of a sudden, when this thing happened, my character kind of transformed in my eyes. It was no longer just an extension of myself,
00:02:44
Speaker
but it became a persona in her own right. And this confused me. When I was reading through the scholarly literature in game studies, I did not find that exactly represented in the work, because the prior work says that the closest a player can get to their avatar is identification. But this was the exact opposite. It became something separate from me. And so that kind of launched me into this trajectory of understanding how gamers see themselves as the same as, or different from, this thing that they were controlling but also not controlling. And, you know, other series of interesting events led me to engage different types of technologies and kind of thread my way through various sparkly questions. So that early spark was on the
00:03:41
Speaker
relationships, if you will, to some degree, that we have between ourselves and these digital representations of ourselves in games environments. Now the communication environment has gotten a lot more complex with, well, sure, I'll say that. I don't quite believe it, but I'll just say that it seems to have gotten more complex with new forms of artificial intelligence like ChatGPT, which is a text-based platform to help with writing or other kinds of natural language generation tasks. So what types of representations around artificial intelligence are you

AI Representations: Transactional vs. Social

00:04:21
Speaker
investigating? And how are they different from some of your earlier work? In a lot of ways, the spectrum is the same, right? With game avatars, on one end you have the asocial connections. That is, it's just utilitarian. It's functional. It's, as we might call it, transactional.
00:04:38
Speaker
There's an exchange where my avatar is a tool and I use it to achieve my own ends. People often use ChatGPT and other forms of AI in that way. But then we have the other end of the spectrum, where their avatars have rich stories and their own motivations and relationships in the game. And that's what we would call a highly social kind of connection with an avatar. And lots of people have lots of different forms of AI that they connect with in highly social ways. If you've ever asked ChatGPT for advice, that's a little different than asking it to tell you how to create a piece of code, right? Or a recipe.
00:05:23
Speaker
Exactly. Right. And so it can become less transactional and much more in the meaning-making sense, as we're trying to make sense of our lives, and it becomes a sounding board or even a friend. Right. To the extent that we can say that it has some memory to it, and the more advanced permutations do have a little bit of memory,
00:05:51
Speaker
then they start to remember things about us, and that starts to look a little bit more social. So do you want to elaborate a little bit more on the relationships? Because I know you're doing some work on questions about humans and relationships with AI.

The Role of AI Companions

00:06:06
Speaker
Right. So there are forms of AI, if our listeners are not familiar, that we generally refer to as AI companions. These are large language model based technologies, often delivered in the form of just a little phone app.
00:06:22
Speaker
The primary mode of interaction is text, but often they have customizable visuals. They have personalities, where you can use little sliders to adjust them so that you can engage these AI as personas. Anywhere from just sort of having a therapist if you need one, and there are formal therapy apps now, right?
00:06:50
Speaker
But there's also this mingling together of friends and therapists in terms of how they're marketed to us. And then people are also taking them up as friends and lovers, engaging in these sexual interactions in ways that to them feel quite real, in the sense that they are having a genuine romantic relationship with them.
00:07:13
Speaker
And so, a lot of what I'm interested in: there's a little bit of emerging work around how these relationships are formed. And the gist is that they often mirror what we know about how human-human relationships form. They just go a little bit faster. I'm actually interested in how they fall apart,
00:07:32
Speaker
and what are some of the challenges that we have? Where do they become unbelievable? And in what ways? I have a particular interest in how we might be able to think that they die. What happens when we lose these companions? By looking at what happens when we lose them, we can tell a little bit about what they mean to

Emotional Impact of Losing AI Companions

00:07:52
Speaker
people. So when you say "lose them," this isn't the human making a decision to end the relationship with the AI.
00:07:59
Speaker
It's a third-party vendor, basically, who is offering up the AI and saying, you know what, we're taking our application and we're closing it. That was the most recent study that I had done, with an app called Soulmate, which, through various amounts of speculation and drama, the company decided to close down. It had about 100,000 users at the time that it went down, and all of those people effectively lost their partners, significant others, lovers, friends, confidants, however they were actually engaging them.
00:08:34
Speaker
And so you were talking with people who were using the Soulmate app about their experiences and their loss? Exactly. So I knew this was coming with about three days' notice, because I often will watch the forums, you know, subreddits, to understand how people are using different technologies and what's new and exciting in the space. And people, as users, had started to talk about this. And with my science brain always turned on, I saw this as an opportunity to use a different lens for understanding what these tech are for these people. And I ended up doing sort of an open-ended survey, let's call it an asynchronous interview of sorts,
00:09:21
Speaker
with about 60 users who were willing to share their stories with me. And they talked about everything from their favorite memory, to their motivations for creating their companion in the first place, all the way to how they made decisions about how they would separate from it, how they were coping with the loss, and how they wanted to manage it themselves. To what extent, with the people that you talked with in the open-ended survey, did you get the sense that for some of them there was really a genuine emotional engagement?

Range of Emotional Engagement with AI

00:10:01
Speaker
Or do you see that people generally still have some distance, kind of going back to what you mentioned at the very beginning, when you had this realization that this avatar you created had a persona separate from yourself? Are people engaging with this AI and just seeing it as AI? Or do they see it as a persona, something that has some agency, that they're engaging with almost as if it were human?
00:10:25
Speaker
This is a complex question and answer. And the answer to all of those things is yes, because there's so much variation among the people involved. I think it's easy for us to think about this, at least right now, as kind of weird, because it's not a mainstream phenomenon yet. But these seem to be all sorts of people who were engaging in these relationships.
00:10:52
Speaker
They ranged from having a limited reaction, like, it's gone, I'll go do something else now, all the way up to being deeply distraught.
00:11:08
Speaker
And in fact, they used terms like murder, oh wow, even genocide, because an entire kind had been wiped out. Many of them were not quite sure what to do, and so relied on each other within the forums to try to work through things.
00:11:29
Speaker
Some were still quite hopeful that it might not shut down, because remember, I talked with them in the days leading up to, and

Complexities of AI Companionship

00:11:38
Speaker
sometimes just after. And there was a lot of sense-making going on. Definitely quite a lot of emotion among most people. I'm going to say, among the 60 I talked to, let's say 55 were highly emotional. Wow. Right.
00:11:54
Speaker
So yeah, there really are connections happening here between humans and these... you know, when we say AI, it flattens, right? It's a terminology that doesn't lend itself to easy understanding of how people build these bonds, because artificial intelligence just sounds like it's some kind of, I don't know, computer other, right? Sure. Yeah. So that's why I use the word companions. Right. I think a lot of them use the word companions, because that is the function of these. And generally we could say they belong to a class called social AI, which you could say ChatGPT is a member of, because it gives and receives social cues. But this really takes the social to the next level. And part of that is that there is
00:12:46
Speaker
some memory that gets retained over time, right? It's sort of like when you ask Alexa to do something and it gets it wrong, and then you have to start over again because it doesn't remember that you're part of an exchange, right? And so that frustration of, like, okay, I'm really only talking to a computer. This thing about memory is pretty important here. And in particular, there is a companion called Paradot,
00:13:15
Speaker
P-A-R-A-D-O-T, that actually signals to you when it's making a memory. Oh, that's cool. And it's interesting, because I never actually thought about it as a memory-retaining entity when I was playing around with it, until it did that. And then it actually broke it for me.
00:13:35
Speaker
Because in my head, I don't mind the technologies myself. And this is me taking my scientist hat off now, right? In my own interactions with it, I don't mind the tech. It's interesting and can do some interesting things. I don't necessarily trust the people behind the technology. So that triggered in my brain that, oh, this might be a data scraper. Oh, interesting. There are some important and not all that clear questions and debates around what does privacy look like in this context, right? Because it's not simply like social media, where you might be purposefully putting out information into the world and thinking carefully about whether or not you want people in general to know about it. You are telling sometimes quite intimate things to these AI. Yeah. And what happens with that information after the fact?
00:14:30
Speaker
Yeah, no, that's 100%. It's a huge question. One of the things I wanted to go back to is, you know, you and I are both communication scholars; that's our discipline. One of the things I love about being in the iSchool is that you get people from different disciplines coming together, so normally we're hanging around with people with very different perspectives on technology, but you and I share the communication focus. And, you know, one of the things, in addition to the memory and that sense that you're having an ongoing exchange, which is similar to what happens when you're talking with humans, because humans have memory too, and they build that repertoire of understanding,
00:15:06
Speaker
is the linguistic, the stylistic changes that we've seen in the ways that, again, ChatGPT responds. It talks as if it were a human in the ways that it responds. It apologizes. It hedges. You know, these are both things that humans do that we don't typically see in the programmed scripts in technology. Which strikes me as another dimension that enables the companionship feeling to happen for people: it doesn't feel like it's communicating like a computer, it is communicating like a human. And so you layer on that, plus the history and the social needs that humans have,
00:15:54
Speaker
and you get this kind of magic elixir of companionship. Right. And this is where I think these companions really separate out from things like ChatGPT or Gemini or Claude: those language models, in their performance of language, are still kind of formulaic, right? That's why, students, we can often tell when you use ChatGPT, even without any kind of detector; you say "delve" one more time, right? There is a style that they generate content with, whereas AI companions, especially with all of these little tweakable sliders for personalities,
00:16:36
Speaker
shape this into something different, where they often will talk in unexpected ways. And that's part of why I think this is so engaging for lots of people: there can be the same level of surprise in talking with an AI companion as there might be with a human companion. Right. Because it's not scripted. Right. Yeah. So are there other findings that you have been especially surprised by in your research so far?
00:17:05
Speaker
There are lots of just really interesting things, thoughtful reflections from users on what this is and what this means. I kind of mentioned before that there's a whole range of types of people who shared their stories with me. There was a good chunk of the respondents who, and this was not a question I asked, identified themselves as autistic.
00:17:35
Speaker
And so they expressed having difficulty sustaining human-human relationships, but they find great fulfillment and gratification in the relationships with AI companions. And that's where I think we have to be, collectively and scientifically, a bit careful, right? You may think this is weird. And there's lots of discourse around AI and replacement, right? But this is not necessarily replacing a human relationship. These are folks who don't like, or don't do well, connecting with other humans. A number of them were isolated in their daily lives, perhaps living out in the middle of nowhere, or having a work-from-home job, or being otherwise out of the mainstream.
00:18:28
Speaker
And so in their daily lives, it was often their position that humans just don't work for them. Practically, preferentially, or because of who they are,
00:18:42
Speaker
or who other people are, right? And this works for them. And so perhaps we should be thoughtful about how we react if this were to become mainstream, right? Right. Well, it sort of echoes, there was a headline, oh, I don't know, I think this weekend, a new study finding that people who have less education are more lonely. They're isolated for a variety of different reasons. And there's been this research about a loneliness epidemic, especially among men. So I guess it's an open question to what extent people will turn towards these companions to help alleviate that loneliness, and if it's effective.
00:19:33
Speaker
And, you know, I'm waiting for a federal funder to give me a decision on whether or not I can actually get some funds to study this, but I want to look at the role that mind perception plays in all of this, to try to understand some of the psychological underpinnings, the social psychological underpinnings. You know, we know that people see mind in machines. That's some of my work in the robotics space. And we know that people get
00:20:02
Speaker
lots of psychological benefits from companion machines. But it's a little uncertain, the extent to which seeing a machine as a someone actually results in those benefits. Can we call this companionship? And I would suggest the first criterion is there being someone there, right? If there is not a someone, and instead it's a something, then perhaps it's not companionship. Perhaps it's something else, like boredom relief,
00:20:30
Speaker
the opportunity for self-expression, diversion from problems. And if those are the case, and not mind perception and companionship, then perhaps there are other types of interventions that carry less risk and baggage, thinking back to that privacy stuff, but can provide the same benefits. For instance, we know that playing video games does a lot for managing your mood and offering opportunities for social connection with low risk, right, or low pressure. Yeah. So there may be some important dynamics underpinning some of these connections that we need to understand a little bit better. Yeah. And I would say that, going back to the idea of loss, it tells us something about the having of it. Right. So one
00:21:21
Speaker
One takeaway from that study on these very real emotions that people are feeling, and my thought about us perhaps being careful about our judgments externally, is something that one of my very helpful reviewers on that last journal paper brought up. And that is, you know, if these are real experiences, real relationships by people, and they are lost, we are often really thoughtful as humans when other humans lose other humans, right? Death, breakup, things like this, right? And we offer them support. We bring them a funeral casserole, ask do you need anything, and condolences, calls, and things like this. And perhaps professional support for loss. And if we don't recognize these as legitimate loss experiences, then people may be going through deep grief with no support. Right. And in fact, instead, maybe even potentially being socially isolated or sanctioned, because the relationship they're having with this AI entity is weird. Right. And some of my participants did discuss this, that they had experienced multiple forms of loss, this was among them, and they were compounding each other, and they were getting made fun of for it. Oh, right. So you can imagine somebody who's dealing with an interpersonal loss of some human contact or connection. They then turn to this AI companion for some consolation and support, and then lose that too. And now you're dealing with multiple forms of loss, but only some is legitimized in society currently, and not others. Exactly. So this is complex. And I think there are lots of dimensions that, as
00:23:16
Speaker
consumers of technology, as friends of consumers of technology, as scientists, as developers, we might think about, to try to overcome some of our mental shortcuts around what these things are and what they need to be. Right. Going back to those mental models that people have about these technologies, and kind of where the, I don't know, the bridges, if you will,
00:23:44
Speaker
where we start to anthropomorphize these digital technologies as also being, functionally for us, human. Sure, yeah. So, thinking of other things that are functionally, potentially human, you have a robot in your lab that is literally next door. So does that have a name? That robot is named Ray. Tell me about Ray. Ray is a RoboThespian. That's her model name.
00:24:14
Speaker
Wait, what? RoboThespian. RoboThespian, that's her model name? Yeah, that is. So it comes from a company called Engineered Arts in the UK. And RoboThespian was originally designed to be a stage robot, kind of like the animatronics at Disney or things like this, right? So that's where the thespian comes in. I got it. And Ray came to me through a grant that I had at one point with the US Air Force Office of Scientific Research, AFOSR, on some work that I was doing on mind perception in robots and what that means for trust and persuasion. And so she was my main robot for all of that work. And she was the precursor to Ameca. So Ameca, you may have seen in the news, is that kind of tall humanoid that has this very soft gray face. It can make very, very nuanced facial expressions.
00:25:14
Speaker
Right, right, right. So what are you doing with Ray? So Ray is on vacation right now, while we're doing a little bit of work with a robot, or part of a robot, that one of my undergraduates is building.
00:25:29
Speaker
But perhaps in the spring, one thing I'm quite interested in, bringing everything back together, is the experience of playing games with robots. My partner and colleague and I have done some work in the past on that, finding that people experience some of the same social gratifications of co-playing with a human when they co-play with a robot, but not the same gratifications when they co-play with a game AI.
00:26:00
Speaker
So that's interesting. So it has to be physically embodied, I don't know what the word is, it has to be physically embodied for it to have some satisfaction. Right. And so we have an idea to replicate that study, but with a cute little robot versus this human-sized robot, because Ray is basically taller than me. Yeah. So she's almost six feet.
00:26:26
Speaker
That's about right. Yeah. She's an imposing character in the corner of your space. I don't think she's so imposing, but I get that other people would. So going back then to the research you've been doing on robots, do humans have particular biases with robots,
00:26:44
Speaker
with AI? What sort of biases do you see so far in the research you've done? So there's a good deal. But first, I want to mention how I think about biases. Biases are the result of us taking mental shortcuts. So they are simply the things that we do all the time: we make non-optimal judgments. We tend to use bias as kind of a bad term, and certainly it could be problematic to take certain kinds of mental shortcuts, like relying on stereotypes.
00:27:18
Speaker
But if we're thinking about biases as this more general category of quick decision-making, if you will, the one that is most studied in the field, and that I'm quite interested in, is called the machine heuristic. The heuristic is the mental shortcut, and the machine heuristic's logic is basically: if machine, then systematic, unbiased, logical,
00:27:46
Speaker
unemotional, probably correct, right? So we have all of these ideas in our mind about what machines are and do. Anything from your microwave: it follows instructions, right? It counts down, usually,
00:28:00
Speaker
to calculators, to automated reporters, news bots, things like this. And so if it gives off machine-like cues, then it must have those properties. And then the bias comes in what happens after that: if, then, therefore, machines are probably more trustworthy, right? I can disclose more to the machine. I can trust the news that it produces more than if the producer was a human. So that's quite an interesting one, I think, with lots of implications, including for the work that you do. Yes. Right? On people's political opinions. And in fact, one study I did
00:28:46
Speaker
with a colleague and a former graduate student, we found that when you get told that news is produced by an AI journalist, you see it as less threatening to your own political position.
00:29:00
Speaker
Oh, right. And so this could be a really great thing. It could mean that if AI were actually producing our news, and we had developed some system where that was trustworthy and appropriate, that we could all, regardless of where we sit on the political spectrum, have at least the same basic facts and understanding of situations. We might still interpret them differently, but we could have some approximate understanding of the basics.
00:29:29
Speaker
But it can also be weaponized, right? All someone has to do is tell you a piece of news, if you will, and that it was generated by AI, and you would perhaps be more inclined to trust it, because you're taking these mental shortcuts. Right, right, right. I do wonder if some of those machine heuristics are contextual, and whether or not, over time, as we get more experience with these, again, like ChatGPT, which isn't always producing useful, helpful content and sometimes produces problematic content, plus, on top of that, all of the stories being told about the dangers of deepfakes and the ways that we're potentially being manipulated through AI, whether or not some of those machine heuristics might actually evolve
00:30:21
Speaker
as our understanding and experience of it evolves. I think there's some really good evidence to support that hypothesis. Primarily that, at least in my own work with these social tech, your technophobia is something that disrupts the use of those mental shortcuts and actually makes you introduce your own, whatever you already think it is, right?
00:30:46
Speaker
And your experience, which can sometimes be related, right? So the more you know, the more you think about things differently. And, oh shoot, I had another one. I lost it. It'll come back to me. But yeah, there are some things that can disrupt, or rather not disrupt the shortcut, but disrupt the patterns that we see. Right, right, right. So certainly, as you learn more about this tech, its limitations, whether and how you like to use it,
00:31:16
Speaker
the more likely you are to be critical. And the other tricky bit is when it's high stakes. Oh, interesting. When it's high stakes, you might be a little bit more critical and less likely, perhaps, to trust it. To trust it. Yeah. Right. That makes sense.
00:31:29
Speaker
How has the media played a role in our collective understanding? So again, going back to the contextual stories that we're told, the news media does a lot of reporting about the advances in AI. So do you see those stories maybe having an effect on people's thoughts and feelings around AI? Absolutely.
00:31:53
Speaker
This is a topic I am really interested in, and that we have some scientific work on, but not a ton. I did a study a couple of years ago looking at people's recall of robot characters from print media, film media, and interactive media, and how that might help shape what their mental models are. Mental models are just our internalizations of an idea, right?
00:32:21
Speaker
And then how those things may impact how you think and feel when you see Ray upstairs, my big tall robot. Yep, right. What we found is that it wasn't actually how many you could recall. There was kind of an idea that the more diversity in what robots are that you could recall, the more open you might be to a new robot, right? But we did not find that to be the case. Rather, we found that your average sympathy for the characters you could recall would make you more likely to trust the robot, to see it as having a mind,
00:32:58
Speaker
and to see it as having some moral capacity. Oh, interesting. So, like, movies or stories where the robot or the AI has a sympathetic dimension to it, where, as a watcher or viewer, you feel sympathy, then that extends into other sorts of robot or AI interactions. And it may not be that that happens with one character, right? But if you end up taking a sympathetic orientation toward this class of beings, then that may help to shape your ideas about what is appropriate to think and feel about a machine. Right, right, right. I got it. Okay. That's interesting. So again, back to the mental model: if you have this positive feeling toward the class, then you're like, okay, the new robots should also fit within that class, so you feel the same sort of sympathy toward them.
00:33:56
Speaker
And that lines up with a lot of our media psychological theories around how different racial groups are depicted in media and how that shapes our reaction when we actually encounter someone of that group. Yeah, 100%. So thinking about the future, what are you most excited about? If I have to be appropriate about it, I'm most excited about the potential for AI to do good.
00:34:25
Speaker
Yeah, right? To do social good. If the cyberpunk fan side of me were speaking, I would say I am excited about what it would mean for us to live alongside social AI. I don't know if that will be a good thing or a bad thing, right? I'm more excited about what that could look like and how we are going to solve those problems.
00:34:54
Speaker
Oftentimes we aren't even very nice to each other, and we have rules and laws that not everybody agrees on. So as a curious person, I'm really interested in what it's going to look like when we try to sort all of this out. So you have some hope that the AI overlords are not going to come and take over society in some kind of dark Terminator future.
00:35:22
Speaker
I guess I'm not that worried about it, because AI is still kind of stupid. Maybe my opinion will change, or my fears will change, as we move forward. But I'm also hoping that groups like the iSchool here are working with our future developers and engineers to work through ways that we can do this responsibly.
00:35:49
Speaker
Right? And in a way that protects individuals' rights and well-being, so that we can leverage all of those potential benefits without necessarily creating the monsters that sci-fi tells us are probably coming. Good. So then, last question. What's your biggest piece of advice for anyone looking to study more about AI and human interactions with technology?
00:36:17
Speaker
I would say first, go play. Go put your hands on all of these technologies. I'm not a regular companion user, but I didn't realize what was potentially coming with AI companions until I downloaded a bunch of them and played around with them, right? And they're all different. I'll give you a piece of advice that I wish I had followed in my earlier days, and that is: dink around with building them so you know how they work. I am not a programmer; I am a social scientist, and I wish I were better at understanding some of the complexities of how they function. And then send people like me an email, because we are always doing work trying to understand what the human experience of this is. What are the ethical dimensions?
00:37:12
Speaker
We've got folks here in the iSchool who are trying to create culturally sensitive large language models. We have people engaging questions of how they are deployed in organizations and how they can be used to support data science work. We're attacking this from all of these different angles. And I'll tell you, I can't write fast enough, so oftentimes the more hands on deck
00:37:40
Speaker
to do this cool work, the better, and the more of it we might be able to do. So there are lots of opportunities to get involved, no matter what your level of expertise might be. That's fantastic. Jaime, it was such a pleasure to chat with you. I hope you had a good time with this podcast as well. And if anybody ever has any curiosities or interests they want to explore, I know that Jaime works very closely with a number of people within the school, especially undergraduates.
00:38:09
Speaker
You mentioned, for example, that you've got some students. There are five in my lab this semester, which is great. Absolutely. So I appreciate all of your engagement with these really, really important questions, because we do need to better understand and design for a better, more positive future. Thank you, Jaime.