Intro: Challenges and Technologies in Player Engagement
00:00:00
Speaker
Welcome to the Player Engage podcast, where we dive into the biggest challenges, technologies, trends, and best practices for creating unforgettable player experiences. Player Engage is brought to you as a collaboration between Keywords Studios and Helpshift. Here is your host, Greg Posner.
Focus on Trust and Safety in Gaming
00:00:16
Speaker
Hey, everybody. Welcome to the Player Engage podcast. I'm Greg. It's my pleasure to bring you a very special episode of the podcast today about a very significant topic in the gaming world, the role of trust and safety, with a special focus on the challenges faced by moderators, and particularly during the holiday season. Today, we're exploring the crucial role of trust and safety in gaming. We'll start by understanding what safe gaming environments entail, examining the roles of moderators, and the types of games that depend on their vital contributions.
00:00:43
Speaker
The journey will take us into the core gaming safeguard mechanisms.
AI's Role in Moderation
00:00:47
Speaker
We'll then venture behind the scenes with a day in the life of a moderator to understand their daily challenges and the balance between proactive and reactive moderation. Following this, our discussion will pivot to the role of AI in trust and safety, delving into how AI is revolutionizing moderation by managing routine issues and prioritizing critical cases.
Meet the Experts: Sharon Fisher and Chris James
00:01:06
Speaker
Today, we're joined by Sharon Fisher, a trust and safety veteran with over 15 years of experience in the gaming and social platform industries.
00:01:13
Speaker
She got her start as a moderator for the popular kids' MMO Club Penguin, which was acquired by Disney in 2007. She held several roles at Disney until 2016, when she joined Two Hat, a moderation software company later acquired by Microsoft. She founded Real Game Consulting in 2020 to help platforms create and maintain positive community interactions.
00:01:33
Speaker
Today, Sharon leads a trust and safety team at Keywords Studios, where she oversees a team of over 300 superhero moderators. The unit has established itself as an industry leader by prioritizing superhero well-being and optimizing workflows through the AI and HI approach.
00:01:49
Speaker
We're also joined by Chris James of Modulate. Chris has spent many years as an audio data specialist, and Modulate is using machine learning and AI technologies to create a safer and more inclusive voice chat experience with their tool called ToxMod. So that was a lot. I usually don't say that much, but thank you very much for joining me, everyone.
AI Solutions for Safer Voice Chats
00:02:08
Speaker
Let's jump into this. And before we do, Sharon, do you want to give a quick introduction of yourself? I mean, I don't think I can top that one, Greg. You've said it pretty much all. Hello, everyone. It is a pleasure to be here. I'm super excited to talk a little bit more about how we do it and why we do it. The more that I go around the globe, the more that I learn that not everybody knows what a moderator is and what we do behind the scenes. So I'm looking forward to connecting that with AI and Chris's expertise.
00:02:38
Speaker
Thanks. Chris? Sure. Hi, everyone. My name is Chris James. I'm the audio data specialist at Modulate. Thank you again, Greg, for that amazing intro.
00:02:45
Speaker
I work on the ground floor, studying the voice chat data that we use to develop ToxMod. Cool. And both of these are really interesting,
The Importance of Community Health
00:02:53
Speaker
right? We have Modulate, which is an actual tool that's being used, that's analyzing voice in online games, which is nuts. I guess, Chris, maybe you want to give us a high-level, big-picture idea of what Modulate's doing? Sure. Yeah. So ToxMod, as a product, is an AI solution that helps
00:03:11
Speaker
go through multiple instances of voice chat in a game and pick out only the most toxic material. So, you know, a lot of the stuff that we moderate for is stuff like hate speech, you know, stuff like harassment, things like bullying, and
00:03:26
Speaker
being able to escalate those to moderators, human moderators, so that they can take the proper action needed. It solves the old problem that Two Hat tackled as well with text, where you have to go through, you know, hundreds and hundreds of instances of voice chat to find anything toxic. But with our solution, it escalates that stuff for you and you don't have to go through everything.
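To make that triage concrete, here is a minimal sketch of severity-based escalation in Python. The score_toxicity stand-in, the clip names, and the threshold are illustrative assumptions, not Modulate's actual API.

```python
from dataclasses import dataclass

@dataclass
class VoiceClip:
    clip_id: str
    transcript: str

def score_toxicity(clip: VoiceClip) -> float:
    """Stand-in for an ML model scoring 0.0 (benign) to 1.0 (severe)."""
    cues = {"hate": 0.9, "bully": 0.7, "noob": 0.2}
    return max((w for cue, w in cues.items() if cue in clip.transcript.lower()), default=0.0)

def escalate(clips: list[VoiceClip], threshold: float = 0.6) -> list[VoiceClip]:
    """Surface only the clips worth a human moderator's time, worst first."""
    scored = [(score_toxicity(c), c) for c in clips]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored if score >= threshold]

queue = escalate([
    VoiceClip("a1", "gg everyone, nice game"),
    VoiceClip("a2", "stop bullying him"),          # context matters; a human decides
    VoiceClip("a3", "targeted hate speech here"),
])
print([c.clip_id for c in queue])  # ['a3', 'a2']; a1 never reaches a moderator
```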
00:03:47
Speaker
That's crazy to think that these tools even exist out there, monitoring both voice and text. It's easy to understand reading text, but once you start analyzing voice, and understanding the timing and latency and how it's all happening so quickly, it's kind of this mind-blowing thought that, hey, there are people behind the scenes protecting these people that are playing online.
00:04:06
Speaker
Before we get too deep into the subject, let's start high level, understanding what trust and safety actually is. And maybe we'll start with Sharon. Are you able to give us a high-level understanding of what trust and safety is in gaming?
00:04:19
Speaker
Let's go very, very high level, because I think trust and safety itself is something that continues to shape itself and find its own place within organizations. So trust and safety, the way that we define it at Keywords Studios, is the area that will take care of our gamers, but also our superheroes, making sure that content
00:04:40
Speaker
that is not aligned with the community standards of our clients is not present on their platforms, but also, if content that is unlawful happens to be within the platforms, we're able to escalate it for real-life,
00:04:56
Speaker
real-time action with the right parties. So that's one tiny piece. There's the other side, obviously: partners and technology that help us a lot to make sure that we get there at the right time. It is not about censorship or anything like that. Technology at Keywords Studios is seen as our best ally, because it just helps us
00:05:19
Speaker
focus on what matters now. What else is trust and safety? New policies are coming, and everything that has to do with privacy. So we also have to be aware of any and every policy that is happening right now and that is boiling up currently.
00:05:35
Speaker
Again, working with our partners on technology really helps us: rather than going policy by policy and trying to apply each one to every single one of our projects manually, having technology behind us really helps us get there. So, what is trust and safety? Caring for our gamers, pretty much real-life world protection through the internet, and pushing the envelope to also think about superheroes, slash moderators, while they are doing this work.
Training AI with Evolving Language and Culture
00:06:05
Speaker
Great. Chris, is there anything you want to add to that? I think Sharon covered that beautifully. The other thing I would say about it, I think, is just looking at community health on online spaces. I think Sharon mostly covered it, but definitely want to make sure that we have healthy and happy communities.
00:06:23
Speaker
It's an interesting thing that you both brought up. Sharon, you were talking about not just policing the gamers themselves, but also protecting the moderators themselves. So let's start with: what is the role of a moderator? And Chris, I don't know if you want to take that one. I don't know if I want to call out each question or if you want to just raise your hand. But Chris, you want to talk about the role of a moderator, and maybe what your role is? Sure, I can. So a bit about my role first,
00:06:50
Speaker
or at least how I started: I was one of the data labelers helping train our AI solution to know what is and isn't bad. So for example, I would have an audio clip from voice chat, and I'd have to tell, you know, inside a tool, what is and isn't bad about that clip. In terms of being a moderator, which my role has evolved into, kind of being a sit-in moderator for us behind the scenes, what I mostly do is look at actions, see if our tool has
00:07:20
Speaker
taken the correct action to escalate that thing, and also tell the people who are at the studio that we're working with, how bad is this? What should they do about it? And also, you know, help inform them on community standards and things like that in terms of
00:07:38
Speaker
you know, my particular experience being a moderator, I would say that a moderator is kind of a combination between a first responder for a situation and also a bit of a social worker, because what they have to do is figure out the right thing to do about that thing. But I'm curious also to hear what Sharon has to say about that.
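As a concrete illustration of the labeling workflow Chris describes, here is a minimal sketch of what a labeled training example might look like. The schema and label names are assumptions for illustration, not Modulate's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class LabeledClip:
    """One unit of work for a data labeler reviewing a voice chat clip."""
    clip_id: str
    duration_sec: float
    labels: list[str] = field(default_factory=list)  # e.g. ["harassment", "bullying"]
    notes: str = ""  # free-text context: sarcasm, banter among friends, etc.

example = LabeledClip(
    clip_id="clip_0042",
    duration_sec=8.5,
    labels=["bullying"],
    notes="Repeated targeting of one player; not friendly banter.",
)
print(example)
```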
00:08:02
Speaker
Yeah, Sharon, I want to hear your perspective because you are managing a team of these superheroes, right? So how do you look at the role of a moderator and how do you make sure that they're getting the right well-being and support?
The Impact of Removing Trust and Safety Teams
00:08:14
Speaker
I think that's why we are partners with Modulate, because we align a lot in our visions. But I will add to Chris's description of the role:
00:08:24
Speaker
those two are really key, but they also have to be experts on pop culture and on project-specific lingo, and we also need to look into bias and multiculturalism. So as you can see, the role becomes more and more rich. Why superheroes, and why we're upgrading them to that, is because I think for the longest time all this work has been done, but deep in the
00:08:51
Speaker
background, and right now what we're trying to do is bring to people's attention that, first of all, superheroes exist, but also the challenges that the role represents for them. I will say the role of a superhero at Keywords Studios is utilizing all of these different skills, but also looking at different signals, and that's very important for us, especially nowadays. The way that I pitch the idea of a superhero is: imagine that you are in a room
00:09:20
Speaker
surrounded by a thousand people, sometimes way more than that, and everybody's looking at you to make a decision. Nowadays, thankfully, everybody has a voice, and it doesn't matter what decision you're going to make, somebody's going to push back. It's going to be the majority, it's going to be the minority, but with that, we also have
00:09:39
Speaker
the responsibility of trying to understand as much as we can the context of the situation. That's why I'm talking about signals, and that's why, again, technology can bring us those signals at the right time, to make sure that when superheroes are making a decision,
00:09:56
Speaker
they have looked into the different pieces and made as informed a decision as possible, so that when somebody pushes back, we have the receipts, to call it somehow.
Supporting Moderators Effectively
00:10:10
Speaker
So it is not just saying yes or no to content, or just passing it along. There are so many pieces that we are trying to balance to make sure that when you come to our partners, or to Keywords Studios as the HI side of things,
00:10:26
Speaker
we are making a decision based on all of these different pieces. Something that overlapped in what you both said, and I think is a fascinating thing: Chris started off by talking about training the models, which I think is fascinating, that you're listening to clips and deciding what's good and bad. But then, Sharon, you mentioned pop culture, and that's a fascinating thought, because I hear my kids talk these days and I don't understand all the words that they're saying. And, you know, when you start talking online, A, you have the
00:10:55
Speaker
benefit, for lack of a better word, of being anonymous, so people tend to be mean-spirited when they're anonymous. But also, how do you train on all these new lingos that get born, when it seems like every few weeks a new term is out there? How does the model keep up? I guess these are two questions. The question for you, Chris, would be: how do you continue training it? And Sharon, how do you keep your team up to date on what's happening out there?
Proactive Moderation Strategies
00:11:25
Speaker
Maybe we start with Chris. Sure. So in terms of iterating on what the model can catch, new language is something where we rely partly on data labelers, but also on just straight research: going in, finding language, finding the new stuff, and training on that specific new thing. I think
00:11:47
Speaker
A lot of times when we talk about this space, with voice, it gets a little gray, right? Because especially in text, there's a lot of ways that people have figured out how to circumvent the measures that technology has helped us make. And in voice, we haven't gotten necessarily that far yet, although there are some cases.
00:12:11
Speaker
Regardless, in the meantime, we do have sort of a more complete sense of what's going to be said. I'll also call out that each platform and each format, to me, it seems to have a vernacular.
00:12:26
Speaker
So for example, the way that you say something in voice isn't going to be the same way that you type it out in text. And that's partly just because of the medium, but also because of the culture of each one. So being able to iterate on that, it just means we have to be as in touch as possible with what's going on in the data. And a lot of times that means research.
00:12:46
Speaker
And I think that's again where we close the loop from our side of things: the HI, or human intelligence. What we are betting on here is leveraging the global reach that we have as a team, because again, 350 moderators around the globe...
00:13:05
Speaker
These people are the ones that are learning all of it every day and seeing it sometimes for the first time, right? So the way that we have built this engine, it is not, again, just saying yes or no, but also learning from what we are coming
Balancing AI and Human Moderation
00:13:23
Speaker
across, and then not just learning, but sharing those learnings within the community of moderators to make sure that, number one, they're aware of what the trends are; but number two, we are also able to pass that important information, even if it's just something that is going to catch on for two days and then the internet will forget about it, to our partners in technology, to make sure that that power just multiplies,
00:13:51
Speaker
rather than us going into each of the projects and being like: okay, "covfefe," or whatever Trump tried to say at one point, add it to project one, two, three. No, we rely on our partners to help us push that further. In this case, we're talking about lingo and just internet stuff that didn't really have an impact. But what happens when this new term or new trend is actually impacting people, or inciting people to hurt themselves or hurt others?
00:14:20
Speaker
That's where time is of the essence, and that's why, again, and I know that I continue to say this, but we love technology: because we find it, we inform it, and then technology just extrapolates it, vaccinates everyone, to call it
Escalation Process in Moderation
00:14:37
Speaker
somehow. Maybe vaccination is not a good word, especially in these times, but it just spreads it over all of the different networks. And then for the next instance that it happens,
00:14:49
Speaker
our networks are already aware and they can act accordingly. Something you said that I think is interesting is the fact that you're managing 350 moderators. And from the outside looking in, that sounds like a really high number. What types of games are typically utilizing moderator type roles? Like if I'm going to go out and buy the latest Call of Duty, right? What's it protecting me from in that game? What's it not protecting me from? What should people who are listening that maybe
00:15:15
Speaker
are parents, or younger kids, understand about where the protection starts and where the protection ends? That's a great question. And I want to say, 15 years ago, the focus of protection was kids, right? The internet was really new. There was not a lot of understanding of what would happen if a kid went onto the internet.
00:15:36
Speaker
But today I can say that every single networked game that has anything to do with user-generated content, so in the form of text, video, images, or voice, needs to have moderation. It is just the standard now.
00:15:52
Speaker
Thankfully, we are finally at that point where people think about these things. And I think the degree, to your point, Greg, of how much moderation, or how many moderators even, you have in each of the projects, it totally varies. There is no perfect formula. In Call of Duty, to use your example, the moderation might be integrated first on the technology side of things, and then we're going to be looking
00:16:20
Speaker
at maybe the vocal content that will be the main concern, right? But in a game that is for underage players, our concerns are those ones times a hundred or a thousand, right? So we have to focus on, number one: what is the audience of the platform or game?
00:16:43
Speaker
That's going to dictate a lot of the strategy on what kind of moderation and even what kind of tooling we're using, right? We want to make sure that these kids are not entering the birth date of their parents just to pass the first filter, right? Because, believe me, kids are very, very resilient and they will find ways of trying to pretend that they are old enough
00:17:11
Speaker
in order to get to these platforms,
Collaboration for Effective Moderation
00:17:13
Speaker
right? So, to your point, for parents to be aware that there's moderation: yes, there's moderation. But I will say, do not assume that there's moderation in every single app that is marked as kid-friendly. That's a misconception. There is moderation from those companies that are actually investing in it, that are really aware that these challenges exist and want to protect their users from them.
00:17:41
Speaker
I love that point. I love the point that you made where any online platform should have moderation. I think that's a really, really important thing. And I think a lot of times the usefulness of moderation is questioned just because it's not something that we've really known about as a problem statement for a very long time. I think like for a very long time, the internet was very unregulated in that way.
00:18:10
Speaker
There's a lot of danger out there on the Internet.
Closing Thoughts on Trust and Safety
00:18:12
Speaker
I know there's a huge concern right now, especially in the government, with violent radicalization and, you know, things like sex trafficking and child grooming and stuff.
00:18:24
Speaker
being able to tackle those problems means to me having moderation. And it's really important, I think, for people to understand that. I would also say, too, that the problem statement of how do we protect people on specific platforms with specific audiences is a really big problem statement as well. And I think that's something that
00:18:49
Speaker
keeping in sort of tandem with the culture on those platforms is really important. We heard, a few months ago, when Elon took over Twitter, or I guess we'll refer to it as Twitter, he got rid of the whole trust and safety team. And I think some people kind of just said, okay, I don't know what that means, but
00:19:10
Speaker
either of you, I'm not sure if one of you feels more passionate, but what does it mean when a huge platform like Twitter removes that trust and safety team? Is it going to affect its users? Sharon, do you want to chime in there? It definitely will. I felt like we were literally going back 20 years in time, because that's exactly it. And I think
00:19:31
Speaker
the piece with moderation is that there is a misconception about it, right? Number one, that moderation means you are going to be taking away expressivity, that you are telling people what they're allowed to say and what they're not allowed to say. So that's challenge number one. Challenge number two, I believe that on the development side, people are not even thinking about moderation or tooling or anything like that,
00:19:59
Speaker
or the challenges that Chris just mentioned: radicalization, child abuse, any of them. If we were to design the platforms with that in mind from the beginning, it would be another story that we're facing today, right? Developers think that moderation might be just catching the F word everywhere, right? And there's so much more to it. The way that we are creating
00:20:25
Speaker
these different platforms, this engagement is so key that trust and safety steps up and reviews. And it is not for trust and safety to dictate what the process and what the project should look like. That's something that we really want to send that message. It is more for me to tell you, hey, by the way, that's a really cool feature. You're going to be uploading UGC and pictures. Have you thought that that could be actually a child pornography picture?
00:20:54
Speaker
And you would be amazed at how many people, because that's not their role, that's not what they do, they just create amazing features, amazing games, and that's what's on their mind, react like: holy shoot, no, I have never thought about it, can you please tell me more?
00:21:10
Speaker
Again, we're not trying to curb the creativity or the innovation, anything like that, but just giving them a little bit more background: if you were to do this, it might decrease the number of child abuse pictures that we see. And it's crazy to think that you can create that impact. So when trust and safety people are ushered out of the door, those are the challenges that you are opening your platform to.
00:21:39
Speaker
I remember hearing somewhere, I forgot where, that typically when you create a game with a map editor, the first thing that's done is someone will design a map in the shape of male genitalia. And it's just like, come on, guys. Is that really... We call it TTP, time to penis. As soon as you go live: how long is it going to take them to either try to draw one or to sneak the word through the filter?
00:22:06
Speaker
As far as we've come as a human race, it's like we take 10 steps backwards at certain times, and it's mind-boggling. But while we're on that, Sharon, I'd like to dig more into the day in the life of a moderator. When I first started at Keywords, I never really understood what a moderator was. It's kind of like, oh, well, it's nice that they're getting all that, but what are the real hardships and challenges? You don't understand that. So maybe we can start with Chris here, because since you are in that role: what is the day in the life of a moderator?
00:22:36
Speaker
That's a very good question. I'm very happy we're talking about it. The day in the life of a moderator
00:22:44
Speaker
runs anywhere from very dull and innocuous to completely terrifying and scary. And the reason is because a lot of times, when you pull up a moderator queue, you don't exactly know what you're going to see. Obviously we have technology to help filter those results before a moderator sees them. That's a great thing; it helps us keep them safe. And also just making sure that we have escalation pathways for different kinds of content, that's also something that's really important.
00:23:12
Speaker
But, I mean, moderators could see anything from just innocuous swearing to some of the stuff we mentioned earlier that's a lot more heinous. And it's really important to understand that. My hope for the future of moderation, at least, is that, like Sharon was saying, they are treated like superheroes, and that we understand
00:23:36
Speaker
how damaging it can be to go through that much content. I have a talk that I do at my work about the neuroscience and the psychology of how seeing toxic content on a regular basis can affect your brain, how it can make you act, how it can make you think things that you haven't thought before that are bad. And, you know, I think
00:24:00
Speaker
through kind of researching and putting that together, I've come to understand how taxing it can be, not only to be a moderator, but also what kinds of expectations are good or bad for people to have of moderators, too. So it's very much a "whatever you get when you walk in is what you get" kind of job. And it can be really, really tough for that reason. I love, I mean, I don't love, right, but,
00:24:26
Speaker
Sharon brought it up earlier and you're kind of speaking to it as well: people think it's people saying the F word or terrible words online, but it's grooming, it's trafficking. It's all the scary stuff that's happening in these games. And from your perspective, Sharon, you're overseeing a group of moderators. So what's a day in Sharon Fisher's shoes?
00:24:48
Speaker
Well, I think Chris really described the challenges. But the other side of it is that we just can't give moderators a title of superheroes and be like, yeah, you are the core of it, and thank you so much. I think there's the other layer, which, as we call it at Keywords, there's more to it. Right? So,
00:25:08
Speaker
how are we responsible for making sure that these superheroes, who are getting up every day, might see the worst of the internet, and then save lives through that, and that's why they are superheroes, what tools are we giving them? It's not just a title and a cape and good luck. We have to armor them, right? So for those challenges, you have to make sure that you just don't give your superheroes a psychologist at the end of the line, right? That's nice to have,
00:25:37
Speaker
but you should almost never get to that point if you are really giving your superheroes the right tools. What that looks like, and why Keywords is pushing the industry towards it, is that everyday care. It's really normalizing the question: how are you feeling today? We call it "my fire" here,
00:25:57
Speaker
and what it means is: how's your fire today? It's just a way to tell me how you are doing today. That gives visibility, also, for your team to know how you are doing. Somebody might reach out on Slack or Teams and be like, hey, what's up, how can I help you?
00:26:14
Speaker
Sometimes we're just coals in the morning, right? You're just rolling out of bed and you don't want to do anything, and that's human. Sometimes our fire is like a whole building burning, and we're just flaming and super excited. So I think there's a lot to understand when a moderator is doing this job, to Chris's point, about all the impact that it can have.
00:26:36
Speaker
If they are not well taken care of, I call it, you are really damaging people's souls, because that is what it is. Doing this kind of job without having support, a line of defense like technology, is something that in 2023, almost 2024, should not be happening anymore around the globe. We have technology that has made it so far already, that is protecting superheroes too. That
00:27:04
Speaker
should be something that we do not only because it is good business, but because it's the right thing to do. Are you able to dig into that a little more? Because I know you have a very, I'm going to say, hard stance at Keywords that you want to protect your superheroes, you want to make sure they have specific benefits that will
00:27:23
Speaker
give them the mental health breaks, stuff like that. How can other companies that are taking a look at this do things, or just understand what Keywords and other companies should be doing to help their superheroes?
00:27:35
Speaker
So first off, while I'm working for Keywords and obviously our superheroes, something that I continue to say is: please take the term superheroes, it is not unique to us. It's obviously not trademarked. Take it, because everybody, every moderator, needs to be shown in a light where they are aware that they need this protection. To your point, the depth of it is, number one: who are you resourcing?
00:28:04
Speaker
Do not resource only based on language skills, right? That's not enough. So the way that we're resourcing is literally homemade, and it took us probably eight months to get to a test. A test that is not only looking into language capacity and capabilities, but also: are you aware of your bias?
00:28:27
Speaker
Are you willing to receive this help? Because it doesn't matter how strong a program we have, and we have seen it everywhere and anywhere: we all have all these benefits, and when do we utilize them? Never, unless we are in a very deep, deep hole, and we are all guilty of it. So making sure that, number one, you are willing to take this help that we're going to be providing is really important to us. There are many other points when it comes to recruitment.
00:28:54
Speaker
The way that we onboard people, it is not just: these are the tools that you're going to be utilizing. We stop, we take a day and a half to just sit down and be like: okay, now that you're not under stress, tell me a little bit about what it is that makes your soul happy. And this is just kind of a back-pocket first aid kit
00:29:17
Speaker
that we're going to have there forever and ever. This is very personal. We obviously don't publish it, but we want to make sure that people are able to be prepared for when and if these kinds of cases come and they are under stress. So that's just a little piece of it. There's a lot of that training and onboarding that we do.
00:29:37
Speaker
But then again, it is not just that, and then go to work, and then there's a psychologist waiting for you at the end of the tunnel. There is the everyday care. And for the everyday care, we have gone through every single leader within the organization to make sure that they are able to support our superheroes, that they understand what to ask, how to support them.
00:30:00
Speaker
Very, very important to say: we are not giving the responsibility of a psychologist or a psychiatrist to these people. We are just asking them to ask the questions, to prompt, and to make sure that they are, again, asking "what's your fire?" every day.
00:30:15
Speaker
That's the daily. Then we have the weekly, which is the one-on-ones. There is a lot of information coming to the superheroes during the week. But there's also the monthly, where we have modules. And these modules can cover, again, multiculturalism,
00:30:34
Speaker
internet lingo, bias awareness, and we are developing this internally 100% because we want to make sure that it's our superheroes that are guiding what is needed. So Greg, I can probably go for like 700 hours on this because this is how detailed the program is, but
00:30:55
Speaker
I'm open to anyone and everyone to reach out and the more that we can protect our superheroes around the gaming industry or social media industry, we're happy to support.
00:31:07
Speaker
I love the fact that you made this one statement that says you find out what really makes them happy inside. And I feel like it's just this little thing. I imagine a bucket of, like, "I like Sour Patch Kids" right here; when I'm angry, all right, let me just take one, take a breath. I think that's super important. And the same question for you, Chris, but more curious: you are in this role. When you start to feel overwhelmed, what do you do? What helps you? And maybe if you don't want to get that personal, you don't have to. But
00:31:35
Speaker
Oh, no, I'm happy to. I actually developed a whole framework with, you know, my care team, my mental health care team to actually deal with this for that reason. You know, a lot of what Sharon was saying is kind of my own practice as well. Just checking in with, you know, what's going to motivate me today? Like, what's going to make me feel fulfilled? And then also being able to check in periodically about, you know, how am I feeling? Like, where is my emotional state at? You know, what kind of things are coming up?
00:32:07
Speaker
I think a lot of times when I walk into this role and I start to look at content, the thing I'm the most afraid of doing is disassociating. And I think a lot of times when people start to look at content like this, that's the first thing they do and that's good because they need to separate themselves from the content. It's very important that they don't feel like it's actively happening in front of them and that it's their responsibility. But at the same time, if we do that,
00:32:35
Speaker
for a very long period of time, we can end up just walking away from ourselves. And we can end up like, you know, losing that motivation, we can end up losing that sort of internal barometer of how am I feeling today, like what things are affecting me. And then all of a sudden, you know, we're three or four hours into a shift. And
00:32:55
Speaker
we're feeling completely burnt out and we don't know why. You know, we don't even have a place to put it, because when you're looking at that stuff for so long, and you're dissociating for so long, it's really hard to just come back into your body. So there's a lot of stuff that I've borrowed from some of the trauma therapists that I've had, some of the stuff I borrowed from generalized therapists, having to do with CBT work and DBT work that we've done,
00:33:21
Speaker
And then also just specific trauma-focused therapy techniques that I use to not just calm myself down, but keep myself in touch with myself and what I'm doing. Because that, for me at least, is the biggest challenge.
00:33:39
Speaker
You know, I've had a lot of experience with this, and a lot of experience with trauma in my life. So I have a lot of tools to deal with it if I see something that is traumatizing to me, or see something that affects me in a really hard way, right? But I think the thing that's really hard to realize is that the small stuff adds up. And it adds up pretty quickly if you're not careful.
00:34:02
Speaker
Yeah, that was well said. Thank you. And I can imagine. First of all, understanding what makes you tick, understanding how you can manage yourself and dissociation, knowing that it's not actively happening in front of you, I think that's really well said. Something I know we kind of spoke about is that a lot of the moderation being done is reactive, right? It happens, we take care of it. But what can be done proactively, to try and get in front of it before this even begins? Maybe Chris, you want to start with that one?
00:34:30
Speaker
Sure can, yeah. So a lot of the space that Modulate works in, at least, is a lot in the gaming space. And generally, for voice chat, the path of escalation that we've seen is a player reports a thing, the studio takes action on that thing. That is what we consider reactive moderation, right? Something that Two Hat kind of pioneered with their text chat moderation
00:34:57
Speaker
technology. But what Modulate is trying to do in the voice space a little more is proactive moderation, right, where we have technological tools to go through those voice chat instances and proactively escalate something that's happening, even if a player doesn't report it, or
00:35:16
Speaker
while it's happening in real time, so that if a moderator can get to that voice chat instance before something really, really harmful happens, we can prevent even more harm from going on. I mean, just as a technology, I think it's amazing that you're doing this almost in real time via voice, and you're able
00:35:37
Speaker
to do this. I think it's just mind-blowing. I don't know if there's a question to be asked there; it's an awesome technology. I don't know, Sharon: does your team handle proactiveness in a similar way? Proactive moderation is all that we should call moderation. After the fact, it's more like damage control and cleanup, to be honest. So what we are always aiming for is to make sure that proactiveness happens. And it goes, again,
00:36:04
Speaker
from the design phase, even from the ideation phase. That's what we're calling now responsible moderation, right? Where it's like: okay, let's ask these questions before you go to town and add all of these features, and make sure that when you add them, you have these pieces in the back of your mind.
00:36:21
Speaker
Then let's make sure that we create, through the terms of use (which nobody reads, breaking news), an understanding, as a company, of what it is that we want this community to look like.
00:36:37
Speaker
Where are we drawing the line? And that's where, to your earlier question, Greg, what does it look like? Who knows. It depends, obviously, on the title, on the audience, all of those pieces. We want to make sure that people are able to say "it's effing awesome, effing love this game." That's totally fine.
00:36:56
Speaker
You should never be able to say "go and hurt all the Mexicans" and stuff like that, right? So we work to make sure that those kinds of cases are not even present. Why are we going to be worrying about banning and muting and all of this slap-on-the-hand stuff if we can prevent those ones from happening, right?
00:37:18
Speaker
I think the other part of proactive moderation that we're trying to do moving forward is to not focus so much on the negativity of it, right? Because from studies that are out there, we can see that there are four to six percent of users that are actually toxic users, right? So in focusing just on them, rather than the other 94 to 96 percent, we are losing an opportunity of
00:37:46
Speaker
actually creating communities that are more engaged. So it almost feels like, when it comes to moderation, those things, like the "the Mexicans" example, should come out of the box already. There's no question that that's wrong, so those should come out of the box. And let's focus now on this guy or this woman that is actually saying something nice online, like, hey,
00:38:13
Speaker
nice meeting you, welcome to the game, or things like that. I'm giving very simple, top-of-mind ideas. But when it comes to preventive moderation, there are so many tools that we can really utilize prior to even going live. Because you're going to spend, what,
00:38:33
Speaker
if you're lucky, a year to five years to focus on your game, developing all this cool art, all of the love and care that goes into putting this together, and then at the end it's like: oh, I'm not going to play it because it's a toxic game. Let's take care of this from the beginning. And that's why pre-moderation, for us, is literally what can dictate the success or not of the project.
00:39:00
Speaker
I like how you're talking about things that should just come out of the box, things that should be standard across the board. And I think, along those lines, AI is going to be able to help with some of this, right? Maybe it can help provide prevention in certain cases, but it could also provide some relief to the
00:39:19
Speaker
moderators themselves, based on what type of content is coming in. So I'd like to talk about how AI is affecting the day-to-day, if you're seeing it yet, or what things might be on the horizon that excite you. So Chris, from your perspective, I mean, you're doing such cool stuff with voice and technology, so I'm sure AI is already involved. But how is AI evolving, in your mind, to help provide a better
00:39:44
Speaker
future for moderators? That's a great question, Greg. There are definitely a lot of strides being made in AI right now, and I think there are equally as many strides being made in the view of AI that we have on the outside. Just being an internal person who does this, I think the coolest thing that I see AI doing right now is giving us
00:40:13
Speaker
confidence in large-scale systems and communities that we could not have if we did not have it. I think something that we're focusing on a lot at Modulate
00:40:29
Speaker
is just how to do this at a scale, right? How do we leverage people as much as we can and how do we take the burden off of people so that we can make sure that AI is either in front of the more obviously bad things like hate speech and rampant sort of racism and other things like that.
00:40:54
Speaker
But also, how do we make sure that we are escalating things like violent radicalization, and
00:41:04
Speaker
escalating things like child grooming, because the most exciting thing about doing this at scale is that that 0.05% of data that contains that content is something we can find now instead of just having to ignore it or having to look so long for it. And I think the fact that we can start to protect communities that way once we start to develop
00:41:31
Speaker
you know, different models for that sort of thing. That's probably the greatest role that AI has: just making the process of sorting through online spaces extremely simple and, you know, fast too. Yeah. Speed, right. Efficiency, being able to get in front of it before it spreads. Sharon, obviously you're at Keywords, where we have tons of technology, but how are you looking at utilizing technology to help as well?
00:42:01
Speaker
Sorry, could you repeat that? How am I looking at AI to help assist the moderators? It's the core of it, seriously. I think we have gone through this, and I like to paint the picture of how we came to be with AI and HI. We started moderation back in the day with just humans doing it, right?
00:42:26
Speaker
It was not artificial intelligence back at the time. It was just looking into the patterns and the words: let's block the words. For me, for example, at Club Penguin, it was snowballs, right? The penguins would throw snowballs.
00:42:41
Speaker
But if the chat filter was too tight, then you couldn't say "snowballs," because the second part of the word was a sexual word, right? So we had to find a balance and understand that. Then, once we got really good at it, the decision was like: oh yeah, now automation is going to take care of all of it, and forget humans, because now we want to reduce costs.
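The snowballs story is the classic Scunthorpe problem. Here is a minimal sketch, with an illustrative blocklist entry, of how an over-tight substring filter flags innocent words and how a word-boundary match loosens it.

```python
import re

BLOCKED_WORDS = {"balls"}  # illustrative blocklist entry

def naive_filter(message: str) -> bool:
    """Too tight: flags 'snowballs' because it contains a blocked substring."""
    return any(word in message.lower() for word in BLOCKED_WORDS)

def word_boundary_filter(message: str) -> bool:
    """Looser: matches blocked terms only as whole words."""
    return any(re.search(rf"\b{re.escape(word)}\b", message.lower())
               for word in BLOCKED_WORDS)

assert naive_filter("let's throw snowballs!") is True           # false positive
assert word_boundary_filter("let's throw snowballs!") is False  # fixed
```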
00:43:02
Speaker
Really quickly we realized that that's not scalable, obviously, but also languages and nuances and all of those pieces. Like: oh wait, we might just need some kind of automation and humans. And then we came into the era of "yeah, but the technology didn't tell me," "well, the humans didn't tell me," just pointing fingers, because we were all fighting for our place within moderation.
00:43:28
Speaker
And I think today we're finally at a point where we understand that we all bring value and that we both need each other, right? So again, when we are talking about gaming, which is our main focus at Keywords, the sheer volume of content that we're seeing is nothing that we have ever seen. Like I will say like the last five years it has like
00:43:50
Speaker
tripled, just because of the circumstances and everybody being at home and stuff. So there is still no understanding, from my end, of why you would not utilize technology to help you sort through all of these pieces, right? So AI becomes even more interesting to us because of how quickly we can switch things, how quickly we can protect people, how quickly we can even...
00:44:17
Speaker
If we start looking into a trend, rather than having my superheroes thinking "how else could they say this?", well, guess what? There are a billion ways that you can misspell or asterisk out the F word, if not more, right? But when you give it to AI, it's just right there. We're not going to spend that time, because I did spend that time back in the day,
00:44:39
Speaker
like three months trying to figure out what first and last names could be utilized as a bad thing too, right? Let's not put our effort there, because there is technology now. Never mind the fact that it is protecting people around the globe, right? So the benefits that we see with AI
00:45:00
Speaker
are greater than anything else. And I think the hurdle that we continue to see is people thinking that AI is making the decision, making the call on what goes and what doesn't. I think that's another misconception that we're trying to push back on. When I say this has to come out of the box,
00:45:24
Speaker
I'm not talking about me deciding what goes and what doesn't go in the community. I'm talking about the overall understanding that somebody saying "I'm looking for a five-year-old girl virgin" is not okay to be in chat, right? Those kinds of pieces, that's what I'm talking about, that AI should already be able to protect against to begin with.
00:45:47
Speaker
So I have kind of a double question here. You keep mentioning, Sharon, the AI and HI, which is human intelligence. And my question is: as AI has been making more of a presence in this space, have you seen the human side go down because it's less necessary? And then the other question, which I wanted to ask earlier, and which kind of
00:46:09
Speaker
goes with this, is: where does the limit of what a moderator can do end? What happens when you find that case that has to be escalated? Who does it go to? Where does your authority end? Sharon, I don't know if you want to start that one. Okay, so I think that's another fear, right? Is AI going to take over my HI job? And I think, again, I just see AI as one of the pieces that makes our superheroes stronger.
00:46:39
Speaker
I don't believe that that's the case. Now, to be totally transparent, of course having technology will decrease the number of cases that a human is going to have to view. That is the point. But what we're trying to do with this is: yes, the cases are lower, but how are you going to utilize that time now, moderator?
00:47:01
Speaker
Instead of slapping the hands of everyone, what about making the community stronger, being more engaged with them, bringing all of that community intelligence to you and to the company and the team, right? Because we now have more time to focus on the positive. So that's the first question. The second question, I have the answer, but I don't remember the question. So please, Greg.
00:47:26
Speaker
Yeah, no. The second question is: when something does get escalated to a human, what can the human do? What's the escalation process from there? I think this is also very unique to Keywords. Again, total transparency: when I joined the organization, one of the pieces that I saw is that there was a very big gap between the number of projects and the number of real-life escalations. Once I did a little bit of investigation,
00:47:56
Speaker
I saw the challenge there was education: understanding what a real-life threat case is, but also what the process of escalating it is. What we do today at Keywords is, once you find these cases, and again, most of them are brought up by technology, right, because we don't have people just reviewing horrible pictures. Technology, in the case of pictures, for example, will take care of the ponies and rainbows. It will take care of pornography.
00:48:26
Speaker
But then the middle, the gray area: is this a really small bathing suit, for example? Those sorts of pieces. If the system or the technology is telling you this is a real-life threat case, or within chat or voice you are starting to pick up on all these signals, what we do is what I call the superhero wrap-up, where it's like: give us all the information, all the data that you have available, to make sure that we have something put together.
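As a sketch of that picture triage, here is a tiny confidence-band router: technology clears the obvious cases at both ends and humans get the gray area. The classifier score, thresholds, and outcome names are hypothetical stand-ins, not Keywords' actual tooling.

```python
def triage(harm_score: float) -> str:
    """harm_score: hypothetical model output, 0.0 (ponies and rainbows) to 1.0."""
    if harm_score < 0.2:
        return "auto_approve"   # clearly benign, no human needed
    if harm_score > 0.9:
        return "auto_reject"    # clearly violating, actioned immediately
    return "human_review"       # gray area: the small-bathing-suit cases

for score in (0.05, 0.55, 0.97):
    print(score, "->", triage(score))
```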
00:48:55
Speaker
Then it goes to a specialized real-life threat team that we have at Keywords that has been trained on the different topics to make sure that they are able to add more into this wrapping and usually get back to the superheroes and their leadership to ask for more information.
00:49:15
Speaker
And I could talk a little bit more about the challenge of sharing information and all of that; you can imagine it already. But once we have this... we have become so good at Keywords Studios that it was actually the FBI, like five months ago, that reached out to us and said: hey, what are you doing? Because these cases are really well put together,
00:49:38
Speaker
giving us really their trust, and I say it really humbly. Now we have a direct line with the FBI. In the past, what we used to do is: okay, I don't know, Oakland, California, I have this case, can you please help us? No, we can't help you, because you are not in Oakland, California.
00:49:59
Speaker
What's an IP? No, this is just happening on the internet, we cannot support that. So the gap of action was really big, and as you can imagine it was very frustrating, because gaming companies are paying this money, we are putting our superheroes in the middle to try to find it, we find it, we have all the information, and then...
00:50:18
Speaker
Nowadays, thankfully, again, because of the way that we have been doing things, we have that direct line, which doesn't guarantee that it will be action on, but it does guarantee that the cases will be looked into, which is
00:50:32
Speaker
mind-blowing for me, and we are very, very proud of it. We actually can sleep at night knowing that the case at least has been seen. Because when you go through this process, even for the superheroes going through it, knowing that somebody is at risk,
00:50:49
Speaker
it is really unsettling to not know if somebody is going to do something about it. So that's also something that really helps our superheroes close the circle and be like: okay, done. Obviously, we don't get a lot of feedback on what they do, if they do it, or any of it. But at least we have that peace of mind that somebody, a professional in the law enforcement area, has looked into those cases.
00:51:16
Speaker
It's insane and awesome at the same time knowing that that's your escalation path and just knowing that that's a possibility when you're protecting people online. That's a relief to know. Chris, same two questions. I'll say them again.
00:51:31
Speaker
So: has AI helped reduce the number of cases, or the workload for a moderator, making sure that they're only looking at the urgent stuff? And then, what happens when that stuff comes through, and how is it escalated? Great questions again. So, kind of similar to what Sharon said with this, AI has definitely helped in that area a lot.
00:51:54
Speaker
I think since the basis of our product is basically making sure we can escalate the right things so that moderators can do it, I think it's pretty evident to me that that's happening. One thing I will say is that the thing that studios are leveraging a lot is automation. And when I say automation, it mostly means things that are so bad
00:52:21
Speaker
and so obviously bad that they can just be actioned on in a certain way. For example, if someone has a code of conduct in their game that says "we do not allow hate speech," that person can be actioned on in real time if they use a hate speech term. And, you know, like Sharon was saying, we do have
00:52:46
Speaker
the nuance to know, to some degree, whether this is reclaimed language or this is hate speech; and also what is allowed in a given community, and how certain words can sometimes be hate speech but sometimes be a totally normal thing. Those are things that we can look into. With that tool, though, and since we've seen studios leverage automation a lot, a lot of times we find that trust and safety budgets in companies like this are not very large.
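To sketch what that code-of-conduct automation can look like: unambiguous, high-confidence violations get an automated real-time action, and anything ambiguous goes to a human. The category names, threshold, and actions below are assumptions, not ToxMod's actual configuration.

```python
AUTOMATED_ACTIONS = {
    "hate_speech": "realtime_mute",             # banned outright by the code of conduct
    "blanket_banned_language": "realtime_mute",
}

def route(category: str, model_confidence: float, action_threshold: float = 0.95) -> str:
    """Automate only high-confidence, clearly defined violations."""
    if category in AUTOMATED_ACTIONS and model_confidence >= action_threshold:
        return AUTOMATED_ACTIONS[category]
    # Reclaimed language, banter among friends, etc. need human judgment.
    return "escalate_to_moderator"

print(route("hate_speech", 0.99))        # realtime_mute
print(route("hate_speech", 0.70))        # escalate_to_moderator
print(route("possible_bullying", 0.99))  # escalate_to_moderator
```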
00:53:15
Speaker
For that reason, it's about maximizing the amount of good that a moderator can do in a short amount of time. And I think that, to me, is the biggest difference: all of a sudden we go from a moderator not really having the tools and having to look through a whole bunch of stuff, to
00:53:30
Speaker
having the tools and seeing the right stuff. For your other question, how that stuff gets escalated: with the small, and I don't want to say smaller because it still hurts, but with the less severe cases, things like swears and personal insults and other things,
00:53:48
Speaker
we tend to escalate to moderators normally. Or if someone has a blanket ban on specific language, we can automate that as well, obviously. But really, the value prop for a lot of that stuff is in that gray area Sharon was talking about, where we're not sure if this person is using a swear
00:54:16
Speaker
among friends or if they're trying to perpetrate bullying. Having a human look at that stuff is a lot of times the only way that you can tell. And since models obviously have error modes, it can be easy to put full trust in that stuff, when really we should always be refining and retraining, just making this technology better so that it can be more accurate. That being said,
00:54:45
Speaker
for escalation, in terms of the more severe stuff where law enforcement has to get involved: Modulate certainly has a system for doing that. When we were building our user scoring categories, that was one of the things that we had to sort of make on the fly. We try our best to partner with people like NCMEC
00:55:05
Speaker
and with the ADL and with other resources that just help us not only know where to put it, but also help us escalate those cases to the right authorities. I think a lot of times when you're dealing with law enforcement,
00:55:22
Speaker
Like Sharon was saying, a lot of times there's sort of a barrier to action that you have to pass, where you need just enough information to be able to say: okay, this is definitely this amount of bad, or we are this confident that this sort of thing is happening. And passing that barrier can sometimes be difficult.
00:55:40
Speaker
But, you know, working with some of the nonprofits out there can make it easier. And also, with the fact that we have AI, and we have just that backlog of data to look through, to
00:55:54
Speaker
be able to investigate and be able to say this is, you know, more than likely a perpetration of something really bad versus like this is more likely innocuous. I think the fact that we have that confidence also really helps with that escalation path as well.
00:56:10
Speaker
For us, it's mostly like if you find it, we have to go through the studio just because they're the ones who are going to be able to escalate that stuff properly. And they're the ones who have the PII that we do not have because we anonymize everything with our tool just to protect people's privacy. So they're the ones mostly who are going to be able to take care of something like that. Yeah, which makes sense, right? You're providing the service for the studio. The studio is the one that has to
00:56:37
Speaker
act on it. You can lead them to water, as they say; it's up to them to decide what to do from there. Exactly. And I also wanted to highlight the importance, in these cases specifically, that time is of the essence, right? When you think about somebody threatening to kill themselves, or hurt others, or hurt themselves, that's something that cannot go in a queue
00:57:02
Speaker
and wait until somebody goes and sees it because the SLA is 12 hours, right? That's something where we do not have the luxury of time. So that's, again, why our technology helps us so much on those matters, right? It is very different, and I'm not saying they are not as important, but if somebody is
00:57:24
Speaker
writing racist slurs, it is very different from "really, we need to save lives here." That's what we're talking about at that level. So time, with technology, is something that people really need to think about. It's life or death, as it sounds.
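To make the queueing point concrete, here is a minimal sketch of severity-ordered review, where a real-life threat jumps ahead of everything already waiting in the queue. The severity levels and case names are invented for illustration.

```python
import heapq

SEVERITY = {"real_life_threat": 0, "slur": 1, "spam": 2}  # lower = reviewed sooner

queue: list[tuple[int, int, str]] = []
for arrival, (kind, case) in enumerate([
    ("spam", "case-101"),
    ("slur", "case-102"),
    ("real_life_threat", "case-103"),  # arrives last, must be seen first
]):
    # arrival order breaks ties so equally severe cases stay first-in, first-out
    heapq.heappush(queue, (SEVERITY[kind], arrival, case))

while queue:
    _, _, case = heapq.heappop(queue)
    print("review", case)  # case-103 first, despite arriving last
```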
00:57:46
Speaker
It's a great point to bring up. We've been here for about an hour, and this has been a really educational podcast for me, and I appreciate the two of you for coming on. Let's recap what we looked at, right? We got an understanding of what the trust and safety role is. Why does it exist? Who is it protecting? How is it protecting us, and from what? Which, I think, is an amazing thing to understand, right? It's not just the F word, it's not just derogatory terms. It's trafficking, it's grooming, it's all this other scary stuff. The day in the life of a moderator, right? It's not an easy job. From the outside looking in,
00:58:16
Speaker
You might think they're a customer support rep that's just taking a look at bad pictures, but there's a lot that goes into it. And I appreciate both of you for sharing your stories about that. And I appreciate what each of your companies are doing and the message you're trying to spread on how to make sure that mental health and how to make sure you protect
00:58:32
Speaker
these individuals who are in these roles, and how AI can help. Understanding how the escalation path works, how quickly you can start to act; as you just said, Sharon, time makes a big difference. How quickly can you react? I think that's all important stuff. I think this is fantastic. And this was, again, just a really great learning experience for me. The most important thing I learned, you said it in the beginning, Sharon, and Chris, you alluded to it as well:
00:58:56
Speaker
We are all a team, whether it be Keywords or Modulate or 5CA or other competitors, right? Everyone in this role, we want to make a difference. It's for the greater good of all of mankind together. So it's important to create these partnerships, to make sure that you can understand and share the technology, and how everyone can be a superhero.
00:59:16
Speaker
So I appreciate both of you again, and I'll give you some final words before we sign off. So Chris, thank you for jumping on today. Is there anything you'd like to share and let us know how we can find you?
00:59:27
Speaker
Thank you. Yeah. So I'm just Chris James on LinkedIn and on Instagram. And I just want to say thank you for having us on. It's been great talking with both of you. I think this is a really, really important issue, and I'm happy that we have this platform to discuss it. And I'm happy we have Sharon here as well, because she has just so much experience. But
00:59:50
Speaker
also, just thank you, Greg, for hosting and keeping us here. Really happy to talk about this stuff. Thank you. Sharon? Just Sharon Fisher on LinkedIn. You don't want to see my social media; that's personal. But trust and safety, here. I think the message that I want to leave you with is: moderation is not the villain
01:00:18
Speaker
of this game, literally, but it is also not the savior of all of it. So when you are looking into what your kids are going to play, what you're going to allow into your house or not, you really have to... personally, I have a 9- and a 13-year-old. The 13-year-old is not too happy with me right now, because he's not on every single social media platform that everybody else is. But a rule of thumb for me is: I need to
01:00:45
Speaker
play the game that he wants to download. I need to look into the advertisement. I need to look into the mechanics of the game: can they talk to each other and stuff? Do that first as a parent, and I'm talking obviously on the personal side of things. But even if you are doing it for yourself, looking into that first to protect yourself will be the first layer.
01:01:08
Speaker
And then, hopefully, you can do a little bit more digging into: is this app or game actually moderated? Is there actual technology that will protect my kid, or myself, from this toxic content? That's the holiday message that I give while you are
01:01:26
Speaker
trying to figure out what you're buying for your kids. Other than that: superheroes, again, around the globe should be seen for who they are, but also supported, so they can continue to protect us and the real-life world from all of the challenges that we just talked about. And I'm also very grateful to have Chris with me, because I think that now you can tell why Modulate and Keywords Studios are such good partners.
01:01:55
Speaker
Yeah, I think it was really informational. I think that's a great point, Sharon. For anyone out there buying a system, getting new games for the holiday season: have fun, know that there are superheroes in the background looking out to make sure you're safe. But at the end of the day, it also comes down to you, or your parents making sure they are looking at the content you're playing. Because no matter what's happening in Call of Duty, if you're too young, you probably shouldn't be seeing what's happening in Call of Duty, period.
01:02:22
Speaker
Great stuff by everyone. Thank you everyone for listening today. I hope everyone has a great holiday season and I hope you have a great rest of your day.