
Targeted by Design: The Dark Side of Online Advertising, A Conversation With Sarah Ralston, The Media Trust and Proxyware

S1 E40 · Scam Rangers

In this powerful episode of Scam Rangers, host Ayelet Biger-Levin speaks with Sarah Ralston, online safety expert, fraud investigator, and mother of five, about the hidden dangers lurking in online ads.

We dive deep into how scam ads are hyper-targeted using the same ad tech that powers legitimate marketing, exploiting trust, grief, and even childhood curiosity. From cloaking techniques that evade detection to the heartbreaking targeting of seniors on obituary pages and kids in school, Sarah uncovers the shocking truths behind today’s online ecosystem.

We discuss:

  • The psychological manipulation baked into scam ads
  • Why 20% of programmatic ads may be scams
  • How bad actors use ad tech better than most marketers
  • Cloaking and how it defeats traditional ad review systems
  • Real-world stories of targeted exploitation—including children and grieving seniors
  • What businesses, schools, and governments can do now
  • Where hope lies: data sharing, regulation, and collaboration

 If you’ve ever wondered how safe your online experience really is—or how to protect others—this episode is a must-listen.

Sarah Ralston: https://www.linkedin.com/in/sarah-ralston-businessops/

Additional Resources:

Subscribe to Scam Rangers for more behind-the-scenes looks at the human side of scams—and the people fighting back.

This podcast is hosted by Ayelet Biger-Levin, who has spent the last 15 years building technology to help financial institutions authenticate their customers and identify fraud. She believes that when it comes to scams, the story starts well before the transaction. Ayelet created this podcast to talk about the human side of scams and to learn from those dedicated to advocating for scam victims and taking action against fraud.  Be sure to follow Ayelet on LinkedIn: https://www.linkedin.com/in/ayelet-biger-levin/   

More from RangersAI: https://www.linkedin.com/company/rangersai/  Learn More: https://www.rangersai.com/

Transcript

Scams targeting seniors and children

00:00:00
Speaker
Seniors are attacked on obituary pages. The obituary pages are always one of those where you think: really? This older person is looking at this page for a reason, in the midst of likely grief, and you're attacking them with malicious scams.
00:00:21
Speaker
With older adults, it's very large print buttons that deceive them into downloading a printer driver, which is actually malware.
00:00:32
Speaker
In schools, we see kids targeted with drugs such as fentanyl and opiates. We see vaping and e-cigarette content, alcohol, sexual content.
00:00:44
Speaker
All of this is targeted to kids without targeting.

Google's anti-scam efforts and ad fraud

00:00:49
Speaker
According to Google's ad safety report, 5.1 billion bad ads were stopped in 2024.
00:00:57
Speaker
And although the majority of these ads involve abuse of the ad network or trademark infringement, many of these ads are also scams: 890,000 for counterfeit goods, 8.8 million for enabling dishonest behavior, 10.9 million for alcohol, 42 million for inappropriate content, and the list goes on and on with adult content, gambling and games, misrepresentation, impersonation, and others.
00:01:29
Speaker
Malicious ads are luring individuals into tech support scams, cryptocurrency investment scams, extortion scams, and others. And the question becomes: how do we identify and stop these ads?
00:01:49
Speaker
Scam Rangers, a podcast about the human side of fraud and the people who are on a mission to protect us. I'm your host, Ayelet Biger-Levin, and I'm passionate about driving awareness and solving this problem.
00:02:07
Speaker
The Israel Internet Association recently published a report about ad scams in Israel: a new wave of algorithmic, AI-driven financial scams is targeting users via sponsored ads on social media.

Interview with Sarah Ralston on ad-driven scams

00:02:19
Speaker
These scams use deepfake videos, targeted distribution, and personalized content to manipulate victims into revealing sensitive data or transferring money. Our scam ranger today is Sarah Ralston.
00:02:33
Speaker
Sarah has a background at Microsoft and Reddit, and she has a really interesting beginning of her journey, which she'll share with us in just a bit. Today, she's a chief product officer at Proxyware and also a director of privacy and risk solutions at the Media Trust.
00:02:53
Speaker
Sarah is also going to be a panelist on the Global Anti-Scam Alliance's tech support anatomy-of-a-scam webinar, which is going to take place on August 20.
00:03:04
Speaker
And I'll put a link in the show notes if you want to register. Hi, Sarah. It's great to have you on the podcast. Welcome. Hi, thanks for having me. So we're going to talk about something that we really haven't talked about before on Scam Rangers, and that's the whole world of scams that are fueled by ads.
00:03:27
Speaker
That's something we haven't really dived into: how it happens, the anatomy of the scam from the point-of-contact perspective. So I'm really excited to dive into that with you today.
00:03:39
Speaker
But before we get into that, I really wanted to ask you about your background, which I found very fascinating, and how you ended up here. What drew you into the space of online safety and risk management?
00:03:52
Speaker
Tell us a little bit about your journey. My journey started in a very unconventional way. I've always been really interested in law enforcement and criminal justice, and that's what I went to college for.
00:04:06
Speaker
But I decided that college wasn't for me, and I joined the military. Through my military service, I got into the Naval Criminal Investigative Service, where I was able to see a whole new world of what was happening, especially when it's happening among the people who volunteer to protect your nation.
00:04:27
Speaker
It's a different level and a different perspective. And after the military, I just needed a change, so I went to work at a marketing agency. And that was really pivotal in my career journey because it taught me the psychology of marketing,
00:04:46
Speaker
which later became foundational to how I combined those two very disparate career paths into one as a fraud investigator at Microsoft.
00:04:58
Speaker
And from there, it just grew and changed as I did. I wanted to learn more about everything. That's my personality: I like to take things apart and see how they work.
00:05:10
Speaker
And so in business, that meant working a problem from every different perspective.

Psychological tactics in scams

00:05:16
Speaker
And I also learned at Microsoft that there's very little that a company can do from inside the corporate bounds to make the internet a safer place.
00:05:31
Speaker
So that led to my role here at Proxyware and The Media Trust. I want to make the world a safer place. I want to make the internet a safer place.
00:05:44
Speaker
I have five sons ranging from nine to 17. And the world that they have grown up with online is much scarier today than it was when I first got my cell phone in high school.
00:05:59
Speaker
And it's only going to get better if we collaborate across the industry to tackle this multi-channel, multi-company problem.
00:06:12
Speaker
Absolutely. And I think when you have that personal aspect, it's not only wanting to make the world a safer place; it's very personal, right? And I think that amplifies it.
00:06:23
Speaker
And you mentioned psychology earlier, and I think that's a really interesting point. To me, being in the cybersecurity world for so many years, it's always about the hacking and the coding and the backdoors and the protections and the risk indicators.
00:06:36
Speaker
I think psychology has become a really pivotal component when it comes to human-related crime: attacking the human and manipulating with psychology. And I see, all across financial institutions and other organizations, the emergence of how important it is to involve people, understand people,
00:06:58
Speaker
and how they act and what they respond to. Absolutely. If you think about some of the most prevalent scams that exist today, let's take tech support scams.
00:07:10
Speaker
They're big red screens that take over your browser. There are loud alarms and noises. They are designed to create fear.
00:07:21
Speaker
That is a psychology tactic. Or think of large print buttons that are green: for anyone of driving age, green means good or go. And so there are little things like that in all of the scam dynamics that we may not actually notice at the time, because it's hijacking your brain.
00:07:47
Speaker
It's making you do something that maybe you don't want to by creating urgency, scarcity, fear, emotional manipulation. Bad actors are very good at understanding what makes you do the things that they want.
00:08:04
Speaker
And so the psychology aspect of the online scam world is critical to understand. And I think you mentioned something really important that we also didn't dive into so much.

Challenges in adopting anti-scam technologies

00:08:16
Speaker
We talk often about the language: what are the levers that they pull to use fear or delight, like an opportunity of a cryptocurrency investment? But you talked about the green button, and we'll dive a lot more into those specific components. Yes.
00:08:32
Speaker
Going back to an email that I saw and use often in my presentations: it's like a receipt for something that you didn't buy, or "thank you so much; if you didn't do it, call this number." That number, in font size and bold, is huge. That's what their end game is: to have you call that number.
00:08:52
Speaker
So everything else is just a distraction, and that's why the number is in bold. So I wanted to ask you: you've seen online threats from multiple angles, malware, privacy, ad tech. What has stayed the same, and what has fundamentally changed over the last few years?
00:09:13
Speaker
The biggest thing that has stayed the same is the amount of tech debt that the industry has. Bad actors are famously early adopters of technology. And as they adopt the newest trends, AI is a really big example of that.
00:09:32
Speaker
The industry that fights back against scams is much slower to adopt these things. So we're always chasing from behind.
00:09:44
Speaker
There's also a lack of funding, I would say, in the online safety arena. There's a lot of money that's spent on cybersecurity, but cyber protects networks and systems, not people.
00:10:02
Speaker
And so while we're trying to figure out how to protect this device you're on, people are more complicated than that. My Mac and my phone are connected.
00:10:14
Speaker
I move interchangeably, connecting to Wi-Fi at the grocery store, my kids' school, my house, my office, right? So if you're focused on one aspect, a computer or a network, then you're missing the most critical component, which is the human.
00:10:33
Speaker
And that's why scams are as successful as they are.

Scams in advertising networks

00:10:36
Speaker
They're multi-channel. They are prevalent no matter what device you're on. And so the amount of protection you have depends on what device you're on and what network you're connected to, while bad actors exploit those pieces in the background.
00:10:55
Speaker
And unfortunately, bad actors now have much more advanced technology to use, like AI. Exactly. Everything from text to voice cloning to deepfakes,
00:11:06
Speaker
all of those capabilities, but they're all targeted at what you said, really the human aspect, where it's not just tools and systems anymore. So let's switch to talk a little bit more about your focus. And we haven't really talked about your role today, but we'll get there.
00:11:23
Speaker
But from an industry perspective, since you're really involved in both research and what's going on in the industry: what proportion of scams comes from ads? The latest research I've seen is somewhere around 70%.
00:11:38
Speaker
And again, if you think about this idea of networks and systems, ads are served on trusted brand pages. So a DNS blocklist for a network is not going to block Google or Bing from serving on the network. It would be impossible.
00:12:00
Speaker
So let's say I go to a trusted website, like you said, Amazon, or maybe a news page; there are so many out there. I search for something, go to a trusted page.
00:12:11
Speaker
So for those ads that pop up, what I'm hearing you say is that the URL or the DNS of that trusted site is what we will see when we look for risk.
00:12:23
Speaker
Exactly. And because bad actors use the brand trust of legitimate sites, they serve ads on those sites, and people feel like they're protected because they trust whatever news site or search engine is delivering them this content.
00:12:46
Speaker
20% of programmatic ads, which are served in the banner or at the side of a webpage when it loads, are estimated to be scams. 20%. Wow. That is a gigantic number.
00:12:58
Speaker
And that doesn't mean every person falls victim to it. But when 20% of what you see in ads are scams, it makes you look at the ecosystem a whole lot differently.
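The DNS-blocklist gap described here can be made concrete with a toy sketch: a network-level filter judges only the domain being requested, so a trusted page sails through even when an ad slot on it leads somewhere malicious. All domain names below are invented for the example.

```python
# Toy illustration of why network blocklists miss malicious ads: the filter
# sees only the domain being requested, not the ad's eventual landing page.
# All domains here are made up.

from urllib.parse import urlparse

NETWORK_BLOCKLIST = {"fake-printer-driver.example"}

def blocked_by_dns_filter(url: str) -> bool:
    """A network-level filter can only judge the hostname being requested."""
    return urlparse(url).hostname in NETWORK_BLOCKLIST

# The user loads a trusted news page; the filter lets it through...
trusted_page = "https://news.example/article"
# ...while the scam lives at the ad's landing page, which the filter never
# evaluates until the victim has already clicked.
ad_landing = "https://fake-printer-driver.example/download"
```

Even with the scam domain on the blocklist, the page that actually serves the ad is never flagged, which is the gap Sarah describes.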
00:13:10
Speaker
So what types of scams are out there and most prevalent? I definitely know about tech support scam pop-ups, but those are maybe less ad-driven. So let's talk about what types of scams are out there. Definitely shopping scams, I would presume, because they're a big chunk of that.
00:13:26
Speaker
Malware is actually the number one scam that we detect at Proxyware, which is interesting because the FBI's IC3 center, where you report scams, saw only 45 malware cases in the entirety of 2024 from adults that are 60 or older.
00:13:52
Speaker
And we detect thousands of them per month. So what this tells you is that there is a technology gap that is leading to underreporting.
00:14:04
Speaker
And again, you've got bad actors using AI and other things that allow them to take over a computer and nobody knows about it. So it's not reported.
00:14:15
Speaker
So it's also not talked about with the prevalence that we see it. Second to that, tech support scams are definitely a huge factor. They grew exponentially from 2023 to 2024 in the IC3 data.
00:14:33
Speaker
And financially, investment scams, and you could probably include romance scams in this kind of bucket of scams where the incentive is to get money from a person. So whether it's "invest your life savings" or it's "I love you and my car broke down" or "I can't pay my mortgage," the end goal is the same.
00:14:55
Speaker
And those are the most financially damaging scams, although the prevalence is much lower.

Accountability in ad content and tech companies

00:15:01
Speaker
Right. And I would say, correct me if I'm wrong, but tech support scams, shopping scams, cryptocurrency scams, and investment scams in general could start with ads.
00:15:13
Speaker
Romance scams too? Yes. Even sexual exploitation starts with ads. There was a study done in 2022 that showed that 70% of sexual exploitation that started on the internet began with an ad.
00:15:32
Speaker
So think about adult content ads that push you to sites that are adult content. But romance scams are the same. They can originate with, hey, go to this brand new dating site where you meet an AI person who loves you instantly and only wants your joy and happiness.
00:15:54
Speaker
And years later, you find out you've given all of this money to someone who doesn't exist. The longest I've heard is 10 years. Someone was in a relationship for 10 years, giving money.
00:16:09
Speaker
So the next question I want to ask is: if I own a legitimate site and I allow ads, and that's my business model, what is my responsibility, and what are the techniques that companies typically use today to detect these scam ads?
00:16:26
Speaker
If at all. I think there's also a counter-incentive here, which is a little tricky, but let's say that companies don't want scams on there, even if that's not always true.
00:16:37
Speaker
What can they do today to stop it? What do they typically do? What they typically do, and what they're responsible for, is nothing. And that's a pretty shocking answer, but it is the truth.
00:16:50
Speaker
So when you sell space on your website to put ads, you don't know what ads are going to be placed there until the moment they're served. And they're served when I go to whatever news site, or whatever site I'm visiting; the ads are hyper-targeted to me as a person.
00:17:11
Speaker
And so there's no way that the company, the legitimate website, would know what ads are going to render to me, or do anything about it. They put all of their trust in the ad tech and
00:17:25
Speaker
the programmatic companies to do due diligence, follow the laws in the jurisdictions where they operate, and not harm their consumers.
00:17:39
Speaker
Unfortunately, it's not enough. Do those include things like KYC? What are the requirements there for the ad tech companies? It is, yes. So especially in Europe,
00:17:53
Speaker
the DSA has very specific requirements, including KYC, but in other countries that's not a requirement. So it really is up to the ad tech:
00:18:04
Speaker
are they following that in Europe, or are they following that globally? Outside of that, it is running the creatives through a review process to make sure that what is serving is legitimate. You mentioned gaps. One of the gaps there is that ads are hyper-targeted, and cloaking, which is directing cybersecurity professionals to one site and the intended victim to another, happens very frequently.
00:18:38
Speaker
So in these human review processes, you may think everything is absolutely fine and legitimate, but once the ad is out in the wild,
00:18:51
Speaker
the victim is still going to be targeted with whatever malicious content. So let's double-click into cloaking. You mentioned this term; what is cloaking exactly? Let's take Microsoft, or any company, for example.
00:19:09
Speaker
They sit in their corporate environment with a corporate network, and they review the ads. They look at the ads, see that they're fine, and approve them. What is cloaking in this context?
00:19:21
Speaker
Cloaking is specific code on a page that directs traffic to another site in certain circumstances. So I've seen Microsoft's corporate IP blocked from seeing what we call money pages, which are the actual scam pages.
00:19:42
Speaker
It could be things like your browser size or type, your screen resolution, your language, your keyboard settings, your country.
00:19:55
Speaker
It's designed to detect, or really counter-detect, the tactics that fraud investigation professionals use to be able to see harmful content.
00:20:08
Speaker
We use the same tech on our side: the screen resolution looks off, the language isn't right, it's coming from this country. And bad actors use it back on us, to detect whether we're using VPNs to access their content.
00:20:25
Speaker
And that's where you'll get mismatches between your computer's country and your IP geolocation. And then you'll get directed to a harmless site that looks legitimate.
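To make the mechanics concrete, here is a minimal, purely illustrative sketch (not real attacker code, and not any vendor's detection logic) of how a cloaking check like the one described might score a visitor's request fingerprint and decide which page to serve. Every field name and threshold is a hypothetical stand-in.

```python
# Illustrative sketch of cloaking logic: score a visitor's fingerprint and
# route suspected investigators to a harmless decoy page, everyone else to
# the scam ("money") page. All signal names and thresholds are hypothetical.

HEADLESS_RESOLUTIONS = {"800x600", "1024x768"}  # common automation defaults

def route_visitor(signals: dict) -> str:
    """Return 'decoy_page' for likely reviewers/crawlers, 'money_page' otherwise."""
    suspicion = 0
    # VPN or proxy use: the machine's locale disagrees with the IP's country.
    if signals.get("ip_country") != signals.get("locale_country"):
        suspicion += 1
    # Requests from datacenter IP ranges suggest crawlers or review tools.
    if signals.get("datacenter_ip"):
        suspicion += 1
    # Headless browsers often report default screen resolutions.
    if signals.get("screen_resolution") in HEADLESS_RESOLUTIONS:
        suspicion += 1
    # Traffic from a known security vendor's or corporation's network.
    if signals.get("known_security_asn"):
        suspicion += 1
    return "decoy_page" if suspicion >= 2 else "money_page"

# An ordinary home visitor sees the scam; a reviewer on a corporate VPN does not.
victim = {"ip_country": "US", "locale_country": "US", "datacenter_ip": False,
          "screen_resolution": "1920x1080", "known_security_asn": False}
reviewer = {"ip_country": "US", "locale_country": "GB", "datacenter_ip": True,
            "screen_resolution": "1024x768", "known_security_asn": True}
```

The point of the sketch is the asymmetry Sarah describes: the same anomaly signals defenders use for fraud detection are turned around to hide the scam from them.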
00:20:40
Speaker
That's really interesting. So the classic fraud controls that we've been using for years, looking for anomalies associated with malicious activity, whether it's a location, a device, a user change, or whatever: they're using the same tactics to identify the cybersecurity researcher who is validating the ads and present them with legitimate ads, whereas when they target users in other places, they can serve a malicious ad. So let's talk about the targeting for a second.
00:21:13
Speaker
How targeted is it? We talked about psychology, we talked about different types of scams.

Technological solutions to filter harmful ads

00:21:19
Speaker
How good are the scammers at targeting the victims, to present ads that they might react to?
00:21:27
Speaker
They're great at it. The entire internet ecosystem has been built by ads over the past two decades. And all of those tactics that are used to make sure that you see the advertisements, let's just say they're legitimate,
00:21:45
Speaker
you see these legitimate advertisements that are relevant to you, that you're most likely to engage with and click into. Those tactics are there, and the bad actors know them too.
00:21:59
Speaker
And so because we've built this ecosystem where you are the commodity for sale and how to reach you online depends on how good you are at targeting your specific interests, bad actors can use that same technology to find the victims most likely to engage with the type of scam that they are deploying.
00:22:23
Speaker
Wow, that's very scary. So we can look at things like even age. I know that there are some technologies we use in the fraud detection space, based on typing speed and things like that, to estimate the age of the user, and I'm sure demographics. And I'm sure you'll give us some examples from schools in a second.
00:22:41
Speaker
And we'll get to that topic, which is very upsetting. But first, what can we do then? If cloaking is used, if the classic review process doesn't
00:22:53
Speaker
make it, doesn't cut it, what can we do? And I think this is where we dive a little bit into The Media Trust and Proxyware and what your role is. So please share with us some of the tactics that can be used to protect us, the consumers, from these malicious ads.
00:23:10
Speaker
The biggest thing is hiring third-party tech that can defeat cloaking, because they are visiting your web property as the consumer would. And that really goes to the responsibility of the ad tech and the people that are serving the ad content.
00:23:33
Speaker
There is a way for you to defeat the cloaking before the ad is served. But for the legitimate business with a web property, there's also a way for you to filter out harmful content as the site loads for the end user.
00:23:51
Speaker
And there are a number of technology companies out there that do that. I'm biased; I think The Media Trust is the best at it, but there are others that do the same thing.
00:24:02
Speaker
And that is how you protect consumers first and foremost when they're visiting your web properties: block the things that you know to be harmful from your side, rather than just putting your trust in ad tech to detect what they aren't detecting.
00:24:19
Speaker
So can you give an example of how that's done? So if you're a legitimate business and you own a news website, you can hire a company like The Media Trust
00:24:34
Speaker
to deploy a single line of code on your page, which reviews all ads in real time and filters out the harmful ones, replacing them with pre-selected ads that are good.
00:24:49
Speaker
And that preserves revenue on your site and allows you to protect consumers, but it also allows people like me, people like The Media Trust, to better understand the attack vectors that are out there, because we're seeing in real time, from multiple geolocations, what ads are being served to real consumers at the time they're served.
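The on-page filtering just described can be sketched roughly as follows. This is not The Media Trust's actual product code; the blocklist check, the creative fields, and the house-ad replacement are all assumptions made for illustration.

```python
# Rough sketch of real-time ad filtering as described: as the page loads,
# each ad creative is checked, and flagged ones are swapped for pre-approved
# "house" ads so the slot (and its revenue) is preserved. Fields hypothetical.

from itertools import cycle

def filter_creatives(creatives, blocked_domains, house_ads):
    """Return (creatives to render, flagged creatives for threat intel)."""
    replacements = cycle(house_ads)  # reuse house ads if many are flagged
    rendered, flagged = [], []
    for ad in creatives:
        if ad["landing_domain"] in blocked_domains:
            flagged.append(ad)                # log for later analysis
            rendered.append(next(replacements))
        else:
            rendered.append(ad)
    return rendered, flagged

ads = [{"landing_domain": "nice-shoes.example"},
       {"landing_domain": "fake-printer-driver.example"}]
house = [{"landing_domain": "publisher-house-ad.example"}]
shown, caught = filter_creatives(ads, {"fake-printer-driver.example"}, house)
```

The key design point from the conversation is that filtering happens at render time, on the publisher's side, so it works even when the upstream review was defeated by cloaking.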
00:25:17
Speaker
Okay, so I think maybe we can break that down a little. You mentioned entering one line of code, and multiple geolocations. Can you double-click on that? Yes. We have proxy locations in 121 countries, and we can visit the internet through 100,000 unique synthetic digital personas.
00:25:38
Speaker
So if the malicious ad is targeting an 85-year-old who goes shopping at the online quilting store and loves to make recipes with her grandkids, targeting can be that specific, but we also can be that specific in the personas that we create
00:26:02
Speaker
to visit web traffic. And we can use that same persona across locations in 121 countries to understand differences in how ads are being rendered in the US versus the UK, for younger audiences versus older ones, based on interests, which really hones in on the elements that are cloaked and the victims that are intended.
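The scanning approach outlined here, the same synthetic persona browsing from many locations and the resulting ad sets compared, can be sketched like this. The persona, the country list, and the observed data are hypothetical; the idea is only that creatives rendering in some geos but not others are candidates for cloaked, targeted campaigns.

```python
# Sketch of persona-based scanning: browse as one synthetic persona from
# several countries and diff the ad sets. Creatives that render only in some
# geos are candidates for cloaked/targeted campaigns. All data is made up.

def geo_cloaking_candidates(observations: dict) -> set:
    """observations maps country code -> set of creative IDs seen there.
    Returns creatives that did NOT render in every country (geo-divergent)."""
    everywhere = set.intersection(*observations.values())
    anywhere = set.union(*observations.values())
    return anywhere - everywhere

# Hypothetical scan results for one "85-year-old quilter" persona:
seen = {
    "US": {"quilting-store", "printer-driver-malware"},
    "GB": {"quilting-store"},
    "DE": {"quilting-store"},
}
```

A real system would also diff across personas (young vs. old, different interest profiles) from the same location, which is the other axis Sarah mentions.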
00:26:32
Speaker
So it's kind of like a honeypot approach. Exactly. You kind of lure them in, and then, based on what you see, you can take action and remediate some of that. You mentioned companies like Microsoft and Amazon, or those who provide the ads.
00:26:48
Speaker
Is there a place, from the consumer perspective, to provide protection? Let's say I don't want my employees to visit malicious websites while they're at work, or my kids at school. I think this is a big, big topic.
00:27:02
Speaker
Our kids go to schools, they have school-issued devices, and I hear stories again and again about how they access content that they really shouldn't on their school devices. Yeah, so that's an interesting problem. And especially in schools, federal funding in the US forces schools to employ tactics that protect kids online.
00:27:25
Speaker
The problem again is that they're focused on networks and systems, not the kids themselves. And you see that in these stories of kids accessing content at school. There isn't a great way to protect kids without knowing what it is that kids are seeing online.
00:27:43
Speaker
So that is the catch-22 here. You have to know what is targeting children to be able to take it offline. And that's what Proxyware does, similar to The Media Trust viewing websites as synthetic personas.
00:28:04
Speaker
Proxyware does the same thing. The difference is it's hyper-targeted to a specific location. So we have a device, we deploy it in a school, and from that school, we run web traffic the way that a child would.
00:28:19
Speaker
And then we can see what types of content are targeting a child while they're at school, whether on a school-issued device or their personal cell phone or whatever the case may be.
00:28:31
Speaker
And then we can feed that data in a lot of different directions. One of them is into do-not-serve lists. Another is into the digital ecosystem, to actually remove that targeted harm, which we do about 90% of the time.
00:28:49
Speaker
And then in states where there are laws that require ID verification before accessing adult content, we can see the sites that are not complying with that and work through the state to help them enforce those kinds of laws at scale.

Website complicity in scams

00:29:05
Speaker
So there are a lot of different uses for a technology like this. It applies to the corporate world as well. You want to protect your employees while they're on your work-issued computers, surfing the internet in their free time.
00:29:22
Speaker
And similarly, corporate security can only do so much to protect against the ad threat. So it's important to understand what is targeting your employees.
00:29:33
Speaker
We work with a large national realtor, and where we see scams deployed to them is realtor.com, Zillow.com. It's the very specific sites that are targeted for that working group.
00:29:51
Speaker
And we would see the same if we were working in some other employment arena as well. State governments are a great example. We have devices that have been in state government buildings where we've seen hyper-targeted malware attacks, backdoor attacks,
00:30:10
Speaker
to try and gain access to the intelligence of the state government. We don't see that at the realtor, right? So this is that ecosystem of victim, scam, and geography that makes the internet what it is today. And it goes back to what you said earlier with the hyper-targeting, right? It's not just hyper-targeting on a large scale and a demographic,
00:30:32
Speaker
like the realtor example: it's almost classic to put scam ads on that site for that audience, because they will go there to do their job, and then when they go there, it's best to serve those scams. The likelihood of them clicking links is much bigger, which is pretty scary. I almost want to ask: how bad is it?
00:30:53
Speaker
And you said 20%, so I think that gives you the number. But I'm wondering if you can share a few examples of things that you've seen and what you were able to do to remediate them, in either schools or other ecosystems.

Targeting children through indirect means

00:31:10
Speaker
So senior communities are a big one, where realtors are attacked on realtor sites. Seniors are attacked on obituary pages, news sites, sports sites.
00:31:23
Speaker
The obituary pages are always one of those where you think: really? This older person is looking at this page for a reason, in the midst of likely grief, and you're attacking them with malicious scams.
00:31:42
Speaker
We see it as well on recipe pages. I would love to say there's a good recipe page out there; I have not seen one yet. And with older adults, it's very large print buttons that deceive them into downloading a printer driver, which is actually malware.
00:32:05
Speaker
And because that print button's font is so big compared to the website's actual print function for the document, seniors are more likely to click on it than on the real page controls.
00:32:20
Speaker
So there are a lot of different vectors that we see. In schools, we see kids targeted with drugs such as fentanyl and opiates. We see vaping and e-cigarette content, alcohol, sexual content.
00:32:36
Speaker
All of this is targeted to kids without targeting. So what I mean by that is the laws state you can't use under 18 as the targeting parameter.
00:32:49
Speaker
But you can say "people who visit Nickelodeon," "people who watch YouTube Kids." There are very easy ways to know a kid online, and you can use those interests to target, as long as you don't use an age.
00:33:08
Speaker
So we do see bad actors targeting kids based on these interest parameters, while they hold to the statement that they just didn't know it was a child, or didn't intentionally select children. And it's very sad that our laws are designed in a way that they can do that.
00:33:31
Speaker
Yeah. And you also mentioned like a school location, like the perimeter of a school, so like an IP address. Yes. That's cool. Wow.

Need for accountability and regulatory changes

00:33:38
Speaker
You know, as we were talking, I'm thinking about websites that have these stories, almost celebrity news or something like that, where you have to go through 10 pages to actually read the article.
00:33:52
Speaker
And then you have ads all over that page. And I feel like it's definitely the business model for that news site, or whatever it is, to have as many ads displayed to me as possible, because that's how they make money.
00:34:05
Speaker
But they're somewhat a participant in this scamming activity. Yes. And I can tell you, there are some websites that we see as top offenders for serving harmful content.
00:34:19
Speaker
And anytime that we see harmful content, we notify them. So at this point, they are complicit in the scams that are attacking their consumers, because they are well aware that this is happening and they continue to allow it.
00:34:34
Speaker
And the less reputable the site, the less chance they care. Fake celebrity endorsements are a big way to draw in clicks, also called clickbait.
00:34:46
Speaker
And they are prolific with ads on the page. They want you to be clicking on those; they generate their revenue based on it.
00:34:58
Speaker
So the more ads they can throw at you, the higher the revenue for that page from your site visit, especially on sites like recipes and blogs. They don't get paid for anything they produce except for the ads on the page, which require your attention.
00:35:16
Speaker
So the longer you're there, and the more times the ads recycle, the higher your value is to that particular website. And if you've ever been on a website where you scroll to the bottom and the whole page reloads, it's likely you've encountered a scam ad on that page.
00:35:40
Speaker
You said you report to them, but I think we need to report to someone else who can actually take action against these sites that are not doing their part. And I'm sure there are companies that enable ads, though it's not their main business, and they want to be trusted and responsible, so they're hiring you or other companies in the space to do the due diligence on the synthetic IDs.
00:36:05
Speaker
But I think, generally, I'm asking for your opinion here: how can we as a community, given the different legislative, community-driven, or nonprofit actions and organizations, hold these offenders accountable? And I'm not talking about the scammers, that's one thing, but those enablers of the scams, because all they want is the profit, the clickbait and the ads.
00:36:37
Speaker
It's a really interesting question. The Digital Services Act in the EU was designed toward that end. And everyone in the industry sort of quietly knew that the way the DSA was written wouldn't apply to programmatic ads.
00:36:54
Speaker
It would apply more so to search engines. And nobody said anything until it was too late. So now the DSA is trying to work on a code of conduct that would allow more enforcement against these sorts of complicit websites that are out there.
00:37:17
Speaker
But even then, it's a code of conduct, not a legal requirement. So it becomes more difficult to actually enforce. And it's limited to a specific geography.
00:37:29
Speaker
The United States is far behind the EU and Australia when it comes to these types of issues. In Australia, for example, you typically have a 20-minute takedown window.
00:37:44
Speaker
So from the time you detect that something is a scam to the time it is removed from the internet, you have 20 minutes. There is no timeframe like that in the U.S.
00:37:55
Speaker
And we see, at times, when we notify not just the website itself that there's an ad that's wrong, but also the ad tech provider and the actual seller, they don't take it down. There is no requirement in the U.S. for them to actually take it down.
00:38:14
Speaker
And why would they? It costs time and resources to have people sitting there taking these kinds of ads down. And with a lack of accountability, there's no reason for them to divert company funds to that specific area of the business.
00:38:33
Speaker
So if we want change, we need accountability. So what's happening in the U.S.? I know there's the Kids Online Safety Act, which is related to kids, currently making its way slowly through the complex legislative process in our country. But does it even include anything to address what you described to us from a threat perspective?
00:39:02
Speaker
It addresses a number of threats for kids online, mostly around social media. I don't think there's a big enough grasp of the dangers kids are exposed to through online advertising, and there's a big focus on social media. Now, I'm not saying that's wrong.
00:39:26
Speaker
I completely agree with limiting social media access for younger kids, because psychology has shown that it changes the dopamine receptors in your brain. It changes your attention span. It has negative impacts on anxiety and depression. So there are a number of reasons why the focus on social media is appropriate.
00:39:55
Speaker
There's not enough national focus on the impact of adult content on children's brains. The average age at which kids now first see adult content online, such as pornography, is 11.
00:40:11
Speaker
And some studies have even shown as early as 9. And that also carries risks to mental health: anxiety, depression, body image issues, healthy relationships and boundaries.
00:40:27
Speaker
There's even a link to feeling okay with behaviors that are exploitative or related to trafficking.
00:40:39
Speaker
When I was at Reddit, I saw a story of a girl who was 18 with a much older boyfriend, and she went to the community to ask for help.
00:40:50
Speaker
And the answer was she was being trafficked, but she didn't know, because we've normalized this type of content online. So there are a number of states that have adult ID verification laws, but not all of them, and not at a national level.
00:41:07
Speaker
And while KOSA will be a big step toward this, there's also a need to make sure the language is appropriate and that there are enforcement mechanisms in place to make sure it's happening.
00:41:21
Speaker
At one of the schools we're in, in Virginia, we ran a test, and 70% of the top adult websites were not compliant with state law.
00:41:33
Speaker
And Virginia has ID verification laws. So writing laws on paper and how that actually translates into enforcement and action are not always the same thing.

Global awareness and legislative hopes

00:41:46
Speaker
And so while I hope KOSA is successful and goes through, what I'd also love to see is: where is the accountability, who is enforcing it, and how is that being done? Yeah, I think that's a huge gap, and a critical one to resolve.
00:42:03
Speaker
There are solutions out there, and I think we need wider adoption. But you mentioned a few things. You mentioned technical solutions to be able to spot the ads. And we also mentioned accountability, because there are KYC requirements out there, but the enforcement is definitely lacking. There's a gap there, and we need more of those regulations and more KYC expectations. And then the ad tech companies and those clickbait sites need to take more responsibility. I think we need to think about ways to call them out more as well.
00:42:35
Speaker
But with everything that you talked about and you being in this industry for so long, where do you see hope? What are you hopeful about in terms of the strides that we've been taking to protect consumers better?
00:42:49
Speaker
I think there's a global awakening to the industry at large and the things that are happening. For the first 10 years that I was in this space, I saw online harms and there were no laws, no regulations, no accountability that people had to take.
00:43:08
Speaker
GDPR was the first step toward recognizing there's something to this online ecosystem. And since then, I think a lot of steps have been taken. So I'm very hopeful about the conversations being had in the community.
00:43:23
Speaker
I'm hopeful that people are now sharing data in a way they never would have before, through various alliances. The Global Anti-Scam Alliance is one of those that is pretty well known.
00:43:40
Speaker
Sharing data on harms and scams would never have happened before because of competitive reasons. So I think we're taking the right steps as a community, but there is a technological debt that has to be paid, and we have to be willing to invest in that.
00:43:58
Speaker
And with the leaps and bounds being made with AI right now, my fear is: are we going to keep up with that? But my hope is that there are enough people who actually understand what the problem is that we can push forward legislation and accountability in a meaningful way.
00:44:20
Speaker
Yeah. And you're reminding me of the video of the interview with Sam Altman where he's talking about, oh my God, AI is going to be used for fraud. Yes, we were seeing that already. I'm thankful that it's out there, but we as an industry definitely know that, so we definitely need to invest more to keep up.
00:44:40
Speaker
Sarah, thank you so much. It's been a very enlightening conversation for me, and I'm sure for the listeners as well. Thank you for taking the time, and I'm really looking forward to continuing the conversation down the line. In a few months, we'll revisit where we are, and hopefully we'll see some improvements from a legislation and enforcement perspective.
00:45:02
Speaker
I would love that. Thanks so much for having me.