
In this episode, we talk about misinformation, disinformation, and troll farms in the 21st century with Olga Belogolova and Regina Morales.

Olga Belogolova is the Director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS). She is also a professor at the Alperovitch Institute for Cybersecurity Studies at SAIS, where she teaches a course on disinformation and influence in the digital age. At Facebook/Meta, she led policy for countering influence operations, leading execution and development of policies on coordinated inauthentic behaviour, state media capture, and hack-and-leaks within the Trust and Safety team. Prior to that, she led threat intelligence work on Russia and Eastern Europe at Facebook, identifying, tracking, and disrupting coordinated IO campaigns, and in particular, the Internet Research Agency investigations between 2017-2019. Olga previously worked as a journalist and her work has appeared in The Atlantic, National Journal, Inside Defense, and The Globe and Mail, among others. She is a fellow with the Truman National Security Project and serves on the review board for CYBERWARCON.

Regina Morales is the principal of Telescope Research, where she conducts investigations on behalf of law firms, multinational corporations, financial institutions, and not-for-profit organisations. She has subject matter expertise in Latin American politics, corruption issues, extremism, and disinformation. In particular Regina specialises in investigating disinformation campaigns waged on social media platforms, forums, and certain messaging apps. These campaigns include online harassment, corporate disinformation relating to securities, conspiracy theories, and politically or ideologically driven campaigns. She has seen, often in real time, how the theoretical components of disinformation and propaganda are used in practice. Prior to founding Telescope Research, Regina worked for two top-tier, Chambers and Partners-ranked global investigative firms where she conducted and managed complex, multi-jurisdictional investigations on behalf of white shoe law firms and multinational companies.

Patreon: https://www.patreon.com/EncyclopediaGeopolitica

Transcript

Introduction to the Podcast Series

00:00:02
Speaker
Welcome to How to Get on a Watchlist, the new podcast series from Encyclopedia Geopolitica. In each episode, we'll sit down with leading experts to discuss dangerous activities. From assassinations and airliner shootdowns through to kidnappings and coups, we'll examine each of these threats through the lenses of both the dangerous actors who seek to conduct these operations, and the agencies around the world seeking to stop them. In the interest of operational security, certain tactical details will be omitted from these discussions.
00:00:34
Speaker
However, the cases and threats which we discuss here are very real.

Meet the Hosts: Louis H. Prisant and Colin Reed

00:01:05
Speaker
I'm Louis H. Prisant, the founder and co-editor of Encyclopedia Geopolitica. I'm a researcher in the field of intelligence and espionage with a PhD in intelligence studies from Loughborough University. I'm an adjunct professor in intelligence at Sciences Po Paris, and in my day job I provide geopolitical analysis and security-focused intelligence to private sector corporations.
00:01:25
Speaker
My name is Colin Reed. I am a former US intelligence professional now working in the private sector to bring geopolitical insights and risk analysis to business leaders.

Understanding Troll Farms: Episode Focus

00:01:35
Speaker
In this episode, we'll be discussing how to run a troll farm. Now, the field of fake news is a big, confused one, with many actors and commentators struggling to understand and identify the various kinds of influence operations, smear campaigns, and outright trolling that make up the modern information ecosystem. So to sort order from this apparent chaos, we've invited two guests to help us define the state of the field, contextualize the different kinds of threat present in the landscape, and understand how various actors are seeking to combat the problem.

Guest Introduction: Olga Belogolova's Expertise

00:02:04
Speaker
Our first guest is Olga Belogolova. Olga is the director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies. She's also a professor at the Alperovitch Institute for Cybersecurity Studies at SAIS, where she teaches a course on disinformation and influence in the digital age. At Facebook/Meta, she led policy for countering influence operations, leading execution and development of policies on coordinated inauthentic behavior, state media capture, and hack-and-leaks within the Trust and Safety team.
00:02:32
Speaker
Prior to that, she led threat intelligence work on Russia and Eastern Europe at Facebook, identifying, tracking, and disrupting coordinated IO campaigns, and in particular, the Internet Research Agency investigations between 2017 and 2019. Olga previously worked as a journalist, and her work has appeared in The Atlantic, National Journal, Inside Defense, and The Globe and Mail, among others. She's a fellow with the Truman National Security Project and serves on the review board for CYBERWARCON.

Guest Introduction: Regina Morales' Investigative Work

00:02:59
Speaker
Regina Morales is the principal of Telescope Research, where she conducts investigations on behalf of law firms, multinational corporations, financial institutions, and not-for-profit organizations. She has subject matter expertise in Latin American politics, corruption issues, extremism, and disinformation. In particular, Regina specializes in investigating disinformation campaigns waged on social media platforms, forums, and certain messaging apps.
00:03:24
Speaker
These campaigns include online harassment, corporate disinformation relating to securities, conspiracy theories, and politically or ideologically driven campaigns. She has seen, often in real time, how the theoretical components of disinformation and propaganda are used in practice. Prior to founding Telescope Research, Regina worked for two top-tier, Chambers and Partners-ranked global investigative firms, where she conducted and managed complex, multi-jurisdictional investigations on behalf of white shoe law firms and multinational companies.
00:03:54
Speaker
Olga and Regina, thank you very much for joining us. Thank you so much for having me. Thanks so much, both of you, for having me as well. So we always start with this question, and we'll start with Olga and then, Regina, you

Olga's Journey: Journalism to Disinformation Expertise

00:04:05
Speaker
as well. How did you get into your line of work? So I'm originally from Ukraine. I was born in Kyiv, technically born in the Soviet Union. So I guess you can say I was born into this field in some way.
00:04:17
Speaker
It all makes sense looking backwards. As was mentioned earlier, I did work as a journalist and I had worked in the field of information, and then I went on to study warfare. Those two things really came together as information warfare, one of the many terms that we might talk about later. I finished graduate school, absolutely expected that I was going to go work in the national security field in government.
00:04:44
Speaker
And then, you know, the 2016 election happened. And all of a sudden, you know, I was really interested in what was happening and talking to people about it.
00:04:52
Speaker
and figuring out what they were doing about it, whether they were people in defense roles or in social media companies or at think tanks doing research into the space. And the field was relatively limited at the time. And so I just talked to anyone who would talk to me. And eventually I met someone who worked for the Threat Intelligence Team at Meta or the company formerly known as Facebook. And the rest is history. Regina, how about you?

Regina's Path: From Music to Corruption Investigation

00:05:18
Speaker
Yeah, so I kind of went about it in a very tangential way. I actually studied music in undergrad and international studies, so definitely not a normal path for a music major, for a performance major. But I did study corruption issues in graduate school, and I had an interest in the space where private sector and government meet.
00:05:44
Speaker
I grew up between Illinois and Bolivia, two places that have been rife with corruption in different ways. As many listeners may know, Illinois has a history of corruption in politics.
00:05:58
Speaker
And multiple governors have faced charges of corruption and other forms of fraud. And in Bolivia, corruption is more of a daily life occurrence. It's very difficult to get things done without paying some sort of bribe or facilitation payment. So that's sort of where I wanted to go in terms of my studies. Now, while I was in graduate school, I interned at an anti-money laundering think tank.
00:06:27
Speaker
And the executive director noted that I had a knack for investigations, and told me about the investigations industry and political consulting
00:06:38
Speaker
sort of more broadly. So I started looking for a job in that industry in the private sector instead of government. I knew I wanted to work in the private sector. So I started my career at a political risk consultancy that also conducted investigations, primarily in emerging markets. This was a shop started by former intelligence officials, sorry, intelligence officers, and really focused a lot on human source intelligence.
00:07:07
Speaker
I worked on many interesting cases, including cases where different sides were giving conflicting narratives in press and on social media. And this was particularly in West Africa and Latin America, in the 2014-2015 timeline. So this is when I started diving into this realm of political disinformation.
00:07:31
Speaker
I then went on to work at a global investigations firm that's headed by a former federal prosecutor, so a different type of investigator coming from a different angle. Here, I worked primarily on long-term litigation cases, finding facts and evidence to bolster the case, and the disinformation research that I did
00:07:53
Speaker
at this company was really about how it affected the private sector. So again, really focused on the private sector and the ways that misinformation has affected these companies, their bottom line, things like this. So one of the things with corruption and this misinformation, in terms of the intent for disinformation,
00:08:19
Speaker
these campaigns are usually driven by profit or power. So there's a very similar dovetail between corruption and disinformation in terms of what motivates people to do these acts. But that's sort of my story. Not a very linear one, but yeah.
00:08:42
Speaker
Well, two very fascinating backgrounds there.

Clarifying Terminology: Misinformation vs. Disinformation

00:08:46
Speaker
So, Olga, you teach extensively on this topic and like any good professor, you've been very adamant when we've been planning this episode that everyone in the conversation needs to get the terminology correct.
00:08:55
Speaker
something I can certainly appreciate. So help us set the stage here. When we talk about malinformation, misinformation, disinformation, you know, I hear these terms being used interchangeably all the time. And I can see that that's not the case. So when we talk about influence operations and this type of information warfare, what do we mean?
00:09:15
Speaker
So this is one of my favorite topics to talk about. In fact, sometimes I get made fun of by my colleagues because I go on these rants about when people use this terminology incorrectly, when they say things like mis-disinformation, because that doesn't really mean anything.
00:09:32
Speaker
And the reason I care about that is because we can't possibly get our responses to these problems right if we don't actually fight the same things, or we don't identify the problems that we are specifically trying to address. And so there are a couple of different frameworks that I think are useful
00:09:49
Speaker
to understanding the terminology in this space. There's much, much more detail that we can get into, but I'll try to keep it short. One thing to get out of the way really quickly is fake news is a fundamentally useless term, right? It's something that now really just means I don't like that, and now I'm going to call it fake news. It's more of a joke than anything else at this point in time. But the terms that are most often used, including in our conversation so far, are misinformation and disinformation.
00:10:18
Speaker
And the distinction between them is really rooted in the words, right? Misinformed, right? Your friend, your colleague, your uncle might be someone who just doesn't know any better and truly believes something to be true. And so they are merely misinformed, which is distinct from disinformation because it is the intentional sharing of false information.
00:10:39
Speaker
And those things are important. Intent is not always easy to divine, but you have to understand that those two words mean different things and you can't use them interchangeably or in some cases, meld them into one word together because intent is the distinction. And when you're thinking about what you do about these problems, right, you know, Regina mentioned conspiracy theorists.
00:11:00
Speaker
They are often people who are misinformed. They have gone down certain rabbit holes and they don't know something any better. There are certainly people that do know what they're doing and they're trying to sort of spread a particular conspiracy theory. So you want to sort of treat differently the person that is sort of the victim versus the perpetrator of sharing false information. But both misinformation and disinformation
00:11:23
Speaker
are content-based terms. What do I mean by that? Content is you're making a decision whether something is true or false based on what you see in front of you. It could be words. It could be an image of some kind. You're trying to assess whether it's true or it's false.
00:11:39
Speaker
That's a binary. But ultimately, when we are talking about this field, and the way I got into it in the first place, we were largely talking about troll farms, the name of this episode. And troll farms are not necessarily focused on disseminating false content; they are campaign-based. And so when you use campaign-based terminology, you can use terms like influence operations, information operations, or the historical term for
00:12:06
Speaker
what the Soviet Union did during the Cold War, which is active measures. Those are campaign-based terms. You're thinking holistically about a lot of things that may or may not use false information in their campaigns.
00:12:19
Speaker
And so that distinction can be really important in particular, because if we think about the most famous example, which we'll be talking about later, the Internet Research Agency, most of what they did was actually disseminate content from legitimate news outlets, websites, and all of that, and not necessarily false content. And that's a common misconception. So I prefer to use the term influence operations. A few other quick distinctions to make:
00:12:43
Speaker
financially motivated operations versus politically motivated ones. So there are certainly campaigns out there that are trying to deceive someone for a political gain, trying to manipulate the information environment associated with either a government or a political party.
00:12:59
Speaker
But a lot of things that look to the casual observer like an influence campaign or troll farm are really financially motivated campaigns: people trying to make money off of sharing false information, or getting someone to just click on a flashy page and get some clicks and likes.
00:13:18
Speaker
And finally, the last distinction is covert versus overt influence. We can talk about troll farms. We can also talk about state-controlled media outlets, where people are not necessarily creating false personas. There are journalists who are working for a government-sponsored media organization, and they are doing work on behalf of the government. And that can be things like propaganda, which in and of itself is a neutral term.
00:13:46
Speaker
It just means the dissemination of information on behalf of a government or some other organization that is favorable to it, and that can be done by anybody. But the term over time has gained a negative connotation.

Actors in Disinformation: Motivations and Methods

00:14:01
Speaker
So, Regina, let's turn to you now and talk about the various actors in this space. What are the differences between the nation state actors versus maybe extremist groups or individuals?
00:14:12
Speaker
Yeah, so when most people think of the culprits in this space pushing disinformation narratives, they think of nation states. But other groups and individual actors participate in the space as well, as Olga had mentioned.
00:14:26
Speaker
They can push narratives to meet their goals, whether that's for power or profit. Sometimes the narrative that they're pushing begins within the group. Many times, it's actually that they latch onto an existing narrative in the online ecosystem and then amplify it in order to achieve their goals.
00:14:50
Speaker
And in terms of extremists, particularly in the US, this can be for profit or for power. So there are instances where it's both: they might be trying to generate clicks and likes, but sometimes they're also pushing products or donations, and they're pushing a message as well.
00:15:15
Speaker
one of the other examples that I really like to give and kind of how I got started in the space was through investigating short sellers. So these are people that short a stock of a publicly traded company and then seek to drive down the stock by putting out information
00:15:39
Speaker
particularly in this case, we're looking at false information that would help drive down the stocks. So that is also a for-profit enterprise in terms of a campaign that you would see online. So it's not only nation states, but it can be individuals, it can be groups, political, ideological groups,
00:16:02
Speaker
It can be political parties, it can be other types of groups, it can be religious groups, any kind of community online, really. So let's take this back. Olga, you mentioned this started for you after the US presidential election in 2016 and the Russian effort that took place there. Talk to us about the situation that occurred then and give us a bit of a scene setter for that.

Case Study: Russian Influence in the 2016 US Election

00:16:27
Speaker
Yeah, of course. So importantly, you could argue that there were probably two or even three distinct types of Russian influence campaigns in the US in the lead up to the 2016 election. They often get blurred together in public discourse. And again, I like to make the distinctions between what we're talking about, to be clear.
00:16:45
Speaker
So the first one is Russian military intelligence, or the GRU, and the hack-and-leak campaign to steal private information and then use fake accounts, but also more direct methods of disseminating that information to journalists and to others, getting that information published, and creating fake websites and other things like that to disseminate that information as well.
00:17:09
Speaker
The overt side, which I mentioned earlier, is state-controlled media outlets like Sputnik and RT and other subsidiary outlets that people are not necessarily aware are controlled by that same government, sharing information about what was happening in the United States, with catchy headlines that get people to read something.
00:17:31
Speaker
And the most infamous effort is the Internet Research Agency campaign, and that's the one that I have the most familiarity with because it is what I spent my time doing in the early days on the intelligence team at Facebook. So what is the Internet Research Agency? Importantly, it's actually not a government effort, right? It's a troll farm in St. Petersburg, Russia that had been in operation since at least 2013.
00:17:54
Speaker
And it was initially targeting domestic audiences in Russia. And importantly, as I said, it wasn't a government effort. It was a rich oligarch, Yevgeny Prigozhin, who is no longer with us, according to public reporting, who had friends in the government but did not work for the government, and who had contracts with the government, military catering contracts and other things, and with the Wagner Group.
00:18:19
Speaker
So another common misconception is that these efforts were designed to benefit a particular candidate or party. But if you actually look at the substance of the campaigns themselves, as I have as an investigator, you'll see that they covered both ends of the political spectrum.
00:18:35
Speaker
and appear to have been designed more to drive wedges further into society than to promote particular agendas. And I think that's important to point out because, again, a lot of the conversation around this centers on electoral outcomes or a particular agenda. But really, if you look at the substance of these campaigns, you'll see that they were more focused on posting memes
00:18:58
Speaker
that were divisive politically, and again, on both ends of the political spectrum. They had Blue Lives Matter pages, Black Lives Matter pages. They had pages that were supportive of
00:19:09
Speaker
guns and not supportive of guns. And so all kinds of things on both ends of the political spectrum. As I alluded to earlier, they actually did share quite a bit of content that was not necessarily false, right? Some of the most shared links on Twitter were NewYorkTimes.com. And so this common misconception that they were creating lots of fake domains
00:19:31
Speaker
and creating fake websites to share false content is actually not true. They were actually using existing information that was out there. Again, not that different from what we saw during the Cold War with the Soviet active measures where they really used a lot of real things that were happening in society. The problems that they leveraged weren't problems that they invented. We have civil rights issues in the United States of America. We have a lot of other divisive
00:19:58
Speaker
political issues that they just took advantage of; they did not invent them. So the content piece is important because it's one of the reasons I warn people off of using content-driven investigative techniques: it can prove harmful, and you end up thinking that something is actually a campaign when it's not, or it's legitimate people that are just sharing accurate information.
00:20:25
Speaker
I think the reason we're talking about all this is because ultimately this campaign changed this field, or for better or for worse, it kind of created it. I sometimes joke about the concept of the disinformation industrial complex. People have been lying for centuries, but now there's an entire industry surrounding this.
00:20:45
Speaker
My course, when I first started teaching at Georgetown, was called Lies, Damn Lies, and Disinformation. It was a bit of a joke, but it's also a play on the terminology, and on the fact that this isn't a new problem, right? People have been lying for a long time. It's just that now the term disinformation is really sexy, so everybody's talking about it.
00:21:05
Speaker
So Regina, can you tell us about some of the other government actors that are active in this space that our listeners might not be

Global Influence Campaigns: Beyond Russia

00:21:12
Speaker
as familiar with? You know, this idea of the kind of 2016 elections and the French elections the following year in 2017, you know, they're big cases that got talked about a lot. But what are some of the other cases out there that people maybe don't know as well?
00:21:27
Speaker
Yeah, so many nation states participate in these types of campaigns. It's not exclusive to the big bad actors like Russia. Venezuela, obviously, is known to participate, especially in the domestic sense. Iran has a history of participating in influence campaigns throughout the Middle East and South Asia.
00:21:51
Speaker
During COVID-19, China appears to have pushed narratives related to COVID-19, the Hong Kong demonstrations, and the BLM protests in the US. They were pushing pro-China narratives. I don't think anybody was fully able to attribute it to the state, but it's possible, and maybe likely, since they were pro-China narratives; but again, inconclusive.
00:22:21
Speaker
There's also been reporting about pro-Western, pro-U.S. narratives in the Middle East and parts of Asia from inauthentic actors on social media, particularly Twitter. So this isn't just Russia. Many, many nation states participate in this, and they often participate through proxies. So it is difficult to attribute directly to them, especially from a
00:22:49
Speaker
private sector and non-government perspective, where you don't have access to subpoena power and things like this, or other tools at your disposal. But I definitely want to emphasize that Russia is not the only one doing this.
00:23:06
Speaker
So we talked a little bit about the nation states involved in this.

Nation State Methods: Overt and Covert Strategies

00:23:10
Speaker
Olga, I wonder now if we can talk about the methods available to those nation states to run these influence campaigns. We've talked a lot about the Internet Research Agency in the context of the troll farm, right? But how effective was the IRA's work? And is that sort of the only methodology the nation states deploy? I'll answer the last question first, which is definitely not. So we talked about overt versus covert influence.
00:23:36
Speaker
They have a lot of different tools at their disposal to influence. You can actually go back. One of the things I share with my students in class is testimony from Bob Gates during the 1980s in front of, I believe, the House Foreign Relations Committee about what they had analyzed the Soviet Union to be doing. They break things up into covert and overt influence.
00:23:58
Speaker
And some of the different types of active measures that the Soviet Union was engaged in included things like reaching out to individuals and having agents of influence. There are a lot of different ways in which nation states can influence one another and influence populations in their respective countries, and they are not limited to running a troll farm.
00:24:19
Speaker
They include things like overt state-controlled media outlets and other things that are connected directly to the state in the public domain, and of course,
00:24:31
Speaker
all kinds of relationships and diplomacy efforts. Regina mentioned China, so you can think about the Confucius Institutes and things like that, where there are diplomacy efforts that many different nation states undertake in order to try to make people more favorable to their particular country. So we can talk about the concept of whether influence is always malign, right? Everyone's trying to tell their story and trying to tell it effectively. And where does it cross the line into being malign?
00:25:01
Speaker
And that's where the work that I have done into things like troll farms is where we can draw a particular line in the sand and say: when people are pretending to be someone that they're not, that can be a red line, right? Building false personas and pretending to be part of some sort of activist group that they're not in.
00:25:21
Speaker
So in practice, you're talking about governments, but a troll farm is not all too different from a marketing agency or an advocacy group's headquarters in and of itself, which is a complicating factor when you're conducting investigations into them, because if you're looking for one, you will likely find the other.
00:25:38
Speaker
But really, what is it? It's a group of people, often young people with social media skills, marketing degrees, English degrees, depending on what country they're in, who often work in the same building, often in shifts. They're creating false personas, pretending to be people that they are not,
00:25:57
Speaker
and creating content, joining communities, often existing communities, amplifying often existing content and issues like we discussed, and reaching out to journalists and other influential figures to try to drive their specific agendas.
00:26:12
Speaker
To the question about how effective they are, that's a good question because not a lot of people have actually been able to definitively prove that. And it's actually quite a difficult thing to do. Put it this way, how do you know that some meme of Jesus wrestling Hillary Clinton that someone saw on their newsfeed on a social media platform for about five seconds had an impact on their political views and decisions?
00:26:38
Speaker
You don't. And there's a number of different proxies that people try to use to understand whether these things are impactful, but they're imperfect and they're all flawed in their own ways. So one proxy is numbers and reach, right? A lot of the conversation tends to center around, well, how many accounts did they create?
00:26:58
Speaker
and how many people viewed them and for how long and how many friends did they have? Those are things that can help us because we are in a data-driven world to try to make sense of it. But they're flawed because ultimately, if you are just focusing on numbers, there's a lot of campaigns that my team back at Meta saw that were really big and then no one saw them. They created a whole bunch of accounts and got no traction.
00:27:25
Speaker
And then there's some campaigns that had two to three accounts, but they actually reached an influential person, got them to do something or share a news story for them. And that actually had more of an impact than anything else. And so it's not always the best proxy to use.
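To make the numbers-versus-impact point concrete, here is a toy illustration in Python; all figures are hypothetical and are only meant to show why raw account counts and reach can mislead, per Olga's point above.

    # Toy illustration of why account counts and raw reach are flawed
    # proxies for impact. All numbers are hypothetical.
    campaigns = {
        # name: (fake_accounts, total_impressions, influential_pickups)
        "mass_network": (5000, 12000, 0),  # big network, almost no traction
        "small_network": (3, 900, 1),      # tiny network, but a journalist amplified it
    }

    for name, (accounts, impressions, pickups) in campaigns.items():
        print(f"{name}: {impressions / accounts:.1f} impressions per account, "
              f"{pickups} influential pickup(s)")

On pure volume, the first campaign dominates; on any plausible measure of influence, the second one does.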
00:27:40
Speaker
A more complicated one is, as I said, whether a real individual took the bait and someone actually disseminated that information for them. Some researchers in this space, including Alicia Wanless and Kate Starbird, talk about participatory disinformation or
00:27:56
Speaker
the sort of participatory space in which this takes place because ultimately, this is not, you know, television, this is not radio, this is a participatory environment online, where everybody's communicating with one another all the time. And that's what, you know, fundamentally makes a lot of this different.

Non-State Actors: Troll Farms and Financial Motives

00:28:14
Speaker
So let's move away from the government context now. And I'd like to hear from from both of you.
00:28:19
Speaker
Let's talk about the extremist actors and the individual actors that are out there propagating these mis- and disinformation narratives. Can we talk about their methodologies, the tools they use that might be different to those that we've already discussed the nation states deploy?
00:28:32
Speaker
So I can start. Aside from government, you've got privately run troll farms, like we discussed, that maybe have some government ties. You have financially motivated troll farms that are just trying to make money off of clicks, like the Macedonians. And then you have non-state actors, political parties, advocacy groups, and private companies that all have an incentive, as Regina mentioned earlier, to share their story, using both
00:29:00
Speaker
the methods that any marketing agency or promotional organization would use, and more nefarious techniques like creating fake accounts. The distinction between them, which I made earlier and which I think is important, is that if you can think about what someone's incentives are, then you might be able to more effectively address them.
00:29:20
Speaker
Take, for example, Macedonian troll farms. If they are trying to drive someone to their website, which is the point of sale where they will make money off of the clicks that are on that website, then they're going to be trying to get your attention away from a social media platform and onto the place where they'll be able to make money.
00:29:40
Speaker
So when you think about that from the perspective of an investigator, it's going to look a little bit different on a network graph when you're doing analysis than a troll farm that is just trying to keep your attention and build a relationship with you. You're going to have longer-running personas and relationship-building that these troll farms invest in, compared with someone who is just interested in getting you to go somewhere else.
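As a rough sketch of that behavioral distinction, the Python fragment below classifies an account by whether its posting is dominated by outbound links (the click-driving, financially motivated pattern) or by replies (the relationship-building persona pattern). The features, thresholds, and sample data are all illustrative assumptions, not any platform's actual detection logic.

    def classify_account(posts):
        """posts: list of dicts like {"has_external_link": bool, "is_reply": bool}."""
        link_share = sum(p["has_external_link"] for p in posts) / len(posts)
        reply_share = sum(p["is_reply"] for p in posts) / len(posts)
        if link_share > 0.7:    # mostly pushing traffic off-platform
            return "click-driving pattern (financially motivated?)"
        if reply_share > 0.6:   # mostly investing in conversations
            return "relationship-building pattern (persona farm?)"
        return "no clear pattern"

    sample = [{"has_external_link": True, "is_reply": False}] * 8 \
           + [{"has_external_link": False, "is_reply": True}] * 2
    print(classify_account(sample))  # -> click-driving pattern (financially motivated?)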
00:30:05
Speaker
And so a lot of those techniques will look different to an investigator, and also to the guys that are engaging in these campaigns in the first place. Regina, please jump in. Yeah, so in terms of the way things look, I think it's
00:30:24
Speaker
One thing is that an effective campaign will look very similar to what a normal, organic conversation looks like on a particular platform online. So I'll give a couple of examples, one a sort of more recent example, and
00:30:42
Speaker
Well, they're both semi-recent and haven't really been discussed, let's say, in media, but it was something that we noticed in 2022 that anti-government extremists, particularly in the U.S., in sort of a, quote, unquote, post-pandemic,
00:30:58
Speaker
space, a lot of these existing channels and existing groups online that were previously anti-COVID-mandate, and pushing those sorts of false narratives there, began pushing a new narrative about food shortages: that governments and Big Agra were engineering food shortages in 2022.
00:31:22
Speaker
And this narrative initially spread on platforms such as Gab and the .win communities. But as Olga mentioned, it was promoted in a more mainstream sense on Tucker Carlson's show, which further amplified these narratives in those initial spaces on Gab and the .win communities as a verification of the narrative, and it then also spread to more mainstream
00:31:51
Speaker
platforms. Now, some of these channels on Telegram that were pushing this also push ideas of a parallel economy. This is something that's very big within the extremist community. These are also anti-corporate sorts of channels. These were ones that were pushing
00:32:14
Speaker
the anti-Bud Light, anti-Target, anti-Disney types of messages from earlier this year, and even in years past. And one question, though I haven't done this investigation: thinking about the short seller toolkit, the ways that short sellers would push narratives to drive down stock prices, you would see somewhat similar things here.
00:32:42
Speaker
Within these communities, these extremist communities, they would talk about the stock price of some of these companies that they were targeting with the narratives that they were pushing, saying that we should boycott Bud Light, Budweiser, and Anheuser-Busch, and then following the stock prices falling. Same thing with Target, same thing with Disney.
00:33:05
Speaker
I don't know if there was any sort of short seller campaign tied in with this, but I think it would be an interesting thing to look at, as I think there's been quite a bit of movement since the Reddit GameStop community that built up during the pandemic.
00:33:25
Speaker
Let's just stick with this idea of sock puppets versus bots. What do we mean when we use these terms in the information ecosystem? How do these augment these actual humans that you've both been talking about that are seeking to influence the environment? What do these terms mean and how are they used?
00:33:44
Speaker
So I can start really quickly. I mean, I think it's important that the word bot comes from the word robot, right? And a lot of people forget that. Depending on what country you are in the world, people will use the terms bot and troll interchangeably. And that can, in and of itself, be problematic, because ultimately, when you're thinking about trolls and troll farms, there's a lot more effort being put into developing a persona, right? There's a person behind the computer,
00:34:11
Speaker
And they are spending some time to try to backstop their persona. They are maybe creating a website or they're creating multiple social media profiles that look the same so that when someone goes to dig around and say, okay, well, is this a real person? Let me see if I can find them on LinkedIn or find them somewhere else. And then, you know, and then they'll have a bit of
00:34:31
Speaker
more information that they can find in other places. That takes effort; that takes time. And you need to buy a pack of SIMs, use a phone number to create a fake account, an email address, and build up the profile and all that. That's what trolls do at troll farms.
00:34:49
Speaker
Bots, of course, are artificially generated. Some platforms completely block efforts to create bots, and it's easier for an artificial intelligence platform to catch an automated account, because automation finds automation. There are certain triggers that you can develop to say,
00:35:10
Speaker
these are all the different things that we've seen previously with a fake account, so let's find more like them, right? Because you're training a computer to find things that look like a computer. So that's an important distinction there: bots versus trolls, and people actually spending time to develop a persona or not.
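One minimal example of "automation finds automation": purely machine-generated accounts often post at suspiciously regular intervals. The sketch below flags an account whose inter-post gaps barely vary; the jitter threshold and the five-gap minimum are arbitrary assumptions for illustration.

    from statistics import pstdev

    def looks_automated(post_times_sec, max_jitter=2.0):
        # Compute the gaps between consecutive posts; near-zero variance
        # in those gaps is a classic machine-scheduling signal.
        gaps = [b - a for a, b in zip(post_times_sec, post_times_sec[1:])]
        return len(gaps) >= 5 and pstdev(gaps) < max_jitter

    # An account posting exactly every 60 seconds gets flagged.
    print(looks_automated([0, 60, 120, 180, 240, 300, 360]))  # True

A human account's gaps span minutes, hours, and days, with large variance, and would not trip this check.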
00:35:30
Speaker
I think with bots, particularly on social media, it refers to a program that is designed to automate interactions. So these can be innocuous, such as the bot on Reddit that comments below any post that contains all the necessary components of a haiku, something like this.
00:35:51
Speaker
It can also be informative. There's a bot on Twitter that will tweet every time a filing is made in a very important federal case in U.S. District Court, and that's a bot. There are other bots that are amplifying a particular message or narrative in the form of a campaign, and these are usually in the form of a retweet or a reply.
00:36:19
Speaker
Bots can be particularly spammy. They can be very easy to detect. The more difficult type of actor is the sock puppet account. Usually, one person at a troll farm might control several different sock puppet accounts. And sometimes those sock puppet accounts are all talking to each other, so they form a community, or they're part of a larger community.
00:36:46
Speaker
And it's a way to mitigate detection on behalf of the social media platforms that they're operating on, and also to mimic organic conversation on these platforms. So it's definitely a more costly, a more time-intensive process than just a simple bot.
00:37:11
Speaker
So I think with respect to recent campaigns that I've seen on social media dealing with sock puppet accounts, these are much more sophisticated accounts than just spammy bots.

Sock Puppet Accounts vs. Bots: A Comparison

00:37:24
Speaker
You can detect them by using different types of statistical analysis. Olga mentioned some of these things, like post volume, seeing how much they reply, how much interaction they have coming in versus going out, things like this. They're not perfect markers, but they're the ways that we have to analyze a campaign, and this is what we have at the moment.
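A minimal sketch of the statistical markers Regina mentions, with hypothetical accounts and arbitrary thresholds: post volume, plus the ratio of interaction going out (replies sent) to interaction coming in (replies received). Sock puppets that blast replies into communities but attract little genuine engagement show a lopsided ratio.

    accounts = {
        # name: (posts_per_day, replies_sent, replies_received)
        "organic_user": (6, 10, 12),
        "suspect_1": (140, 300, 4),
        "suspect_2": (95, 220, 2),
    }

    for name, (volume, sent, received) in accounts.items():
        out_in_ratio = sent / max(received, 1)  # avoid division by zero
        flagged = volume > 50 and out_in_ratio > 10
        print(f"{name}: volume={volume}, out/in={out_in_ratio:.0f}, flagged={flagged}")

As she says, these are imperfect markers, useful for prioritizing which accounts to investigate rather than for proving anything.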
00:37:50
Speaker
Another thing is that many of these sock puppet accounts, particularly if they're really busy, might participate in what's called copypasta. So they use copy-pasting techniques where maybe one thing is slightly different, but you can really see the copy-pasting. And this is not a bot; this is a real person just tweaking a certain word or a hashtag a little bit,
00:38:17
Speaker
making sure to either tag specific accounts, mention specific narratives that are part of their goal, that sort of thing.
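Copypasta is detectable precisely because the tweaks are small. Here is a minimal standard-library sketch (the sample posts are invented) that flags pairs of posts whose text is near-identical apart from a tweaked word or hashtag:

    from difflib import SequenceMatcher

    posts = [
        "Boycott now! The truth is out there #shortage",
        "Boycott now!! The truth is out there #shortages",
        "A completely unrelated post about my dog.",
    ]

    def near_duplicates(posts, threshold=0.9):
        # Compare every pair of posts; flag pairs that are almost identical.
        pairs = []
        for i in range(len(posts)):
            for j in range(i + 1, len(posts)):
                if SequenceMatcher(None, posts[i], posts[j]).ratio() >= threshold:
                    pairs.append((i, j))
        return pairs

    print(near_duplicates(posts))  # -> [(0, 1)]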

AI's Role in Disinformation: Potential Impacts

00:38:28
Speaker
How is the evolution of AI technology changing the nature of the threat for you guys so far?
00:38:34
Speaker
I would say it is and it isn't. So in many ways, AI has been used for a long time, and we just weren't calling it that, or weren't getting as excited or nervous about it as we have been, collectively as a society, over the last couple of months, I would say. I don't know what happened in May,
00:38:52
Speaker
but all of a sudden, every workshop that I went to was about AI, and it just changed dramatically. But I've been teaching about generative techniques to create photos and video content for at least four years now; I took a look back at my syllabus. So, jokes aside, I will say that
00:39:12
Speaker
the advent of generative text and the public release of large language models have gotten people speculating about whether these things will fundamentally change this field. I am skeptical. Ultimately, these tools are precisely that. They are tools.
00:39:30
Speaker
The investigative team that I worked with at Meta wasn't focused on content or narrative-driven investigative techniques for a reason, because behavioral anomaly detection was a more important signal in our investigations than the content. And I think that keeping things in that area means that you are protecting yourself from
00:39:51
Speaker
the advent of someone creating a whole bunch of content; that doesn't stop you from still being able to detect that they've created and used a false persona, right? There are still methods that you can use, which we've discussed here today, to identify whether someone is pretending to be someone that they're not, working shift hours, and other things like that.
00:40:12
Speaker
My thesis on all of this at the moment is that large language models will more so affect and help financially motivated actors, because those are the ones that are prioritizing scale over anything else. So if you're thinking about who's going to benefit from being able to create lots of content more quickly, it's probably those guys, but not necessarily nation state threat actors or others in the influence operations and troll farm game that I've seen.
00:40:40
Speaker
There are some good examples of deep fakes or cheap fakes being pre-bunked, like the example from sometime in March 2022, shortly after the beginning of the second invasion of Ukraine, when there was a pretty terrible fake video of Zelensky where his neck didn't quite align with his head. And I would say it's a cheap fake
00:41:04
Speaker
rather than a deep fake. But Zelensky's team pre-bunked it; they knew it was coming, and it was sort of a non-story. That's why you probably don't even remember it. And so there are some examples of people dealing with artificial intelligence in this field.
00:41:19
Speaker
It is a technique. It's a tool. I think for some threat actors, it will make their jobs easier. But for some, it's just another technique to create a fake account or to create content. I'm trying to pour some cold water on this sort of AI scare right now.
00:41:37
Speaker
Yeah, I would agree. I think deepfakes are something to be concerned about, especially as the technology improves. There have been, as Olga mentioned, instances of deepfakes used in campaigns, but they're quite easy to detect, even with an untrained eye. It's not there yet. The use of LLMs in bot creation has been interesting.
00:42:02
Speaker
I think, just like Olga said, for these more profit-seeking type of campaigns, I think that's somewhere where a person that maybe doesn't have a robust programmer to help them create their bots could certainly use an LLM for their advantage.
00:42:24
Speaker
I recently learned of a way to identify bots by searching for the error messaging. So the error messaging that comes up from a ChatGPT prompt: if you put that into even an advanced Twitter search, you can start seeing where an error message from ChatGPT shows that someone clearly tried to create a bot via scripting
00:42:52
Speaker
on Twitter. This is something I learned just two weeks ago, shout out to the Dutch OSINT Guy, if anybody here follows him. So he taught me this. And it's been just fascinating to see what comes up. Obviously, these things are very easy to detect. They do come down, but they don't come down right away, which is also a topic for later in the show.
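For readers who want to try the trick Regina describes, here is a sketch of the idea: search for boilerplate LLM refusal text that leaks into posts when a scripted account pipes a ChatGPT error straight to Twitter. The specific phrases below are commonly cited examples, and the exact-phrase query uses standard Twitter advanced-search syntax; treat both as illustrative assumptions rather than a definitive tool.

    # Build exact-phrase Twitter advanced-search queries for common LLM
    # boilerplate strings that betray a scripted account.
    refusal_strings = [
        "as an AI language model",
        "I cannot fulfill this request",
    ]

    def search_query(phrase):
        # Quotes force an exact-phrase match; -filter:retweets drops reposts.
        return f'"{phrase}" -filter:retweets'

    for phrase in refusal_strings:
        print(search_query(phrase))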
00:43:17
Speaker
But yeah, that's been interesting. And I'm sure that things are going to improve. The technology is going to improve, especially with the deep

Government Regulation of AI and Disinformation

00:43:29
Speaker
fakes. And I think it's something that is sort of a wait and see. Well, you've been listening to How to Run a Troll Farm with Regina Morales and Olga Belogolova. After the break, we'll discuss how to fight back against this threat.
00:43:51
Speaker
You have been listening to How to Get on a Watchlist, the podcast series from Encyclopedia Geopolitica. If you like this show, don't forget to check out our other content at Encyclopedia Geopolitica, which you can find at howtogetonawatchlist.com, where you can find our analysis on various geopolitical issues, as well as reading lists covering topics like those discussed in the podcast.
00:44:15
Speaker
Please also consider subscribing to the podcast on your streaming platform of choice, giving us a rating and joining our Patreon.
00:44:33
Speaker
So who are the regulators in this space? And in particular, given that so much of this conversation in the information environment is happening on kind of private platforms, where does government start and stop on policing speech, policing information online? And where do private entities like social media companies start?
00:44:52
Speaker
So we were just talking about AI, and I think it's important to mention that the US government just put out an executive order on artificial intelligence, and it does include some language around the use of artificial intelligence for creating things like deepfakes and all of that. And so there are times where governments are starting to step in, especially on the AI space, in the early stages to try to understand how can
00:45:20
Speaker
these particular tools be used for harm and where can they get ahead of them? And so I think there's some lessons learned from the advent of social media and places where some governments were perhaps not thinking about regulation in the early days. That said, after 2016, most governments initially responded by creating task forces, setting up and standing up teams to conduct investigations to identify these types of campaigns, intelligence teams,
00:45:49
Speaker
people to interface with different companies, to share intelligence with one another. And of course, as you mentioned, some governments have stepped up the regulatory efforts more recently. But I would sort of break down what governments are doing, both in the US and Europe and the rest of the world into a couple different categories. So the first, which I mentioned before, investigations and intelligence. So actually doing similar work to what myself and Regina have done in different spaces.
00:46:17
Speaker
and collaborating with both private and public partners that have done that work. So the collaboration piece is important. And then the other areas include things like actually crafting regulations, right? What are they going to do to try to limit the capability of social media companies and people using those social media platforms to engage in these types of activities?
00:46:42
Speaker
Those have been still in the works. Some of them have not passed in some countries. The most famous example is the Digital Services Act in Europe. That's something that's still being understood, whether it's having an impact in the space. It's only really been around officially since this last summer.
00:47:03
Speaker
And then there's things like sanctions and that's an area where I think I've seen some really effective, you know, understanding of this space from the US Treasury Department, for example, where they're really looking along the kill chain of what someone is doing to run these types of campaigns and troll farms and thinking about where they can make it more difficult for these bad guys to do their jobs. So I'll give you one example.
00:47:29
Speaker
In order to create a fake account, you often need to have a fake ID. One of the sanctioned entities by the US Treasury Department was a Pakistani fake ID company that was providing fake IDs for Russian threat actors that were using them to create social media accounts. That's a place where you can really think about driving up the cost for threat actors, making it more difficult for them to do their jobs.
00:47:53
Speaker
And one other area: we're talking about defense, but you can also talk about offense. There are certain governments around the world that have actually gotten into the offense game. There are a couple of different ways they have done so. One specific example is from 2018, ahead of the U.S. midterm elections, when
00:48:13
Speaker
Cyber Command in the United States actually disrupted service at the Internet Research Agency. Now, one can argue whether disrupting service for a day or more at a troll farm is actually going to fundamentally change anything, especially right before an election, but we can have another conversation about that.
00:48:32
Speaker
But the other side of offense is, as Regina mentioned in the early days, that there are governments around the world that are also getting into the eye-for-an-eye, you-do-an-influence-operation-I-do-an-influence-operation game. And my team back in the day at Meta actually identified a French campaign that was sort of playing ball with the Russian campaign in Francophone Africa. And the French government, well, at the time it was attributed to the French military,
00:49:02
Speaker
And it was, you know, they were effectively engaging in the exact same activities, right, that the Russians were doing, which is creating fake accounts, pretending to be people that they were not. And the people that were sort of caught in the middle were the people that were living in those francophone African countries. But on both ends of the spectrum, we had the Russians and the French that were doing something. And the French government has actually publicly come out and said, we do these things, right? We do influence operations.
00:49:30
Speaker
And some governments do them but don't publicly acknowledge that they do, right? And so everybody's playing this game, but some are doing so quietly versus loudly. And there's a debate to be had about whether Western nations, liberal democracies, should be playing this game and doing an eye for an eye, or whether you lose credibility by doing what the bad guys are doing.
00:49:54
Speaker
Olga mentioned U.S. sanctions being particularly effective, and I think in response to what is going on in Ukraine, the U.S. has amplified its sanctions against Russia and intermediaries, in particular media outlets. So this is, again, on the overt side of things.
00:50:20
Speaker
But that is, I guess, one sort of effective way that regulations, not necessarily on the social media platforms themselves, but other sort of regulatory regimes have been able to combat the problem.

Social Media's Role in Combating Disinformation

00:50:37
Speaker
So we've talked about the role of the governments. Let's move on to the social media companies and the platforms where the speech is actually being hosted. Can you talk a little bit about the upsides and the downsides of relying on those social media platforms themselves to carry out this sort of regulation and enforcement when it comes to confronting this mal-, mis-, and disinformation?
00:50:58
Speaker
So, obviously with the caveat that I did work on a team that was doing this, and I'm quite proud of the things that we did, I think it's hit or miss depending on the company, right? There have been quite a few teams that have been stood up to conduct these types of investigations: at Meta, historically at Twitter, when it was known as Twitter, at Reddit, at LinkedIn, at TikTok,
00:51:27
Speaker
at Google, and I'm probably forgetting a couple of names. And so there are threat intelligence investigative teams that have been stood up. There are policy teams that have been stood up to try to do analysis of where they can actually take action on these problems. And I think, to the point that I made at the very beginning,
00:51:42
Speaker
The way you are effective when you are developing policies and addressing these things from a social media company's perspective is you know the limitations of your role. There are certain things that governments can do that you cannot. Some of the offensive actions that we talked about were sanctions and other things like that, where you might need to rely on another partner if you are perhaps not as trusted.
00:52:09
Speaker
Sharing your investigations with an independent organization, so that they can help build societal resilience by sharing their own findings, can be really effective. And having separate policies to deal with separate problems, right? You cannot meld these things together, and that's why we had separate policies for things like
00:52:30
Speaker
coordinated inauthentic behavior, which is how we dealt with troll farms, and state media, which we dealt with differently, right? We labeled state-controlled media outlets, as opposed to taking them down, because the problem is fundamentally different. And then taking action to make it more difficult for financially motivated actors to actually make money doing what they're doing. Famously, Google changed some of the Google Ads system that was being taken advantage of by the Macedonian troll farms.
00:53:00
Speaker
And then having distinct policies for things like misinformation and conspiracy theories where you're actually thinking about the ways in which you can build resilience in society, get people to see fact checks, and get more accurate information in front of their eyes.
00:53:15
Speaker
Those are all things that I think are effective, but also we are still trying to understand what is effective. There's a lot of research in this space that is being done by the research community to try to understand, okay, well, if you did this kind of fact check, who's going to read it? Is it actually going to make a difference for someone? And that's still being studied. I think at this point in time,
00:53:37
Speaker
as much as we want a quick solution to a lot of these problems, we still need to actually study how human beings are reacting to the things that are being shared. Of course, there's also the political conversation where any enforcement actions, collaboration with governments and other things are now being sort of targeted by certain individuals in the United States and others as some sort of nefarious campaign to censor people.
00:54:04
Speaker
when ultimately a lot of the people doing this work, and I know so many of them, are well-meaning individuals who are really even-handed when it comes to political bias and everything else, and really just trying to improve the information environment.

The Challenge of De-Radicalization

00:54:18
Speaker
So let's talk a bit about radicalization and de-radicalization in a polluted information environment. How do we go about de-radicalizing individuals who may have been consuming bad information sometimes for many years? And is there a success story or a model that kind of gives us a path towards, we could call it, improving our information hygiene? This is not my specialty, de-radicalization. I think there are some very good groups and people working on this.
00:54:48
Speaker
But it's difficult. Some people aren't really open to being de-radicalized. Some people don't necessarily want to know what's real or true or not. I don't really have a very good answer for this. One thing that was useful, that I had seen firsthand with respect to QAnon and some of the COVID-19 mandate
00:55:15
Speaker
misinformation and disinformation that was spreading over the last few years, is this idea of not wanting to be gamed in any kind of way: people don't like being seen as a victim or being manipulated. And so the times where news had come out that a particular personality,
00:55:38
Speaker
like an online personality, was a grifter or was receiving a lot of payment for putting out certain types of narratives, I think that was successful in maybe changing some people's minds about the content that they had internalized and come to believe was true.
00:55:59
Speaker
So in the first half, we talked about how private companies could be targeted by information operations in and of themselves, as specific targets of these

Corporate Strategy Against Online Campaigns

00:56:09
Speaker
things. If you're an executive of just a random corporation, not one that's a content provider, not a place where you're hosting this kind of stuff on your platform, how are you thinking about these inauthentic information operations that might come after you? What are the things you should be concerned about if you're not already concerned about them?
00:56:30
Speaker
Yeah, I think that this is something that we've seen time and time again with the private sector, the non-tech companies that aren't platforms hosting user-generated content: it's not a problem until it is a problem. And so a lot of companies aren't necessarily taking preventative measures or at least coming up with a game plan. And I think that one of the things that I've seen is that, with a lot of these large corporates,
00:56:59
Speaker
different areas of the corporation are so siloed. Security, HR, communications and PR, and legal departments should all come together to come up with a plan in the event that they end up being the target of some sort of coordinated campaign online. And often these things aren't really discussed, or
00:57:27
Speaker
the policy isn't really figured out until you're at a crisis point, in which case you're quite pressed for time. I think different companies have different thresholds as far as what response they want to make. There are some companies that are very vocal, and then others that prefer not to be vocal at all. The times that they have been targeted in some campaign,
00:57:54
Speaker
they would rather just not comment on it at all, whereas other companies are maybe more adept, or have more significant comms platforms as part of their corporate culture, where they do want to respond head-on to
00:58:18
Speaker
a particular campaign that has been waged against them. I think I talked earlier about some of the anti-woke, anti-ESG campaigns that have been waged online more recently, and these different companies answered these situations differently. Target answered differently versus Disney versus Anheuser-Busch.
00:58:45
Speaker
Those are just three that come to mind, because these are names that have been bandied about in the extremist fringe space for the last several months. That would be my answer for a corporate: take preventative measures and have policies in place, because while you may not be a target today, you could easily become a target tomorrow. All right, let's move on to our final question. For both of you, and we'll start with Olga: what keeps you up at night?

Trust Erosion and Its Impact on Society

00:59:15
Speaker
The thing that I think about the most is, I guess I'll cheat and I'll answer it two ways. One is this idea that I've spent a lot of time in some ways engaging in a game of whack-a-mole. I'm really proud of the work that the teams that I've worked with have done in the past.
00:59:35
Speaker
taking down these networks that have been manipulative and deceptive. But I also think a lot about how we need to make sure that we are not just removing harm, but also putting something in its place, filling the information environment with something valuable. You can't just remove harm, train people to spot it, and call it done. You have to
01:00:01
Speaker
find ways to get people better information and tell a better story. We've talked a little bit about Ukraine and what the Ukrainians have done, and I think one of the most effective things we've seen is that they found a way to tell a better story in a more compelling way. They didn't bore people with numbers or reports or anything like that, but they found a way to
01:00:26
Speaker
share memes of soldiers rescuing puppies and who doesn't want to see that, right? And who doesn't want to be on the side of the people that are rescuing the puppies as well, right? And I know that's a bit flippant, but I think that there is something to be said for finding a way to help people find better information and to not just address the problems, but to also think about how we provide people with better information.
01:00:56
Speaker
The other thing that I really think about a lot is something that my old team back in the day coined a term for: perception hacking. And I worry sometimes that our worries about these problems and the elevation of these threats make the average person not trust anything at all.
01:01:13
Speaker
One of my favorite books is Nothing Is True and Everything Is Possible by Peter Pomerantsev. Even just the title of that book captures the concept: when you create a society where people just don't trust anything that they read or hear, or they don't trust their institutions, then that's where we are fundamentally at risk. And that is something that these threat actors can sometimes do, but all together we can
01:01:42
Speaker
also exacerbate those problems, because if we're constantly talking about how everything is an influence operation, and we're not being clear about the evidence to support whether we actually know that a particular threat actor is behind something, we're sometimes giving them more credit than they deserve, right? We don't want to credit the Russians with everything, right?
01:02:02
Speaker
And we have to remember that institutions like democracy, elections, and the rule of law exist only because we believe in them, right? Concepts like that didn't exist at the dawn of human creation. They were invented by human beings to make sense of their society and to make that society function properly. And so we have to always be investing in those things and believing in them, and making sure that we don't turn everything into an influence operation. People have been lying forever.
01:02:32
Speaker
And Regina, what about you? What keeps you up at night? Yeah, I'll keep this pretty pithy. I sort of alluded to this earlier, and it's really paraphrasing another author that I really admire: a notion of truth from Hannah Arendt. One thing is, there have been lies, you know, people have been lying forever, yes.
01:03:00
Speaker
However, there is a moment where the sheer number of political lies raises the question: what does that do to a human being, and to the notion of truth for a human being? Once a person no longer believes in anything, and a populace no longer believes in anything, those in power can kind of do what they please. So that's one thing that sort of keeps me up at night:
01:03:25
Speaker
we're kind of operating in a space where we're wanting to show information that is true, or that is accurate, or that is factual. But at some point, with the bombardment of lies, people might not even really care about what is true and what is factual, and might have trouble believing anything at all. So that is something that keeps me up at night.
01:03:50
Speaker
Well, this has been a really interesting and, as always, slightly worrying discussion, so thank you both for joining us. Thank you so much for having us. It was a really, really interesting conversation, and I'm looking forward to having more like them. Yeah, thank you so much.

Podcast Conclusion and Call to Action

01:04:05
Speaker
Really enjoyed it. And yeah, thank you, Olga. Thank you. Thank you also to our hosts.
01:04:11
Speaker
We've been listening to How to Run a Troll Farm with Regina Morales and Olga Belogolova. Our producer for this episode was Edwin Tran, and our researchers were also your hosts, Colin Reed and me, Louis H. Prisant, as well as other unnamed members of the Encyclopedia Geopolitica team. To our audience, as always, thank you very much for listening.
01:04:31
Speaker
If you enjoyed this show, please consider checking out our other content at Encyclopedia Geopolitica. We'd also appreciate it if you could subscribe to the podcast, leave a review, or support us on Patreon. Thanks for listening.