
Fighting Disinformation with Deep Tech AI | Lyric Jain (Logically)

Founder Thesis
292 Plays · 2 years ago

"Even though conceptually there's a degree of product market fit, there are still very few people in the world who understand how to deal with misinformation and disinformation."

In this episode, Lyric Jain discusses the complexities of combating misinformation in the digital age. He emphasizes that while the need for solutions is evident, the expertise and understanding required to effectively address this challenge are still emerging.

Lyric Jain is the Founder and CEO of Logically, an AI-powered platform dedicated to analyzing information and assessing its credibility. Lyric is working with 3 of the largest democracies in the world and 3 of the largest platforms - Facebook, Instagram, and TikTok. Logically processes over 15 million pieces of content daily. Lyric holds a Master of Engineering from the University of Cambridge and has also studied at MIT and Harvard.

Key Insights from the Conversation:

  • Logically's technology is used by major social media platforms and government agencies to detect and address misinformation.
  • The company pivoted from a B2C consumer app to a B2B/B2G enterprise solution.
  • Logically employs a combination of AI-driven analysis and human fact-checking.
  • The episode explores the challenges of identifying and mitigating misinformation, including the rise of synthetic text and sophisticated disinformation campaigns.
  • The discussion covers the complexities of platform regulation and the need for independent organizations in the information integrity space.

Chapters:

  • 0:00:00 - Introduction to Lyric Jain and Logically
  • 0:02:16 - Pivoting to a B2B/B2G Business Model
  • 0:06:50 - The Challenges of the Consumer App
  • 0:07:49 - The Maharashtra Election Pilot
  • 0:12:33 - Scaling the Misinformation Solution
  • 0:23:34 - Selling to Platforms and Governments
  • 0:34:23 - Monetization and Business Model
  • 0:39:55 - Key Use Cases: Public Health, Safety, Elections, National Security
  • 0:45:37 - The Technology Behind Logically
  • 0:51:36 - The Enterprise Opportunity: Conspiracy-Driven Threats
  • 0:54:54 - Team Size, Growth, and Hiring
  • 1:01:35 - The Future of Misinformation and Logically's Role

Hashtags:

#AI #Misinformation #Disinformation #FactChecking #SocialMedia #Startups #Technology #Entrepreneurship #SaaS #B2B #GovTech #InformationIntegrity #DeepTech #ArtificialIntelligence #FounderThesis

Transcript

The Threat of Fake News

00:00:00
Speaker
Hi everyone, I'm Lyric, the founder and CEO of Logically. It's a pleasure to be with you today.
00:00:16
Speaker
All of us would have seen those forwarded videos on WhatsApp with crazy claims and many times we might have even forwarded such videos without realizing that they are fake. In the hyper-connected world today, fake news is more than just a nuisance. It can be life-threatening and can change the fates of entire nations.
00:00:34
Speaker
Think about the January 6 riots at the US Capitol and the recent riots in Brazil, which were born in the dark corners of social media platforms and became mainstream movements threatening entire nations.

Logically's Evolution into a Misinformation Fighter

00:00:46
Speaker
In this episode of the Founder Thesis Podcast, your host Akshay Dutt talks to Lyric Jain, a UK-based entrepreneur who is the founder of Logically.
00:00:55
Speaker
a start-up that is fighting fake news the world over. In this fascinating conversation, Lyric talks about the journey of building Logically, which started as a consumer news app but soon pivoted to fighting fake news with a focus on elections. He talks about their unique approach to fighting fake news at scale with a mix of AI and human intelligence. Stay tuned for the conversation, and subscribe to the Founder Thesis Podcast on any audio streaming platform to learn about how start-ups are fighting evil while building a sustainable business.
00:01:32
Speaker
I was born

Lyric Jain's Background and Inspiration

00:01:33
Speaker
to North Indian parents in South India. So we were one of those North Indian families in the middle of Karnataka that didn't know how to speak any Kannada. My dad is an entrepreneur, but of a very different variety than an innovation-driven enterprise. His life story is almost more interesting than mine in some ways. He was the son of a headmaster in a village in Haryana. The family were okay, I'd say a typical middle-class family, but
00:01:59
Speaker
unfortunately, they were hit hard during, I believe, one of the wars and lost pretty much everything that they had. And dad rebuilt himself from scratch and kind of got there. After marrying mum, he did a job as basically a textile worker in a factory for a few years. I hear stories of him working some ridiculous hours, which
00:02:15
Speaker
I can't even compete with: multiple weeks at a time without coming home from the factory. And eventually he was promoted to manager, and then eventually was able to raise money to build his own textile plant. And interestingly (I didn't give him enough credit for the innovation), he actually ended up coming to the UK to buy old textile mill equipment, factory machines, et cetera, because a lot of factories in the UK were closing, and moved them to India, to, I think, Goa first.
00:02:41
Speaker
And that's where his entrepreneurship journey really began. He set up a couple of factories, then got into a little bit of real estate, et cetera. And then eventually, when my sister moved to the UK, we all moved to the UK. We moved when I was 12, I think 12 or 13. So I was
00:02:57
Speaker
halfway through eighth standard. I remember life being pretty easy; maths and science felt easy coming from India. But I really struggled with languages. French and Spanish, I'd never done them, and I really struggled with them over the first couple of years. I made my way through school, then came to a crossroads moment where I was quite intrigued by the world of finance, but also by the world of engineering. Pre-university, there was this kind of ongoing debate: which direction do I go in?
00:03:18
Speaker
and

The MIT Journey and Birth of Logically

00:03:19
Speaker
Pieces of advice from multiple people ended up pointing me towards the engineering route. I went to Cambridge, had great times with great people, but even then my head was still turned by finance, and investment banking in particular. At Cambridge, you did like a bachelor's of science with a computer science specialization or something like that?
00:03:36
Speaker
No, so it was general engineering in Cambridge. It was when I went to MIT that I saw it as my one opportunity: hey, if I'm ever getting into computer science, it's here; otherwise, I'm never really getting into it. I'd done some computer science, et cetera, at Cambridge, and credit to Cambridge, I think it did give a solid foundation of computer science to be able to pursue that at MIT, even if they didn't really teach it very well. And at MIT, I really got into artificial intelligence. How did, like, from Cambridge to MIT, like, how did that happen?
00:04:05
Speaker
So this was part of a kind of program that Cambridge and MIT had, because it was a joint master's and bachelor's... Oh, like an exchange? Yeah, because it was a joint master's, and depending on your first-year results, your supervisor and college could nominate you to go to MIT, and the other way around, people at MIT could be nominated to go to Cambridge. It ended up being a pretty varied MIT experience, but that was fun. But that was really where the origin story of Logically begins.
00:04:28
Speaker
Okay, I can see on your LinkedIn that you started out while you were still studying. So how did that happen? It was, unfortunately, a series of really strange events. A bit of family tragedy in 2014, 2015: my grandma, she was 86 at the time, but she still used WhatsApp. And she got this tirade of messages saying, hey, drink this special green juice, give up your cancer meds, and you'll live longer. And unfortunately, we lost her a lot earlier than we would have. But at that time,
00:04:53
Speaker
no, very few people really thought of misinformation or disinformation as a problem, and those concepts were pretty poorly defined at that stage. I just thought it was fraud. But I really started to develop an interest in social media information dynamics in 2016, particularly in the run-up to the EU referendum. My experience there was quite novel,
00:05:12
Speaker
because home for me in the UK is a little town called Stone, in the middle of nowhere. And it happens to be the highest Brexit-voting constituency in all of the UK. And where I was at the time, Cambridge, happens to be the highest Remain-voting constituency in all of the UK. So it was this perfect storm of having one leg in both poles, almost.
00:05:31
Speaker
And I vividly remember one of those moments where a friend from Stone came over to Cambridge, and I compared feeds with a friend from Cambridge: completely different information, rife with misinformation, and obviously they made very different decisions. But really it was the misinformation aspect, how much of that was creeping into their feeds, and how much
00:05:49
Speaker
almost social engineering was creeping into their feeds, that was quite interesting for me. At that time, very few had made these kinds of observations. But it still felt like: hey, big problem, but it's like world hunger, someone will solve it, I don't know if it's for me to solve. It was really when I was at MIT that the problem met potential solutions, during my time at CSAIL and the Media Lab there, seeing how AI was evolving, particularly around content understanding.
00:06:14
Speaker
I think NLP, et cetera, had started to be reasonably well evolved at that point. And really this area of NLU, natural language understanding, had started to become quite interesting. There were some promising breakthroughs that year and the year before. So I really wanted to start applying some of those to this problem context.
00:06:29
Speaker
What is CSAIL? CSAIL, that's the AI lab at MIT. So there's the MIT Media Lab and there's CSAIL, which would be the AI lab. And I got one foot in both those camps, doing a lot of research, basically for my coursework pretty much. And that's where I took a focus, particularly in content assessment and content risk assessment, et cetera.
00:06:48
Speaker
So these labs do fundamental research? They would be, I think, global leaders in fundamental research. What are these labs like, for people who don't know? Absolutely. CSAIL is one of the top labs for AI research out there, along with probably Stanford, and these days Toronto is doing some ridiculously good work.
00:07:05
Speaker
Yeah, those two or three labs would be right up there when it comes to global state-of-the-art AI research. These days, there are a bunch of private-sector or semi-private-sector and nonprofit labs that have entered the fray as well in AI, and various private-sector companies. But in terms of pure academia, I'd say CSAIL and Stanford are way up there.
00:07:22
Speaker
That gave me the opportunity to be in that space with researchers, but also to really focus some time and energy on this problem around misinformation and disinformation. These were still early days, late 2016, early 2017. And for me, the technical proof point to reach was: could we quickly hack something together that identified misinformation on a social media platform?
00:07:44
Speaker
Facebook. It was Facebook. How do we really quickly build something that can identify misinformation on Facebook? And that would be the test of whatever we built: is it good enough? Is it doing a better job than whatever the existing platforms and their measures are doing? And yeah, it took us a while, but we were able to build that within a few months. And that really was the milestone for us saying, hey, there's really a "there" there,
00:08:05
Speaker
that very few other people in the world are onto, but it's clearly going to be a substantial challenge moving forward, given how democracies are being moved because of stuff like this, but also these individual high-risk events, such as what my grandma experienced, happening because of it. That's when the Logically journey really began. It was a solo founder journey; I tried spending some time looking for a few people to build Logically together with me, but couldn't really find anyone with complementary skills and stuff like that.
00:08:32
Speaker
Yeah, and I ended up getting started and really building a team, an early team, when I returned to the UK. And during the early days, our focus was actually: let's build on the technology and improve the efficacy of our methodology so that it works more robustly, but
00:08:48
Speaker
at the same time, let's position it within products. And during

Challenges and Shifts in Focus

00:08:51
Speaker
the early days, our focus was very much consumer. It was: hey, if we're able to build a better news experience than pretty much every one of these social network companies or news aggregator companies out there, we're probably going to get a lot of traction. And after a couple of attempts at gradually improving execution, we managed to find a degree of product-market fit,
00:09:10
Speaker
particularly when it came to big crisis events. So be it elections or even the early days of COVID, when we launched the app it had hundreds of thousands of daily active users during those days and weeks. But then retention was terrible. I want to understand how you hacked together that MVP, like a tool which screens for misinformation and tags it.
00:09:29
Speaker
What was that MVP which you initially built? And obviously, later on, you told me, you tried to bring it into a consumer product for news, like a Google News kind of a product, I'm guessing. But what was the original MVP you built? Oh, the original MVP was effectively: give me a social post, be it a long-form article or a single claim on social media, can we check it, and can we check whether it contains any degree of
00:09:51
Speaker
misinformation risk. It was effectively just a script, something that we just had an API for. That was the original technical proof point. That was pre-company, just a technical milestone of saying, yeah, this is theoretically, scientifically possible. We've got to improve it in a million different ways, but the concept is viable. How did it give it a score? Essentially, it would give a score of how authentic a post or article is. How would it give this score?
00:10:15
Speaker
Yeah. So the early days were fairly primitive. I'm glad that pretty much everything that I built back in the day, every single thing, has been thrown out. So that's good. Some of the primitive methodology has been built on, but at that time it was: hey, let's look at the source and let's look at the content. In terms of source, let's have an index of how credible different organizations, et cetera, are.
00:10:34
Speaker
So like a New York Times or a Washington Post as the source would get a higher score versus some other? Yeah. And now those methods have become quite a bit more complex, going into things like domain expertise, funding sources of the organization, et cetera. But at that time it was fairly primitive. And then the way we'd look at misinformation risk within the content would be just comparing articles, et cetera, to each other. So it was almost a popularity-voting mechanism: hey, if a lot of credible sources are saying the same thing, then it's probably true. If a lot of non-credible sources are saying the same thing,
00:11:03
Speaker
and not a lot of the credible ones, then it's probably false. So it was a pretty naive methodology. In terms of the reliability and accuracy of it, it ended up being pretty good. That alone is sufficient to get into the 90s when it comes to the efficacy of misinformation detection. But really, moving the needle from something that works 85% to 90% of the time to 98% to 99% of the time has been the challenge of the last few years. Gaining that next 10% in performance has been the challenge that's taken us another 180 people.
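The naive early methodology Lyric describes, a source-credibility index combined with popularity voting across corroborating sources, can be sketched in a few lines. This is an illustrative reconstruction, not Logically's actual code: the credibility values, the 0.5 default for unknown sources, and the 50/50 blend weighting are all assumptions for the sake of the example.

```python
# Hypothetical credibility index: established outlets score higher,
# unknown sources get a neutral default.
SOURCE_CREDIBILITY = {
    "nytimes.com": 0.9,
    "washingtonpost.com": 0.9,
    "unknown-blog.example": 0.2,
}

def credibility_score(source: str, corroborating_sources: list[str]) -> float:
    """Blend the source's own credibility with a corroboration 'vote'."""
    base = SOURCE_CREDIBILITY.get(source, 0.5)  # neutral default for unseen sources
    if not corroborating_sources:
        return base
    # Popularity-voting step: average credibility of everyone making the same claim.
    votes = [SOURCE_CREDIBILITY.get(s, 0.5) for s in corroborating_sources]
    corroboration = sum(votes) / len(votes)
    # Arbitrary 50/50 blend between the source itself and its corroborators.
    return 0.5 * base + 0.5 * corroboration

print(credibility_score("nytimes.com", ["washingtonpost.com"]))  # 0.9
```

As Lyric notes, a scheme this simple already separates the obvious cases; the hard part is the long tail where sources and corroborators are all unfamiliar.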
00:11:32
Speaker
How did you feed it data? Because what you're telling me sounds like it must consume a lot of news and posts and articles to really be accurate. So how were you feeding it data? Yeah. So this is also one of the things we maybe lost out on by being an early mover: we had to build a lot of our own scrapers, et cetera, because at that time,
00:11:55
Speaker
well, these days it's pretty easy to scrape stuff, and there are a hundred different scraping service providers, et cetera. Back then, there were some frameworks available, like Scrapy and BeautifulSoup, et cetera. So we ended up cobbling those together to build our own scrapers for multiple different news websites. We ended up licensing some content as well, and that made up a lot of our database of long-form content. And at that time, we had the Twitter API that we licensed from Twitter to get some of that social context.
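A toy version of the kind of article scraper the team would have cobbled together. To keep it dependency-free and runnable as-is, this sketch uses only the standard library's html.parser rather than Scrapy or BeautifulSoup; the page structure and tag choices here are hypothetical.

```python
from html.parser import HTMLParser

class ArticleExtractor(HTMLParser):
    """Collect the text inside <p> tags of an article page."""
    def __init__(self):
        super().__init__()
        self.in_paragraph = False
        self.paragraphs: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_paragraph = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_paragraph = False

    def handle_data(self, data):
        if self.in_paragraph and data.strip():
            self.paragraphs.append(data.strip())

page = "<html><body><h1>Headline</h1><p>First claim.</p><p>Second claim.</p></body></html>"
parser = ArticleExtractor()
parser.feed(page)
print(parser.paragraphs)  # ['First claim.', 'Second claim.']
```

In practice, a framework like Scrapy adds the crawling, scheduling, and politeness (robots.txt, rate limiting) that a raw parser like this leaves out.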
00:12:19
Speaker
And that was about it. That was all the data that we needed. But when we moved to MVP land, we knew that data, and data scarcity, was going to be the biggest challenge in this space. Even before really doubling down and investing in our engineering teams, we started building out our content assessment teams, because fundamentally that's been the capacity that's been lacking globally in this space. There are a bunch of fact-checking organizations, a bunch of credibility assessment organizations,
00:12:44
Speaker
and they do some really important, powerful work, but three, four years ago, capacity was pretty constrained. So we ended up building in-house capacity for that, bringing in a dozen people, which is quite a lot for an early-stage startup, to focus purely on being that knowledge base for us: building a methodology for how to assess the credibility of websites, and building a methodology for
00:13:05
Speaker
So you were talking about that in-house team of a dozen people. These dozen people are the ones who decide this is an authoritative source of news, this is not authoritative, is that what they were doing? Yes, partly. Even within the team, it wasn't up to one person. We had this almost internal jury system where people needed to come up with different views, and then an assessment would be made collaboratively within that team. And there would be things like inter-rater agreement, et cetera, that we'd take into account
00:13:32
Speaker
before coming up with the Logically score. Equally, at that time, a lot of sentiment analysis capabilities in the field were pretty basic. So figuring out how we model stance and entity sentiment, et cetera, was also something that team helped us build out. And also just our libraries of
00:13:48
Speaker
misinformation and disinformation. And really, that's what gives us a lot of the robustness that we have today, because that's the data that we've been building for three, four years now. That team has scaled; we have some excellent partners that help us, and our platform partnerships contribute to that data pool now. But fundamentally, this is a data scarcity challenge. One of the challenges of misinformation detection is proportionately how little misinformation actually exists relative to all information on the internet.
00:14:17
Speaker
It's in single-digit percentages, it's not like 50%, which makes it a tricky classification problem, or a tricky detection problem. That means we need to have good representative coverage of the different styles and modalities within misinformation. That's what the team really started mapping out. And I wouldn't say we've mapped out every single type. Today we cover the geopolitical context pretty well, and we cover some health topics really well.
00:14:37
Speaker
But when it comes to, say, financial misinformation and disinformation, that's pretty much untouched by Logically. So, long ways to go. But yeah, that's the roadmap for the ongoing efforts that this team still makes.
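The class-imbalance point Lyric raises, misinformation being only a single-digit percentage of content, can be made concrete with a back-of-the-envelope calculation. The figures below (a 5% misinformation rate, a 90%-precision/90%-recall detector) are illustrative assumptions, not Logically's numbers; the point is that raw accuracy is nearly meaningless on such skewed data.

```python
total = 1_000_000
positives = 50_000          # assume 5% of content is misinformation

# Lazy baseline: predict "clean" for everything. It catches nothing,
# yet its accuracy looks impressive because of the imbalance.
baseline_accuracy = (total - positives) / total
print(f"baseline accuracy: {baseline_accuracy:.0%}")  # 95%

# A real detector with 90% recall and 90% precision:
recall, precision = 0.90, 0.90
true_pos = recall * positives                        # items correctly flagged
false_pos = true_pos * (1 - precision) / precision   # false alarms implied by precision
accuracy = (total - positives + true_pos - false_pos) / total
print(f"detector accuracy: {accuracy:.1%}")  # 99.0%
```

This is why the conversation is framed in terms of precision and recall rather than accuracy: moving from "works 90% of the time" to 98-99% means driving down both misses and false alarms on a tiny positive class.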
00:14:47
Speaker
How did you fund this? You started this when you hadn't even graduated yet, and you hired people. How did you make that happen? Yeah, that was thankfully a lot to do with family support. It was a leap of faith for the first few years. We were able to run a pretty bootstrapped operation for the first year or so, before we got our first round of seed funding in from a UK-based VC.
00:15:10
Speaker
That first year, getting up to the point of initial traction, that was thanks to a lot of family support, but also a huge leap of faith that our initial team took on us. And a lot of those folks are still with us today; they're our day-oners. And I always admire anyone across our journey who's taken that leap of faith, because pretty much every year we de-risk our journey, so individuals who joined us four years ago, three years ago, two years ago did decide to take on a lot of risk when joining us,
00:15:35
Speaker
when we were a lot scrappier than we are today. So yeah, thanks to their investment, as well as family, we were able to build something in the form of that consumer application initially, which got a little bit of traction that warranted a seed investment. But more than the consumer app, really, what the investment was in was the underlying concept and the underlying detection methodology, because that was really what our core IP was.
00:15:56
Speaker
Okay. What was the consumer app called? How did you position it? Oh, it was called Logically. Very imaginatively named. It was supposed to be this destination, this one-stop shop for news consumption, where people would have automated feeds with the big stories. Each story would be a collection of multiple articles that were on the same underlying event or
00:16:16
Speaker
issue that had occurred. It would present an objective summary and a number of bullet points of, hey, here's what's happened, plus multiple viewpoints from across the political spectrum, reflected in a timeline of events. And all of this was automatically generated and contextualized through our platform. In the post-MVP version, we also augmented that with a semi-automated fact-checking service: automated fact-checking and image verification, supported by our AI as a first pass.
00:16:46
Speaker
And if consumers didn't get an answer through the automation, they were able to ask our fact-checking teams for an answer. It was an amazing experience, particularly during election events. And really, the first test case for us was the Indian elections. During, I think, the 2019 Lok Sabha elections, we launched the app and got a pretty good amount of traction.
00:17:05
Speaker
But as soon as May subsided, we just saw the retention numbers completely die. We thought that was a lot to do with our execution, and we knew we had UX/UI issues, so we reinvested in that side of what we were doing, in time for the Maharashtra elections, which we really wanted to get out for that year. A funny thing happened then.
00:17:23
Speaker
So in that election cycle we actually partnered with the Election Commission and local law enforcement to identify misinformation during the election. But really, it was supposed to be something that was branding and marketing for us and our consumer app. But that relationship and that engagement ended up becoming commercial.
00:17:43
Speaker
And that was the first bit of revenue that we got. And that's where life became a bit interesting for us, because we always knew there was a value proposition for Logically in our technology working directly with social platforms and with various public sector agencies, but we hadn't seriously explored it just yet. We'd been pretty single-mindedly focused until that point on the consumer app story.
00:18:01
Speaker
But on the consumer app side of the business, we were still in this place where we saw that retention dynamic again. And we were like, okay, great, next year there are the US elections coming up, but we need to have a hedge strategy. We went in with a big final roll of the dice for our consumer proposition, in its formation and conceptualization at the time. We reworked a lot of things with the app and launched it ahead of the American elections. We had an interesting feature for live fact-checking the presidential debates, which got, I think, 150,000 viewers, which is
00:18:31
Speaker
pretty big; even though the big news channels got single-digit millions, that was pretty big for us. We got featured in a bunch of places. But at the same time, building on top of our Maharashtra experience, we started working with battleground states in America in a commercial capacity, to build a software product, a SaaS POC, for how organizations could detect misinformation that could threaten election integrity, and other risks more generally. So that October, we rolled that out with a couple of partners.
00:19:00
Speaker
And that's really been the traction story of Logically for the last couple of years. I want to go back to when you raised the seed round through the consumer app. What was your pitch then? Was it advertising, the monetization pitch? No, it was subscription. At that time, we were running a few subscription experiments that were going slightly better in the UK than they were in India. The conversion

Government Collaborations and Impact

00:19:22
Speaker
rates in India were pretty poor, like less than a percent.
00:19:25
Speaker
But subscription would just give them access to content which is not behind a paywall? It's not like the subscription would also give them paywalled content? That was not part of the offering; it was content with fact-checking. That was the pitch to a consumer, somewhat?
00:19:42
Speaker
So it was free content? So there were a couple of tiers involved. There was freely accessible content that's contextualized, in that story concept I shared earlier, along with the fact-checking. And then some premium content as well. So we'd struck up partnerships with some of the premium publishers out there, the FT, WSJ, et cetera, and their content was going to be part of the Logically experience as well. So we were a bit further along in terms of
00:20:05
Speaker
commercial aspects, but really the big challenge there for us was retention. For a long time, we put that down to our execution and not the underlying market dynamics, but it feels like, for that kind of value proposition, there is serious and urgent demand for it during crisis events. Because even though we don't actively support the consumer app right now, it's still live,
00:20:26
Speaker
and in different markets, when there are surge events, we see little peaks and troughs in usage of the app. There are few ways in which we can commercially monetize that directly, but long-term we still want a way in which we can be in the hands of end users and deliver impact. I think there's still a role for Logically and consumers to work together to deliver impact, but it's just probably not at the top of our priority list, because
00:20:50
Speaker
we're focusing on these high-leverage markets where we can deliver amplified impact by working directly with platforms and governments, and that's the priority for us. The Maharashtra pilot, where they paid you for fact-checking and flagging fake news: what was the Maharashtra Election Commission getting? Was it getting a website where people could see that this is fake news, or what was it like? What was the product?
00:21:15
Speaker
No, no. Because this was supposed to mainly be a marketing initiative for us at that point, we didn't think of it commercially. It was this physical war room that we set up in their office. They had Logically branding everywhere, and we'd taken out space physically in their office, with loads of press and all that good stuff. But the main value prop there was identifying Model Code of Conduct violations,
00:21:36
Speaker
say, specifically, things like: hey, your polling has been moved from this location to this location. So really not political stuff, but stuff every single person can agree is wrong, fraudulent, and shouldn't be happening. That's the kind of stuff that we flagged for them, and it's their responsibility during that Model Code of Conduct period to identify these kinds of things. And the remarkable thing for us was that in the Lok Sabha cycle, the ECI across all of India had found 900 violations;
00:22:03
Speaker
we, in Maharashtra alone, I think the number we got to was 20,000. So in one state, we found 20 times more than what the status quo found across the whole nation just three months earlier. I think that really spoke to the scale of the challenge that exists in India. And this is, again, away from political misinformation and disinformation, which,
00:22:23
Speaker
I agree, needs to be handled sensitively. This is just stuff everyone can agree is wrong: hey, your ballot's been moved; hey, don't come to the election. Like, super obvious stuff that no well-intentioned person can say isn't misinformation or disinformation. Those are the kinds of things we focused on. We also focused on foreign interference, and started seeing some degree of fraud activity from the PRC and Pakistan involved during that campaign, even during Maharashtra, which was
00:22:48
Speaker
quite interesting to us, and that got us into some interesting conversations with various stakeholders in India. But that's really where Logically's intelligence story began. This was like you were monitoring Facebook and Twitter feeds? Facebook and Twitter posts were what you were monitoring and flagging?
00:23:03
Speaker
That's right. The way we delivered this had little to do with our consumer app. It was a lot of the backend APIs, et cetera, that we had. We just used to run them on these batches of content and feeds that either the ECI had access to or we had access to, download the results as a CSV, put them together in a document, and give it to them. And that was it. We would triage it, maybe prioritize it a little bit, and that's about it. So it was pretty,
00:23:27
Speaker
low-fidelity in some ways; it was very hacked together, because this wasn't supposed to be a product for us. But we really learned from that. And 12 months later, when it came to the American battleground states, we had a POC for an entire workflow: how the equivalents of the ECI in the US could define their information environment, define what they would consider a threat within that, and come up with a prioritization framework. We'd fine-tuned our models for very high precision and recall, and we then had a remediation process built in as well.
00:23:57
Speaker
So customers could identify and respond to misinformation and disinformation through our platform. And that was the first big product-market fit for us. In March last year, we ended up launching that as a commercially available product. But it's not been smooth sailing still, because even though conceptually there's a degree of product-market fit, there are still very few people in the world who understand how to deal with misinformation and disinformation.
00:24:21
Speaker
And that's where a lot of the team that we've been building over the last three or four years comes in. That was initially this team of assessors and fact-checkers and, later, open-source intelligence analysts, and they ended up effectively using our platform to deliver various reporting products to various customers. Because currently what we do is a blend of our customers using our platform directly, as well as us providing our partners with capacity on top of our platform. So it's almost that Palantir business model of platform only, or platform plus delivery, and
00:24:51
Speaker
Yeah, that's the go-to-market story over the last 15 months.
00:24:56
Speaker
So for the US election, when you had the product ready — I just want to understand that product a little better. This product would monitor social media activity about elections. Maybe there would be some tags or prominent accounts and such that you would identify as related to the election, and it would then create a repository of inauthentic posts, or posts which have a low score. And then you said there was
00:25:22
Speaker
like a mechanism where consumers could see the posts, a redressal mechanism — I didn't understand. So not consumers, but the departments of state, or whoever is responsible for acting on those risks — they can make multiple decisions. Again, it monitors cross-platform, both articles as well as social posts. We didn't have a lot of multimedia back then, so it was text-only and English-only. We assessed what had a degree of misinformation or disinformation risk, either because of content,
00:25:49
Speaker
or because bots were involved, or because a nation-state actor was involved, or because it looks like someone's impersonating an election official, or someone's calling for violence against an election official. Again, that's not an exhaustive list, but those were emblematic of the kinds of risks that we were identifying on the platform.
00:26:05
Speaker
And each one of those risks — be it an individual account, or a piece of content, or a piece of activity — could be investigated further through the platform. So we would contextualize it. Again, probably not in the October version, but in the more recent version of the platform, we contextualize it with who it's reaching, how many people it's reaching, which locations it's reaching people in, what demographics it's reaching — is it
00:26:25
Speaker
hyper-targeting people or is it pretty general, et cetera. And then it gives you those options of, hey, what do you want to do about this? Do you want to do nothing? Because sometimes that is the best option — don't give it any more oxygen. Sometimes it's, hey, this isn't just harmful activity, this is illegal activity, and we need to
00:26:41
Speaker
refer it to law enforcement or another agency. In other cases, hey, this is clearly a platform terms-of-service violation, let's flag it to the platform and see if they agree. And finally, it's coming up with some kind of fact-checking or communications response to a piece of misinformation. Those are examples of some of the remediations that were baked into the platform.
00:26:59
Speaker
So the reporting was baked in — quoting that tweet or that post and saying that this is fake news — all of this was baked in and could be done through Logically's products. Yeah. Okay, so I understood this part of it. Now tell me about the next part, which you described as platform, or platform plus service. Just help me understand that a bit. Sure. So
00:27:20
Speaker
really, it stems from the challenge of there not being enough capacity still in the counter-misinformation space. A lot of the organizations that need to tackle misinformation and disinformation don't have specific analysts or dedicated resources to go out and use platforms such as Logically to identify misinformation or disinformation. Because at the end of the day, there needs to be a user at the end of the platform to really deliver some amount of value. And most organizations don't have that.
00:27:47
Speaker
So you're saying that this act — reporting, or posting that this is fake news, or escalating to law enforcement — is something which you also provide as a service, because often... That's right. But it's not just that act. It's setting up the information environment. So right now, with elections coming up in India — say you're the CIO of the Election Commission and there's an election in two years' time — you need to figure out:
00:28:12
Speaker
what's your monitoring scope? What's your information environment? You're clearly not going to monitor, like, 10 billion pieces of content every day, because not everything on the internet is relevant to you. That's unfeasible for most organizations. So within your scope, how do you define what is election and election-adjacent? What's an MCC violation and MCC-violation-adjacent? The platform can help someone do that, but it needs a degree of subject matter expertise and a user who's trained to do that.
00:28:38
Speaker
Then you need some kind of framework for what you prioritize, because the number of threats here will be in the thousands, or even in the tens or hundreds of thousands. So what's your prioritization framework? What do you as an organization care most about based on your policies? Logically can't really decide that. We can advise, yes. Again, it's as explicit as this: do you care more about, and is this a bigger concern for you, an election worker getting killed by a conspiracy theorist,
00:29:06
Speaker
or a few people believing that the election was hacked? Which one is a bigger risk to you, and which one do you care more about? That shapes the entire policy. Again, it won't be as blunt as that, but effectively: what's your prioritization framework? And that's for a customer to build; Logically can advise, our services team can obviously consult. And then it's about, okay, what does a proportionate and effective response look like? A lot of
00:29:28
Speaker
inexperienced people in the space will be like, hey, it's misinformation, it's disinformation, let's just take it all down. No — you'll make the problem a lot worse by just doing that. A proportionate and effective response means looking at it and then making a calculated decision based on what the potential impact would be of taking something down, or reporting it to a platform, or escalating it, or actively doing nothing, or putting out a fact check. Again, we're baking all those things into the platform as part of the roadmap to make it easier for future users. But for today, that's a degree of specialism and expertise that
00:29:57
Speaker
users need to have — users need to be trained and certified to be able to use the platform. And some organizations, including in India, have those super-expert users, but a lot of organizations don't. And that's where our accredited teams can step in as well, be it our fact-checking teams or our open-source intelligence teams, and provide that capacity should it be needed.
00:30:19
Speaker
What do you mean by open-source intelligence teams? It sounds like a really fancy term. One of the ways you can think about it: they're expert Googlers in some ways, expert researchers in the online landscape. I do them a disservice by calling them that — it's a bit more complex than that — but effectively it's a discipline within intelligence gathering and intelligence analysis. You might have heard of signals intelligence or human intelligence — the James Bond-y stuff is human intelligence, or some part of it is.
00:30:45
Speaker
And open-source intelligence is really anything in the open-source domain — the publicly available, publicly accessible domain. How do you build intelligence and a common operating picture from what exists in the open-source domain? How do you detect threats from what exists there? That's really the open-source intelligence discipline.
00:31:03
Speaker
The engagement with a government body, like an election commission, starts with scoping. So what does scoping mean? Does it mean that they give you the keywords, or the geo-tags — the locations they want monitored? Do they give you the accounts of the people who are standing for election so that those accounts can be monitored? What all comes into scoping? All of the above. And it has to do a little bit with who the organization is.
00:31:33
Speaker
So when it comes to, say, accounts — if you want to monitor accounts, you have to be a very specific type of organization with very specific authorizations for Logically to ever monitor accounts, because that's almost surveillance. Depending on the regulation of a particular country, that's pretty much on the surveillance side of things. If you have the authorization, we'll obviously do it and the platform will do it.
00:31:53
Speaker
Usually it's defined on the basis of content, or location, or what audiences it's potentially reaching — relevance in a given location or for a given community, things such as that. It's a combination of all those factors. We have a love-hate relationship with things like keywords. They're a very blunt instrument. You can think of it as, hey, someone wants to look for the word "bomb" because they're looking for, I don't know, threats of car bombs. But then they'll catch things like, hey, this car is the bomb, or
00:32:22
Speaker
oh, that song was the bomb. Precision is going to be pretty poor. That's the whole point of a lot of the intelligent systems we have supporting our ingestion as well as our threat detection: they filter out a lot of those false positives. So scoping is figuring out what the information environment looks like on the basis of content, accounts, and activity, as well as locations and demography.
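The keyword problem described here can be sketched in a toy way. This is not Logically's actual pipeline — the context word lists and the two-pass structure are invented for illustration — but it shows why a blunt keyword match needs a second, context-aware pass to cut false positives:

```python
# Toy illustration (not Logically's real system): a bare keyword match on
# "bomb" flags slang uses, so a second pass checks surrounding context.

POSITIVE_CONTEXT = {"threat", "explosive", "device", "detonate"}  # assumed threat cues
SLANG_CONTEXT = {"song", "car", "party", "show"}                  # assumed slang cues

def keyword_hits(posts, keyword="bomb"):
    """First pass: blunt keyword match — high recall, poor precision."""
    return [p for p in posts if keyword in p.lower()]

def filter_false_positives(hits):
    """Second pass: keep a hit only if threat-like context is present
    and no slang cue is present."""
    kept = []
    for post in hits:
        words = set(post.lower().split())
        if words & POSITIVE_CONTEXT and not (words & SLANG_CONTEXT):
            kept.append(post)
    return kept

posts = [
    "that song was the bomb",
    "reports of a bomb threat near the polling station",
]
hits = keyword_hits(posts)             # both posts match the blunt filter
flagged = filter_false_positives(hits) # only the threat-like post survives
print(flagged)
```

In production such filtering would be done with trained language models rather than word lists, but the two-stage shape — broad ingestion, then false-positive suppression — is the point.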
00:32:44
Speaker
And so this is one part of your business — let's say the B2G business — where you're selling to government organizations.

Partnerships with Social Platforms

00:32:52
Speaker
What about platforms? Do you also sell something to platforms directly? Does Facebook or Twitter use your product or service?
00:33:00
Speaker
Yeah, I think it's publicly known. We work with Facebook, Instagram, and TikTok — partly through our intelligence platform and partly through our fact-checking. Again, the modality there is pretty similar. The only exception in the platform case is that sometimes they want us to provide the information, very much how we do in the government context. But a lot of the time, they already have feeds. They have feeds because their users have flagged
00:33:25
Speaker
various things on the platform as being potential misinformation. And that enters into our queues, either our automated queues or our teams'. It then goes through the assessment stage — again, either through our services team or through our platform — and comes up with an assessment. That assessment then goes to the platform, and the platform's then responsible for doing whatever on the basis of that assessment. They have their own policies. Facebook's policy is slightly different to TikTok's policy, which is slightly different to Twitter's policy, which is slightly different to Google's policy.
00:33:52
Speaker
So, based on these assessments, they apply and enforce their policies. So essentially, when someone hits "report this post for inappropriate content", that post will come to you? Only if it's misinformation. Oh, okay — when someone is reporting, they are asked to give a reason, like a dropdown or something. That's right. So if it's hate-related or child-safety-related, et cetera, that doesn't come to us.
00:34:16
Speaker
Many other organizations in the world do powerful work on that. For us, the domain where we specialize is misinformation and disinformation and associated harms — things that occur as a result of that underlying misinformation and disinformation. That's our core specialism.
00:34:31
Speaker
Okay. So when someone tags it for misinformation — reports it as inappropriate with misinformation as the reason — then that post comes to you to give Facebook back a decision: yes, this is misinformation, or to what degree it is. So you give Facebook back some information on which they will then take further action. And your decision-making can either be purely machine-driven, or at times the machine may not be able to give a clear decision, so it will go to a human.
00:34:59
Speaker
That's right. Again, these are all subtle little nuances for each platform. Some platforms have triggers of, hey, a thousand users need to report it before we send it to someone. For some platforms it's mostly about the concentration of users — if 10 users have reported it in 10 seconds, then we'll share it. Again, we're not involved in a lot of those
00:35:16
Speaker
policy decisions and the reasons why something enters into our feeds — those are decided based on their policies. It enters into our feeds and we're responsible for the assessment — for the robustness and expediency of those assessments — and platforms are then responsible for what they want to do on the basis of that assessment.
00:35:33
Speaker
How do you decide when you will give a machine-generated assessment and when a human will look at it? Confidence. Every assessment that we have is confidence-based — both our automated assessments and our people assessments. Wherever confidence levels are below a certain level — a level that's actually agreed with the platform — that's when we'll refer it to our team. In some cases, for some platforms, everything has to go through a manual review regardless of the automated review: someone will have to check whatever has been assessed by our veracity stack.
00:36:02
Speaker
And probably something critical, say elections, might be part of the criteria for those kinds of posts? Yeah. Anything that might be high-sensitivity or high-impact would probably go through manual review. Also anything where, in the assessment itself, we're uncertain — because of either an absence of information,
00:36:18
Speaker
or too much contradictory information or evidence, we would have a lower confidence level. Or anything with data that's not very timely — the claim might be one day old, but our most recent evidence or context for it might be one week old. Again, I'm peeling the onion a little too much there, but those are the kinds of signals that go into confidence.
00:36:35
Speaker
And I guess with time, the human moderation which is happening today would be training the machine learning algorithms to increase the confidence level and reduce the percentage of content which goes to human moderators. That's right — within a domain, yes. We've certainly seen that in the geopolitical domain, or when it's come to issues around COVID, and you can very clearly see that S-curve of
00:36:59
Speaker
improving performance and plateauing performance based on how much training we're getting from expert input. But there are so many different domains for us to go into, so we still need to scale those subject-matter-expert teams for the foreseeable future. But yes, eventually there will be an automation payback, and there'll be that S-curve of how many people we're going to need to support the overall level of output we're able to deliver.
00:37:22
Speaker
Give me some scope of the numbers. What is the number of posts that Logically is assessing daily, weekly, monthly — some idea of the metrics that you look at? Every day we pull through about 15 million pieces of content, I think. 15 million, yeah.
00:37:41
Speaker
But that's not enough. We need 100x of that, because there are billions of pieces of content posted every day — it's closer to 10 billion every day; Twitter alone, I believe, is just under a billion a day. There's a long way for us to go in terms of scaling. This is very much the tip of the iceberg. A lot of the underwater currents of misinformation and disinformation are where it's exposed to enterprises, brands, and even individuals. And that's a growing market for us to start working with some organizations on.
00:38:06
Speaker
What are the other metrics you track? What are the numbers you look at on a regular basis?

Business Model and Growth Plans

00:38:10
Speaker
One would be how many pieces of content you're reviewing. What else? A few. For us, it's the efficacy of our automation — that's a pretty big one for us to continuously see trending up as a result of investments in our roadmap. We have this interesting framework — our capability completeness framework. It's almost this big jigsaw puzzle of what we want to build out on our roadmap. And what is that?
00:38:32
Speaker
Where is it from a completeness level, and where is it from a performance level? Both of those are things we measure and track quite closely. There's obviously the financial side in terms of... How do you measure efficacy? Would that mean how good a job you're doing at flagging, or giving a score — like how accurate your score is? How do you measure it? How would you know?
00:38:52
Speaker
Is it based on whether a human operator is disagreeing with the machine score? That's right. Even within our expert operations, we don't just put things through one person — things have to go through three people before we come up with an assessment. So we have agreement scores in place: what's our inter-annotator agreement, or inter-assessor agreement, when it comes to people? And what is it between the overall people outcome and the overall machine learning outcome? These are all scores that are pretty closely tracked by our teams.
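The agreement metrics mentioned here can be sketched roughly: simple pairwise inter-assessor agreement across the three reviewers, and agreement between the consensus human verdict and the machine verdict. This is an illustrative simplification — production systems often use chance-corrected measures such as Cohen's kappa instead of raw agreement rates:

```python
# Rough sketch of agreement metrics over three-reviewer assessments.
from collections import Counter
from itertools import combinations

def pairwise_agreement(labels_per_item):
    """Fraction of reviewer pairs that agree, pooled across items."""
    total, agreed = 0, 0
    for labels in labels_per_item:          # e.g. 3 labels per item
        for a, b in combinations(labels, 2):
            total += 1
            agreed += (a == b)
    return agreed / total

def consensus(labels):
    """Majority label among the human reviewers."""
    return Counter(labels).most_common(1)[0][0]

def human_machine_agreement(labels_per_item, machine_labels):
    """How often the machine verdict matches the human consensus."""
    matches = sum(consensus(h) == m
                  for h, m in zip(labels_per_item, machine_labels))
    return matches / len(machine_labels)

human = [["false", "false", "misleading"], ["true", "true", "true"]]
machine = ["false", "true"]
print(pairwise_agreement(human))                # 4 of 6 pairs agree -> 0.666...
print(human_machine_agreement(human, machine))  # consensus matches both -> 1.0
```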
00:39:21
Speaker
And you said you also track revenue. I want to understand the way in which you monetize. Is it per post? I'm sure the model will be different for a platform versus a government agency. What is the commercial arrangement like? Sure. It really varies. We have this interesting construct — we call it the Situation Room.
00:39:40
Speaker
And we bring this back to our Maharashtra days, because Maharashtra was a war-room setup. It's effectively a product concept called situation rooms, which defines the information environment that someone needs to monitor — detect threats, triage them, and respond to them. And it's priced based on the size and complexity
00:39:59
Speaker
of the information environment. The size would be, obviously, the number of posts, the number of accounts, and the number of interactions. And the complexity would be things like: how many languages are there? Is it just text, or is there anything multimodal? Stuff like that goes into what the overall subscription value for a customer would be. And then it's effectively a recurring business model. As for most technology-driven businesses, it's the ARR number that we tend to keep a pretty close eye on. Those numbers are usually pretty top of mind.
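The size-times-complexity pricing construct described here could be sketched like this. Every coefficient below is invented for the example — the interview gives the pricing inputs (posts, accounts, interactions, languages, modalities) but not the actual formula:

```python
# Hypothetical illustration of pricing on the size and complexity of
# an "information environment". All coefficients are made up.

def situation_room_price(posts_per_day, accounts, interactions_per_day,
                         n_languages, multimodal):
    # Size term: scale with volume of the monitored environment.
    size = (posts_per_day * 0.001
            + accounts * 0.01
            + interactions_per_day * 0.0001)
    # Complexity multiplier: more languages and multimodal content cost more.
    complexity = 1.0 + 0.1 * (n_languages - 1) + (0.5 if multimodal else 0.0)
    return size * complexity  # annual subscription, arbitrary units

base = situation_room_price(100_000, 5_000, 1_000_000, 1, False)
multi = situation_room_price(100_000, 5_000, 1_000_000, 12, True)
print(base, multi)  # the multilingual, multimodal environment prices higher
```

The point is only the shape of the model: a volume-driven base priced up by a complexity multiplier, with surge pricing for event windows layered on top.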
00:40:28
Speaker
Say the Maharashtra Election Commission — would they subscribe for it throughout the year, or just for that couple-of-months period? It really varies. We definitely share the story of this being an always-on risk. It's a risk that is heightened during these critical events, but there's a lot that an organization can be doing to mitigate those risks by being ever-present, even in less intensive months.
00:40:52
Speaker
So for some of these types of organizations, we do have surge pricing to accommodate those high-impact windows, and a more accommodating pricing level when it comes to business as usual. A lot of organizations use us all the time — I think 85 to 90% of the work that we do is on an always-on basis, and 10 to 15% of our work is very much on an event-driven basis. But
00:41:16
Speaker
again, that's work that we do to demonstrate the value of what we can bring to certain organizations. And really, if it's an organization that's facing one kind of risk, it's pretty likely that they're going to have a new crisis pretty soon. In the government space, beyond the election use case, are there other use cases — other types of government organizations that you work with?
00:41:35
Speaker
Absolutely. For us, there are four core use cases within the public sector: public health, public safety, election integrity, and national security. Within public health, there'll be public health organizations, hospital networks. Some countries have seen a lot of COVID misinformation, and even in general there's a lot of anti-vaccine misinformation out there. In India in particular, there are so many online frauds that are driven from
00:42:02
Speaker
kidney transplants and the like — scams driven through misinformation and propaganda. There's a lot more, particularly in the space of alternative health. It's a touchy space given the Indian cultural context, but there are certainly some clear-cut areas where there are disinformation campaigns. The public safety space is very much to do with communal violence, as well as potentially nation-state activity — a lot of times nation-state actors step in to
00:42:27
Speaker
stoke some of these internal fires. Then there's elections, and national security would always be around foreign interference — protecting a country's interests domestically, but also protecting a country's interests overseas. That one's quite interesting, because when Logically started, 16 or 18 countries had information operations capabilities that allowed them to run some kind of information operations, either within their own borders or outside their countries. Today, that number is 90.
00:42:54
Speaker
So basically half the countries in the world can run information operations right now. It's a pretty polluted landscape. But yeah, that again is a pretty critical use case for us. And in India, this would be, say, the Ministry of Home Affairs, or the Ministry of Health and Family Welfare for health-related work — they would be your clients? Yeah — again, at both central and state level, it'll be those kinds of organizations. But beyond ministries,
00:43:20
Speaker
each of these has affiliated agencies. So our work is mainly with the various branches of the civil service, as opposed to with political executives, et cetera. It's really working with those organizations — health, home affairs; some will be focused on law enforcement, and some agencies on national security.
00:43:41
Speaker
The interesting upcoming landscape is also the regulatory dimension. A lot of countries are looking to regulate platforms and regulate how a lot of these trust and safety operations are run. In the UK, we have the online harms bill that's coming to parliament pretty soon. India has had, I think, one attempt already at regulating
00:43:57
Speaker
this — last year's amendment to the rules governing the IT Act — but there's also, I think, some other redrafting currently ongoing. So that, it feels, is an emerging catalyst for us as a space. Because what's become clear is that platforms, although they are trying, and trying hard — some harder than others —
00:44:16
Speaker
they're certainly trying to tackle this problem, but they've proven that they can't do it alone. And there are obvious risks to governments doing this by themselves, around freedom of expression and all the politics around it. But also — regardless of me wearing my Logically hat, from an independent perspective — I still wholly believe it needs to be something that's run by an independent organization, or a free market of independent organizations. That's the real place we want to get to.
00:44:42
Speaker
So you would typically work with a consulting agency that would in turn have the government as its client — or is the government your direct client? Sometimes, yes. Okay, sometimes. It's a mix of both. Sometimes it'll be through what we call channel partners. In some cases, it'll be direct. In India, we've done both. In the UK, we've usually gone direct. In the US, we've done both.
00:45:06
Speaker
How did you navigate sales to government? You typically need white-haired folks driving something like that. How did you navigate big sales to government? I have no idea — I'll let you know when I figure it out. In the startup world, it definitely gets a bad rep.
00:45:21
Speaker
Also, in the venture world, it's always seen as a bit of an ugly market in some ways, because there are horror stories of how long sales cycles can be. For me, the biggest reward in the government sector, thinking commercially, is, again, a huge amount of impact, because of the big leverage that governments and platforms have. But also stickiness — once you're in, you're pretty much in. And unless we
00:45:42
Speaker
screw up in some horrible way, it's an incredibly sticky customer. And that, for us, is the biggest value of over-investing during those early days to go and acquire these customers. That's what we've been biased towards. And we do have a couple of more grey-haired individuals on the team than myself — they obviously help.
00:46:00
Speaker
And what do you charge the platforms? Is it per piece of content, per post that you review — something like that? It varies. It's close enough to per post. There are some conditions around that — it can't be a duplicate post, can't be a highly similar post, et cetera. So it's priced slightly differently, but broadly it's on a per-post basis.

Future Innovations and Threat Detection

00:46:20
Speaker
What is your ARR right now — are you at liberty to share that? Not quite publicly. 10 million plus? Just south, just south. We raised the $25 million Series A a couple of months ago. And do you have the ability to fact-check text, video, audio — everything? What is the current capability in terms of modes?
00:46:38
Speaker
Yeah, we're in a really interesting place. I think by the time this podcast is out, we will have released the fully multimodal version of the platform. At the moment, in the current architecture, text is very much the bread and butter, with some image bits and some video bits bolted on.
00:46:55
Speaker
But the version of our platform that's being released in the first week of August will be multimodal by default — not just text-only, image-only, audio-only, or video-only, but when they're blended together as well. Memes will be covered, or an image with some text within a WhatsApp message — all of those form factors will be covered. Quite excited about that update in three months' time.
00:47:18
Speaker
How did you solve that? I'm just thinking of the kind of WhatsApp stuff we get, where somebody is, let's say, talking in a regional language and spreading some misinformation. Are you able to detect regional languages too? That sounds really challenging, to do it for video. In a limited way, some. I think we do a pretty good job in 12 Indian languages, but beyond those it's a big challenge. In addition to those Indian languages, we work in
00:47:45
Speaker
European languages as well, and we really want to expand our roadmap to cover all major languages. We have a list of 110 languages that we want to cover before the end of the year. We're always going to be in a place where our level of efficacy and performance in English will naturally be higher, just because the state of natural language understanding is
00:48:04
Speaker
a lot more advanced in English than it is in any other language. Mandarin's pretty close, but we can't work with the PRC — although we can monitor them if someone's interested, we certainly can't work with them. Are you using an existing voice recognition engine — say, Google's voice-to-text engine — or are you building your own? It depends on whether it's formal speech or informal speech. If it's formal speech, a lot of the things that are available off the shelf are just way better, so we use those. But when it comes to informal speech, and
00:48:34
Speaker
social media speech in particular, we haven't found — with respect to whatever exists in the market, be it Azure or AWS or Google — that same high level of efficacy. So we're fine-tuning what we're building internally for the social context, and using what's available commercially for formal speech. Fascinating. Okay, you told me you're doing some corporate pilots. What would that be, say — McDonald's would want to make sure that there's no fake news going around about it, from that perspective?
00:49:00
Speaker
Yeah, I think there are two or three main verticals for us within this. There's the security side, which is this area of conspiracy-driven threats. Literally, particularly in the States, there are organizations right now whose offices, warehouses, and executives are being targeted
00:49:17
Speaker
because they're believed to be part of some big global conspiracy. Like the Pizzagate thing — there was some pizza parlor. Yeah, or vaccine scientists and manufacturers of vaccines and stuff like that. But it's really broadening out. Last year, or the year before, it was Wayfair — a furniture company that became the epicenter of
00:49:40
Speaker
the QAnon conspiracy, because they had cupboards that had women's names, and they were quite expensive — like five-thousand-dollar, ten-thousand-dollar cupboards — and these conspiracists thought, hey, they're trafficking women with those names in these cupboards. That's, like, come on.
00:49:56
Speaker
This was a serious conspiracy, and this organization was being targeted. Some people got riled up to the point where they wanted to start going after their executives. They started finding out who these executives are, who their children are — really nasty stuff, unfortunately. So these kinds of threats are present today; that's a lot of the security dimension. There's also a financial dimension. There are some pretty interesting examples in India of a handful of banks in particular that have been targeted by various amplification and pump-and-dump schemes.
00:50:23
Speaker
There was a rumor that ICICI Bank was shutting down, and there were lines outside ICICI branches of people trying to withdraw their money. That's right. So again, those kinds of events sit in both financial disinformation and market manipulation. You can even think of crypto as a segment — we have something quite interesting we're working on at the moment for crypto, of all things,
00:50:42
Speaker
There are so many inauthentic accounts pushing various coins, and there's clear trading activity linked to those posts as well. That's an interesting problem for us. And then there's the purely reputational side, which would be more challenging for us, because I think there are plenty of organizations out there that do a good job at reputation management. We don't necessarily want to get into that.
00:51:02
Speaker
For us, if there is an active disinformation threat that's focused on an organization, that would be interesting. In some markets that demand exists and in others it doesn't, because historically, when people thought of disinformation campaigns, they thought of nation-states. What's happened over the last three or four years is that there are now these agents of disinformation available for hire in various countries around the world. It's very much equivalent to ransomware.
00:51:25
Speaker
Ransomware is this big cyber threat that's happening today. People have known about it for the last five or six years, but our positioning is similar to where ransomware was in maybe 2016. The threat vector exists, but it's not ever-present. One or two organizations are being targeted every few weeks or months, not hundreds every day. But
00:51:45
Speaker
it will be there three or four years from now, given what's happening in the adversarial space. And you're located in India also? What's your headcount split like? Yeah, so we're about 170 to 180 people at the moment. About half of those are based in the UK, just under half in India, and about half a dozen in the US.
00:52:09
Speaker
And what is the team in India doing? Are these the tech guys? So tech is also split across the UK and India. Most of our engineering teams sit out of Bangalore, most of our AI teams sit out of London, and most of our product teams also sit out of the UK. We also have some of our subject matter expert teams, for fact-checking and open-source intelligence, that sit out of India as well. But the majority of people in India are in engineering roles.
00:52:35
Speaker
Right. Okay. You raised this pretty massive 24 million dollar round. What do you want to use these funds for? It's pretty much half and half. We know there's a long way for us to go in terms of furthering our platform itself. I think the multimodality aspect I mentioned is one of the milestones we're gearing up towards. But equally, we have a pretty aggressive roadmap to better support some of our high-leverage customers in particular, as well as differentiate our product offering, potentially for
00:53:00
Speaker
enterprise. It's also investing in new threat vectors. Again, there's a lot of buzz around deepfakes, but it turns out that's probably not the biggest disinformation threat. There are a few other interesting things happening in the world of synthetic text in particular that are probably bigger threat vectors, which we're keeping on top of while red-teaming all of the newly imagined disinformation vectors. And the other half is really going into building our go-to-market teams across these three verticals. Okay. What are the new vectors of misinformation? What is synthetic text?
00:53:27
Speaker
I mean, people have heard of deepfakes in video form, and there's a lot of buzz around them. But in terms of how much you see them in the wild, it's mainly just porn; like 99% of deepfakes out there are porn. Again, it's a risk that exists, but it's not really mis- or disinformation, and it's very small; maybe every couple of weeks you might get one that's high-profile in nature. Synthetic text is really

Podcast Conclusion and Listener Engagement

00:53:49
Speaker
the text equivalent of that.
00:53:50
Speaker
So imagine you can create a disinformation campaign that's posting 1,000 very different posts from 1,000 different accounts. There's been a lot of progress in that direction by a lot of organizations working in the adversarial space, building on top of recent breakthroughs in natural language generation. So that's a pretty significant risk at the moment. I think we've seen one such campaign that was actually targeted at Wikipedia.
00:54:15
Speaker
I think there was one attempt made recently to edit something like 10,000 Wikipedia pages concurrently. Again, these edit wars are always going on, and, you know, bot-based edit wars are also common. But what was interesting about the most recent one is that all 10,000 edits were being made by a synthetic agent.
00:54:34
Speaker
The new dimension was that they weren't just spam-posting the same thing. What they were writing was human-like, in some cases easy to detect, but in some cases pretty challenging to detect. We see that as an interesting dimension. The other dimension we also see is really this "two truths" kind of social engineering framework,
00:54:53
Speaker
getting people down a rabbit hole of radicalization. So giving them two truths first, and then the third, a lie, has been a repeated tactic we've seen from various adversaries. I think, tactically, a lot of knowledge sharing might be happening within the adversarial space right now, and they're converging towards some best practices. Yeah, they're developing their playbooks, so we need to stay ahead of them. And that brings us to the end of this conversation.
00:55:15
Speaker
I want to ask you for a favor now. Did you like listening to the show? I'd love to hear your feedback about it. Do you have your own startup ideas? I'd love to hear them. Do you have questions for any of the guests that you heard on the show? I'd love to get your questions and pass them on to the guests. Write to me at ad at the podium dot in. That's ad at T H E P O D I U M dot in.