Introduction of Ashwini Rao and EIDL
00:00:07
Speaker
This is Alex Halben, and we're here today with Ashwini Rao, who is the co-founder and CEO of EIDL. We're eager to learn about that and Ashwini's perspective on issues relating to technology and AI.
00:00:26
Speaker
We met Ashwini when we were at a panel a few months ago at the Los Angeles Tech Week. It was a really interesting group on the future of AI, especially focused on legal issues, and we thought it would be fun to have her come, and we can pose even more questions to her because we were on the panel back then, so the roles have reversed a bit.
00:00:55
Speaker
I'm Alex Halben. I'm one of the co-directors of the AI Forum.org. And I'm here with Patrick Yip, who's also co-founder of the AI Forum. Ashwini, we're really happy to have you here. Thanks for being on the podcast. Thank you for having me, Alex.
EIDL's Mission and Ashwini's Background
00:01:16
Speaker
As Alex said, I'm the co-founder and CEO at EIDL. And EIDL is a scam protection platform.
00:01:23
Speaker
protecting enterprises and consumers from scams that spread through social media,
00:01:31
Speaker
app stores, apps, and so on. My background is cybersecurity. I was formerly the CISO of a FinTech startup, and before that I was an engineer at multiple big and small companies. I've developed multiple systems, including AI systems.
00:01:54
Speaker
So I'm very aware of how the technology works. I also work with policymakers, so I understand how technology needs policy to function appropriately. Yeah, that's my background. Terrific. Well, just to kick things off,
00:02:14
Speaker
We are very concerned about the direction of cybersecurity as a global problem.
Human Factors in Cybersecurity
00:02:20
Speaker
What do you see from your point of view as the most pressing challenges that we face in this realm today? So usually when we talk of cybersecurity and cybersecurity protections, the first thing that comes to mind is machines. How do we prevent a machine from getting hacked?
00:02:44
Speaker
But what truly happens is that we are not just dealing with machines; we are dealing with humans who deal with machines. So in cybersecurity, there needs to be a lot more focus on how these humans are getting hacked, by which I mean social engineering. The biggest hacks today happen not just because machines are weak, but because the person handling the machine is getting fooled.
00:03:12
Speaker
So I work on scams, which is all about social engineering: how humans are tricked and victimized. In 2022, US consumers lost $8.8 billion to scams. That was a 30% increase from 2021. That's a lot of money.
00:03:34
Speaker
So what I see is that the biggest need is protecting humans: having more systems that address how people are getting victimized and prevent them from being victimized in the first place. That's where I see the biggest need when we talk about cybersecurity. Maybe just give us a few examples of the types of attack that you're seeing now that use social engineering. Most people think of
00:04:03
Speaker
phishing attacks as being web-based, you get an email and the email tricks you as to the identity of the sender. How is this evolving these days? What are you seeing in your business?
Evolution of Phishing Attacks
00:04:18
Speaker
Email phishing is still happening. I believe like a billion phishing emails are sent every day.
00:04:25
Speaker
So that is still happening. So is website phishing, which is, you know, someone sets up a website that looks very much like the Bank of America website, but it's not, right? But what we have really seen evolve is
00:04:43
Speaker
phishing through social media. In the last seven or eight years, social media has exploded, right? There are 4 billion users on social media today, not just consumers but also employees. I mean, we are all on LinkedIn, pretty much. And what we are seeing more and more is phishing that happens through social media. Now,
00:05:06
Speaker
Think of it: maybe you follow your bank on Instagram, right? A lot of us want customer support through Twitter. We go complain: hey, you blocked my account, right? Or you're not letting me pay this person.
00:05:23
Speaker
And on social media we see the same thing as the example I gave of a website that looks like Bank of America: there are profiles that look just like your bank's profile.
00:05:38
Speaker
There's also a lot of direct messaging that happens: OK, I'm an agent from the bank, and I'm contacting you because we are going to block your account because something illicit happened. It's exactly the same as what you see on email, right? Hey, your account is going to be blocked unless you change your password. You see the same message through social media. And not just social media; there are also a lot of ads being shown on social media.
00:06:05
Speaker
There are mobile apps, because everyone's using an app for banking, an app for everything else. And there are fraudulent apps that look just like the legitimate app. So we are seeing this evolution: not just websites and email, but also ads, apps, social media, and messengers.
00:06:27
Speaker
What you seem to be saying is that people have a lower guard for some reason when it comes to social media than they might have for the traditional old-school email-based phishing attack. And how does AI change the equation? And are you seeing specific AI-enabled attacks now in social media and otherwise?
AI and Phishing: A Cat-and-Mouse Game
00:06:53
Speaker
We have certainly started seeing them, and there's also some anecdotal evidence as to how AI is being used for novel phishing attacks. One of the things we see: let's say there's a scammer who wants to create a social media profile that looks very much like another legitimate company's profile.
00:07:16
Speaker
What do they have to do? They probably have to create a profile, copy the logo, copy some of the posts. Think of Instagram: the brand has, say, 10 posts, and you copy all of that. But now AI is being used to actually detect that kind of attack: you can compare two profiles and say, this one looks very similar to that one, so I'm going to flag it.
00:07:41
Speaker
So as that starts happening, scammers get sneakier and sneakier. They start modifying the logo a little bit, or modifying the posts a little bit, so that the AI detection algorithm no longer works, or isn't as accurate. But then the AI learns about it, and again these profiles get taken down.
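To make the comparison concrete, here is a minimal sketch of the kind of profile-similarity check described above. It is illustrative only, not EIDL's actual system; the profile fields, weights, and threshold are all assumptions.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def impersonation_score(real: dict, candidate: dict) -> float:
    """Weighted similarity between a brand's real profile and a candidate.

    Compares handle, display name, and bio; a production system would also
    compare logos (e.g., perceptual image hashes) and recent posts.
    """
    weights = {"handle": 0.4, "name": 0.3, "bio": 0.3}
    return sum(w * similarity(real[f], candidate[f]) for f, w in weights.items())

brand = {"handle": "bankofamerica", "name": "Bank of America", "bio": "Official support."}
lookalike = {"handle": "bankofamerlca", "name": "Bank 0f America", "bio": "Official support. DM us!"}

# Flag near-duplicates above a tuned threshold for human review or takedown.
score = impersonation_score(brand, lookalike)
if score > 0.8:
    print(f"possible impersonation ({score:.2f}):", lookalike["handle"])
```

The cat-and-mouse dynamic Ashwini describes shows up directly here: small perturbations like the "l" for "i" swap are exactly what scammers use to slip under a fixed threshold, which is why detection models have to keep retraining.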
Social Engineering and Trust Exploitation
00:08:28
Speaker
Yeah, I think I want to switch over to consumers. While you were speaking, I pulled up the stat on the percentage of people that have weak passwords, like their password is "password". I think we were actually talking about this in Alex's class: the new trend is, instead of "password", you put the name of your dog, you know, Fluffy or Ruby or whatever it may be. And the dog's birthday. Yeah. And the dog's birthday.
00:08:55
Speaker
What can consumers really do to protect themselves? You say we're moving from this world of email scams. One thing that I've seen recently just in my own life is people impersonating the CEO of the company. I get a text and it says, it's my boss, and he says, hey, I'm busy right now. Can you send me an Apple gift card because I need to buy a MacBook right now?
00:09:20
Speaker
And I'm sure they're doing that en masse and people are falling for it. What can people do to protect themselves here? So it is true. I think the US FTC statistic is that if you encounter a scam through text or email, there's maybe a 5% chance that you will lose money; on social media, there's a 70% chance.
00:09:48
Speaker
So that's a huge, huge difference. Why do you think it's so much higher? I know, for example, from what I see, it's so easy, right? There are Facebook impersonations; I've seen them impersonate my high school teacher. What value do malicious actors get from impersonating, I would say, an average everyday American or citizen? What is that worth to them?
00:10:17
Speaker
So there are two things here: one, why people actually fall for it, and two, what the impact is. The impact is financial loss, right? $8.8 billion. I think the average loss on social media is around $600 to $800 for every scam. So when someone gets scammed, that's how much they lose. So it's a good business for them. Yes, for sure. It's a good retirement.
00:10:45
Speaker
And it's not just consumers who lose money, right? The statistic is that if the consumer loses $1, the business loses $4, because the consumer is a customer of some business, usually a bank or some other company, and it's not just refunding the money: there's investigation, there are fines, there are legal liabilities. So both the consumer and the business lose.
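Taking the figures quoted above at face value, the back-of-the-envelope arithmetic looks roughly like this; the $700 midpoint and the combination of the cited numbers are assumptions, not audited statistics.

```python
# Back-of-the-envelope math from the figures quoted above.
consumer_losses = 8.8e9      # 2022 US consumer losses to scams
avg_loss_per_scam = 700      # assumed midpoint of the $600-$800 average cited
business_multiplier = 4      # ~$4 of business cost per $1 of consumer loss

implied_incidents = consumer_losses / avg_loss_per_scam
implied_business_cost = consumer_losses * business_multiplier

print(f"implied scam incidents: ~{implied_incidents/1e6:.1f} million")  # ~12.6 million
print(f"implied business cost:  ~${implied_business_cost/1e9:.1f}B")    # ~$35.2B
```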
00:11:10
Speaker
So yeah, scammers make tons of money; the ROI is just too high. As for why people are more likely to fall for scams on social media versus, let's say, websites and email, I think there are nuanced answers there. One is that people are much more aware of email scams and website scams than of these new kinds of scams happening through apps and ads and social media.
00:11:39
Speaker
I would also like to point out the word "social". There's a reason we call it social: we are dealing with our friends, maybe with family, or, even if it's an employer, there's an elevated level of trust. So when you think it's your CEO contacting you, I mean, it's the CEO; you want to do something.
00:12:04
Speaker
There is that social pressure. If it's your family contacting you, there is, again, some sort of pressure. And all scam tactics are about pressure. Given our elevated level of trust, the pressure we feel, and the fact that we are not trained and aware, those are the reasons people fall for it more often on social media.
Future of AI Scams
00:12:32
Speaker
A follow-up question. I think there was a viral clip not too long ago on 60 Minutes where they had the producer and the anchor, and they were able to basically mimic the anchor's voice using AI, call the producer, and say: hey, I'm at the airport, I forgot my passport, can you look it up and give me my passport ID? And the producer fell for it, because it sounded like the anchor and it came from a phone they trusted.
00:13:00
Speaker
Do you think these types of attacks are going to increase with the ability to mimic voices and to better mask your identity through these channels? And then the follow-up to that is: what can we do to know that it's AI versus someone real?
00:13:17
Speaker
100%. You know, just like we love technology, scammers also love technology; they're very progressive when it comes to it. So if there is a technology that will help them, they'll definitely use it, whether it's sound modification or image modification, whatever it is.
00:13:39
Speaker
It might take some time to get into the mainstream. I don't think tomorrow is when we'll see an explosion of AI scams, but it's going to happen in five, six, seven years' time. It'll increase for sure. Maybe it's safe to say that scammers are early adopters when it comes to any kind of technology, and
00:14:07
Speaker
they will respond to things that work for them.
Ashwini's Journey into Cybersecurity
00:14:12
Speaker
Absolutely. Well, just to shift gears for a sec: you're not only someone who's working in this area, but you have a very interesting background in both academia and entrepreneurship. Maybe just describe what inspired you to start your own company in this area.
00:14:36
Speaker
People have asked me this question: why cybersecurity? Why are you passionate about cybersecurity? What is it about it? So I thought about it. Even as a child, when my parents would give me a toy, I would just take it apart
00:14:58
Speaker
into pieces, right? Like, how does it work? Once I took it apart, it would all be in pieces; I would never put it back together. Not interested in building stuff, but in breaking stuff? Yes. Amazing. So I guess that's the mindset, right? How does something work? How do I take it apart? How do I break things? That was something that was always there.
00:15:21
Speaker
And that kind of drove me into the security area, where we want to break stuff. If everyone's thinking about the norm, we want to think of the edge cases. When everyone's thinking, how do I make this work, we are thinking, how do I break this? I think that's something that drives me.
00:15:37
Speaker
But as I went into computer science, studied it, and started working in security, I would find things. One piece of work I did was on passwords: how to break passwords, specifically using grammar. Because we have moved from passwords to passphrases, we now use a lot of words. So I developed a technique that uses NLP, natural language processing, to break these passwords.
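As a toy illustration of the general idea, not the actual technique from her research: a grammar-aware guesser enumerates passphrase candidates template by template instead of brute-forcing every character combination. The word lists and templates below are invented for the sketch.

```python
from itertools import product

# Tiny part-of-speech word lists; a real attack would use large tagged corpora.
WORDS = {
    "Det":  ["the", "my"],
    "Adj":  ["red", "fluffy", "happy"],
    "Noun": ["dog", "hammer", "dragon"],
    "Verb": ["eats", "hates", "loves"],
}

# Grammar templates: part-of-speech sequences humans plausibly use in passphrases.
TEMPLATES = [
    ("Det", "Adj", "Noun"),
    ("Noun", "Verb", "Noun"),
]

def candidates():
    """Yield passphrase guesses, one grammar template at a time."""
    for template in TEMPLATES:
        for combo in product(*(WORDS[pos] for pos in template)):
            yield "".join(combo)

# The grammar collapses the search space: instead of every possible string,
# we try only the 45 phrases these templates can generate.
for guess in candidates():
    print(guess)  # e.g. "thereddog"; each guess would be checked against the hashes
```

The point of the sketch is the intuition behind the research: grammatical structure makes a long passphrase far weaker than its character count suggests, because the attacker searches over phrases, not characters.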
00:16:06
Speaker
And I would present my work to people, and they would all go: oh, this is all nice, but tell me, how can I create more secure passwords? How can I protect myself? So I gradually started understanding that it's important to break stuff, but it's also important to build more secure stuff. And at some point in my professional career, I wanted to do that.
00:16:35
Speaker
So after my PhD, I joined a startup in the FinTech sector. In financial services there are a lot of scams, a lot of security issues, so I helped them architect and build a more secure system. And once I left that startup, I thought: okay, what do I do next? I love building systems, I want to be in cybersecurity, and I want to do new stuff, because that's my passion too.
00:17:04
Speaker
So I was like, okay, let me do a cybersecurity startup. So here I am. Great.
00:17:14
Speaker
Yeah, we can switch gears to maybe some current events of this week.
AI Executive Order and Security
00:17:18
Speaker
So earlier this week, President Biden issued an executive order on AI, one of the first in his administration. It's pretty far-reaching, covering the workforce, covering innovation, covering policy on how AI should be regulated within tech companies. I'm kind of curious what your take on it is. It was just released this week, but what's your reception to it?
00:17:45
Speaker
Yeah, so you're referring to the Safe, Secure, and Trustworthy AI initiative, right? That's the one. Correct. Yes. So yes, I've gone through it. I've also gone through the linked proposal, the Blueprint for an AI Bill of Rights. So I want to comment on both of them.
00:18:12
Speaker
So first of all, I really like it. I read it and I think it's a very good start. My comment is this: when we look at this safe, secure, and trustworthy initiative, we start thinking about how we can make AI systems 100% safe, secure, and trustworthy. I want to stop everyone right there at that point.
00:18:42
Speaker
The foundational thinking in security is that no system is ever 100% safe, secure, or trustworthy. It's impossible; it can never happen. And that's the security mindset, and we want to operate with that security mindset.
00:19:03
Speaker
Think of passwords, going back to passwords. If you say, let's always have the most secure passwords, that's impossible, because humans are involved in the system; somewhere we are fallible. So my
00:19:21
Speaker
opinion is that it's good to always think about how we can make things more secure, but also to accept that nothing can be 100% secure. So we need to have things in place for when something fails; we need to start planning for that. That's how we can build the most resilient systems: by planning for failures, knowing that things will fail, and minimizing the damage when failures happen.
00:19:49
Speaker
So that's one thing that I would always think about. Do you think that the executive order is going to help move the ball forward for better security practices or is it just more of a political document?
Comparison to GDPR
00:20:10
Speaker
At face value, yes. But a lot depends on how it's going to be made more concrete, how it's actually going to play out. Is it going to be legally binding
00:20:23
Speaker
or not? Are there going to be fines if someone doesn't comply? Those things matter. I mean, we have seen plenty of other frameworks; take GDPR, or any of the other privacy protections. There are a lot of privacy frameworks in the US too, but the big difference between the US privacy protections and GDPR is that under GDPR,
00:20:50
Speaker
companies can be fined up to 4% of their worldwide revenue. That is a game changer. That is a big reason why a lot of people are taking GDPR seriously. So yeah, it's a good start, but then how does it play out? What are the implications of someone not adhering to these principles, or whatever is stipulated
00:21:19
Speaker
by this initiative. So that's important. But I did read it, and there's a lot of good stuff in there. What about the Bill of Rights section of the executive order?
Critique of the AI Bill of Rights
00:21:39
Speaker
Yeah, so when I read through it, a lot of it is very good. But I want to go back to the point I made: we have to start thinking that no system is ever 100% safe, secure, or trustworthy. So what does that mean? What's the one word I didn't see in the Bill of Rights? Rectification. I didn't see that word at all. In GDPR, by contrast, one of the articles is the right to rectification.
00:22:07
Speaker
What does this mean? Let's say you have an AI system that was trained on certain data and then deployed somewhere. Now it makes a decision, and maybe because of that decision, someone loses their job. Sure, in the Bill of Rights there is notice and transparency, so they might know how the decision was made. That's there.
00:22:32
Speaker
But then what? Okay, so you know there was this incorrect data, a decision was made on it, and I lost my job. Now, what's my recourse? I have access, but what next? Can I get the data rectified? How do I get the data rectified? What's the redress? What do I get for this?
00:22:55
Speaker
So that word, being able to change things, especially to correct inaccuracies, is something I would like to see more of. And that again goes back to the mindset that nothing can ever be 100% right or accurate, so we have to operate with that. Okay, it's great: you have access, you have notice, you have consent.
00:23:19
Speaker
But then what? Because something will fail somewhere, and at that point we need rectification and redress. That's what I'd like to see more of whenever we have this Bill of Rights. Yeah. Interesting. We have time for one or two more questions. Alex, you want to?
Misconceptions about AI
00:23:41
Speaker
Actually, we ask everybody who comes on this podcast a basic question, which is: what do you think the greatest public misconception is about AI? Well, that's a very interesting question. There are so many answers that come to mind.
00:24:00
Speaker
But let me start with an example. Let's say you go to a casino in Vegas and you see cards being shuffled for some poker game. On the left side, you see a machine shuffling cards. On the right side, you see a human shuffling cards. Immediately, what comes to mind about
00:24:27
Speaker
fairness, about who's doing a better job? I think the human mindset is to think it's the machine. See, machines don't have biases; they are not as emotional as humans; they are going to do a better, fairer job. And I think that leads to a lot of misconceptions about machines, about the algorithms that operate those machines, and about AI algorithms too.
00:24:52
Speaker
So we think of machines as having no emotions and therefore no biases, and I think that's one of the misconceptions. When it comes to AI, we might assume that, oh, it's going to be more fair. But that's not the case:
00:25:12
Speaker
whether it's AI or any other algorithm, it's written by humans, and whatever biases we have in our minds, we encode into the algorithms. When we train the algorithms on data, that data comes from humans, which means it has inaccuracies and biases. So the algorithm gets trained on that, and it has biases. So if I look at two people and make different decisions for whatever reasons,
00:25:39
Speaker
the algorithm that I code and train will also make the same mistakes. But then no one's looking at Ashwini making those decisions; they think, oh, there's this computer making the decision, so it must be fair and unbiased. So yeah, I think that's the mindset: we think of humans as fallible, as emotional, but machines are not. And that's just not true.
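A contrived sketch of the mechanism described here: a model that learns from biased historical decisions simply reproduces them. The data is synthetic and the "model" is deliberately trivial, but the effect is the same one that shows up in real trained systems.

```python
# Synthetic "historical" hiring decisions: identical qualification scores,
# but group B was approved far less often by the human decision makers.
history = [
    # (qualification_score, group, hired)
    *[(0.9, "A", 1)] * 90, *[(0.9, "A", 0)] * 10,
    *[(0.9, "B", 1)] * 40, *[(0.9, "B", 0)] * 60,
]

def learned_rate(group: str) -> float:
    """A 'model' that just learns the historical approval rate per group."""
    outcomes = [hired for _, g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# Equally qualified candidates, yet the learned decisions differ by group:
print("P(hire | A) =", learned_rate("A"))  # 0.9
print("P(hire | B) =", learned_rate("B"))  # 0.4
```

Nothing in the code mentions bias; it appears in the output anyway, because it was in the training data, which is exactly the point being made.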
00:26:07
Speaker
That's a great point and a really provocative example. I guess we have a final question. Patrick, I'll leave you with a final question to ask.
AI as a Tool for Human Enhancement
00:26:18
Speaker
Maybe the flip side of that is, instead of misconceptions, what do you think the greatest benefit is of AI? How do you think it's going to help the average consumer or society at large?
00:26:31
Speaker
So, you know, from a technology perspective, I would say it's the creation of superhumans, right? AI can reduce manual labor by 70%.
00:26:45
Speaker
And I'm not saying that we are going to take away 70% of people's jobs; that's not what I'm saying, because, going back to the bias point, we still need a lot of humans. What I'm saying is that a given human being can do way more with AI-assisted technologies. Take all the mundane things that we do.
00:27:09
Speaker
It's boring anyway. We can get rid of it and just focus on doing things that are more important. And this happens, right? For us in the scam world, we have to train a lot of algorithms, collect a lot of data, take a lot of screenshots. All that work can be reduced just by having AI algorithms involved.
00:27:38
Speaker
And we can just focus on making things better, you know, catching the scammers. The way I see AI and AI technologies, they just help us do our jobs way better.
00:27:49
Speaker
Well, we can certainly train AI to ask better podcast questions, but I can't think of better answers than what you've given us. Ashwini Rao, co-founder and CEO of EIDL, we really appreciate you being here with us today. Thank you, Alex. Thank you, Patrick.