Introduction to AI Security Shift
00:00:04
Speaker
Welcome to the Silver Bullet Security Podcast, Episode 154. I'm your host, Gary McGraw, coming to you from the Berryville Institute of Machine Learning, where we're defining the future of machine learning security.
00:00:15
Speaker
From 2006 to 2018, Silver Bullet explored the nascent field of software security through the lens of building security in. But today, the frontier
Guest Introduction: Gadi Evron
00:00:24
Speaker
has moved. As we integrate machine learning into the fabric of our essential systems, we find ourselves facing a new set of architectural flaws and security challenges that traditional software security can't touch.
00:00:35
Speaker
On Silver Bullet, we're shifting our focus to the security of machine learning, bringing the same deep dive, no silver bullet philosophy to the world of AI. To help me kick off this new era, I'm joined by my new friend, Gadi Evron.
Gadi's Career and Contributions
00:00:49
Speaker
Gadi is a veteran of the botnet wars, a community builder, and the chair of the new Unprompted conference. Gadi, welcome to the show. Thank you. You also forgot that somebody here likes to suffer: I have a startup.
00:01:04
Speaker
Gadi Evron is founder and CEO at Knostic, that startup, an AI security company, and chairs the ACoD cybersecurity conference. Actually, I have to admit, I should have sent this to you earlier: I no longer chair that conference.
00:01:18
Speaker
Oh, and he no longer chairs the ACoD cybersecurity conference. It's kind of... I will cut you off right there and say, I am proud of that, because it's something that actually survived me, meaning I was successful.
00:01:30
Speaker
Oh, that's great. So hopefully that'll happen with Unprompted too, but I've got to get back to the rest of your stuff. Previously, he founded Cymmetria, was CISO of the Israeli National Digital Authority, founded the Israeli CERT, and headed PwC's Cyber Security Center of Excellence.
00:01:46
Speaker
He wrote the post-mortem analysis of the first Internet war, founded some of the first information sharing groups, wrote APT reports, and published the first paper on DNS DDoS amplification.
Vision for Unprompted Conference
00:01:57
Speaker
Gadi has written two books on cybersecurity, is a frequent contributor to industry publications, and is a speaker at industry events including Black Hat and CISO 360. So thanks again for joining us today for the reboot of Silver Bullet.
00:02:13
Speaker
Thank you. I appreciate you having me here. Gadi, you're chairing the Unprompted conference, and I'm really pleased to be working on the committee with you. We've both seen the security conference circuit evolve over the decades, but Unprompted feels like it's trying to capture lightning in a bottle for the ML security space.
Building the AI Security Community
00:02:32
Speaker
What was it about the current state of AI security that made you feel we needed a dedicated practitioner-first venue, something you know beyond just another AI track at a traditional security show?
00:02:45
Speaker
It actually didn't start that way, from that direction, at all. I feel there are hundreds of conferences, maybe thousands, on the topic. And there is so much noise and so much hype that creating something around the technology just wouldn't work without being taken over by the marketers.
00:03:04
Speaker
And what I usually wait for and what I'd like to help facilitate if I can is a community, especially of
Conference Growth and Highlights
00:03:10
Speaker
operators. People who work on this every day. And the question was, do we have that yet?
00:03:15
Speaker
And we created this sort of podcast, not really, where people come on Zoom. It's called Prompt or Get the Fuck Out. People come on a Zoom call and they just share for five minutes, usually no slides; we heavily discourage that.
00:03:27
Speaker
What actually works for them, or what doesn't work for them, so we can learn from each other. And at some point you had Chris Inglis on, right, talking about something he did. And you had Halvar Flake on talking about something he did. And you had deep reverse engineers and deep mathematicians and policymakers and CISOs in the chat.
00:03:44
Speaker
Everybody talked about the limited context window for Claude and how they're actually implementing something, and we knew: okay, we have the community. We can actually create a conference now where people can share what works for them and why it works for them. We can create the collaboration that we really need.
00:04:00
Speaker
And take AI back from the marketers. You had a seed of a community, because it got really big, really fast, didn't it? Yeah, well, it's a so-called ground-roots movement around the technology. It's where people went to learn. Grassroots. A grassroots movement.
00:04:17
Speaker
It's where people just went to learn. So yeah, we had the seed of the community already, so we could run with this. So tell us a little bit about Unprompted. How many people are going to be there? Who's going to be speaking?
00:04:30
Speaker
Like, what's the deal?
Historical Context of AI Security
00:04:33
Speaker
We started wanting this to be 120 people, or maybe 200 people tops. We like small conferences. And then there were 200 people on the waiting list. And then we moved venues, and then there were once again 200 people on the waiting list. And that's the situation yet again. And on the submissions, you were on the CFP committee.
00:04:55
Speaker
Good Lord. And we had almost 500 submissions or so. Holy cow. With 40% of them coming in over the last three days. I think it was you who said, oh, I just finished reviewing track one for the fourth time. Yeah, that was me.
00:05:12
Speaker
And I probably wasn't even done, because I think some more probably came in after that. But it did create really interesting conversations about what we want to see, why we want to see it, why talks would be good for the community. And that's what we're going to see there: people who are deeply into this, from the attack side, from the framework side, from the defense side, sharing their real work in 20 to 25 minutes
00:05:36
Speaker
and engaging with the community.
Evolving Threat Landscape in AI
00:05:38
Speaker
That's great. We're looking forward to that. So it feels to me like we're watching history repeat itself. Most people are taking an outside-in approach to LLMs, obsessing over things like prompt injection, which is essentially just a modern version of malicious input,
00:05:54
Speaker
of course with a wildly unstructured API. Because we're so focused on the I/O boundary, are we missing the inside reality of how these models actually fail? If we remain stuck on the outside, does the concept of something like an IOC, the indicator of compromise that you helped to spearhead, even mean anything anymore?
Cyber Defense Economics and AI
00:06:15
Speaker
I don't know. I think you answered your own question in there. And I know that's not really a good answer, so I'll try something different.
00:06:28
Speaker
We just don't know much yet. We're just starting to see the use cases that are even useful to us. For example, people are just starting to say, how do I get visibility into this?
00:06:41
Speaker
How do we even use this? And now we see developers using coding agents everywhere and finance using coding agents everywhere. And we see a lot of startups getting bought and a lot of organizations trying to make sense of all of this.
00:06:55
Speaker
And all I know for a fact is that security operations is going to need to catch up. And that's it. We can get into the details, but it's not about prompt injection.
00:07:08
Speaker
What I do see is what I call a micro-singularity. The attacker side is there; they have their success. You can now go down the chain from researcher to engineer, to analyst, to your family at home, finding vulnerabilities and exploiting them on the fly.
00:07:26
Speaker
And that is something we need to start adopting ourselves within our organizations, until we figure out how to do actual defense and be ahead of them. Gotcha. It's interesting times, for sure. You've always looked at security through the lens of economics. Today the supply chain isn't just a library of code that we can't keep track of; it's a trillion-token data set that we don't fully control. If an adversary, or even a particularly dumb human, can poison the foundation model at the source, either accidentally (the dumb human) or for a few thousand dollars (the attacker), and affect every downstream application, have the economics of this defender's dilemma finally shifted into some sort of permanent deficit?
00:08:11
Speaker
We are always at a deficit. Let's not kid ourselves.
00:08:17
Speaker
And your question is... I'm supposed to be an optimist. I am an optimist, but we also know we're always behind. Maybe AI will be that shift. Maybe we'll fly through the Gibson, right, if you remember Hackers, and be able to shift things around and defend ourselves. But when it comes down to it, the economics are shifting. Okay, let's look only at cyber defense, because again, the world is wide.
00:08:42
Speaker
Looking at attackers and their ability to
Adoption of AI Tools in Cybersecurity
00:08:45
Speaker
find vulnerabilities and exploit them on the fly: we no longer have the time window where we could wait and tell people, it's okay, you don't need to stay the whole day at work to patch.
00:08:54
Speaker
We no longer have the ability to make that decision, because the minute something is discovered, it can be exploited, and things are discovered all the time. And, for example, a lot of CVEs in open source, not all, but a lot, tell you what the vulnerable function name is.
00:09:07
Speaker
Right. And we rely on that in cyber defense. I don't think we'll have that anymore. So a lot of our assumptions are just being turned on their heads. So from an economics perspective, I think the only thing we can really do is be on top of this.
00:09:22
Speaker
I would ask: is your team, are you, already using Claude Code, Cursor, whatever it is now, to do everything you can to be at the edge, so you actually understand this?
00:09:33
Speaker
Right. So use the tools. Don't just be the victim of the tools. It's kind of like going to an accountant who doesn't use Excel. It's the same for me; the world is changing all the time. I don't know about Excel, but it's certainly like they're still using an abacus. Let's just put it that way.
00:09:53
Speaker
That's fair. And I would say, look, I now have a startup in this space, and I want to go to a CISO and say: you need to protect your agents, talk to me. But honestly, just be relevant in two months, in two years, by using these tools yourselves. Don't be afraid of them. They are powerful and they will empower you. That's the only way to be relevant.
00:10:12
Speaker
That is a perfect segue to my next question. We talk a lot about theory, but the last year has given us some pretty grim reality. We've seen the Anthropic enterprise hacking, and the Zenity Labs research into zero-click exfiltration that exploits ChatGPT and Copilot and Gemini.
00:10:32
Speaker
When you look at these sorts of famous hacks of 2025, and you can add some more if you want, and some of the new zero-day stuff in 2026, do you see a new kind of
Old Challenges in New AI Contexts
00:10:42
Speaker
meta-exploit emerging? Or is it just the same old insecure-by-design software failures, but now running at ML speed?
00:10:51
Speaker
I think we're starting to see some new things. But we don't have to look that far. We don't even have to look at zero-click, even though the marketing departments now release "zero-clicks" that are not, that require people to jump up and down three times and say please in order to run, and they still call it zero-click. It's true. Sorry, I'm just trying to be buzzword-compliant here. It's very hard to do.
00:11:12
Speaker
I know, I know. But I think at this stage, we are seeing all the old stuff again. For example: oh, I'll hide my prompt injection inside a GIF. In the antivirus world it was: oh, I'll hide my virus in a zip file, within a zip file, within a zip file, within an ARJ file, within a RAR file. Oh, I went deep enough; now it will not detect me.
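The layered-evasion pattern Gadi describes, hiding a payload deeper than a scanner is willing to look, can be sketched in a few lines. This is a hypothetical toy (the `nest_payload` and `naive_scan` names are invented for illustration), not any real antivirus engine:

```python
import io
import zipfile

MARKER = b"ignore previous instructions"  # stand-in for a malicious payload

def nest_payload(depth: int) -> bytes:
    """Wrap the marker inside `depth` layers of zip archives."""
    blob, name = MARKER, "payload.txt"
    for level in range(depth):
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            zf.writestr(name, blob)
        blob, name = buf.getvalue(), f"layer{level}.zip"
    return blob

def naive_scan(blob: bytes, max_depth: int = 3) -> bool:
    """A depth-limited scanner: unpacks archives only max_depth levels deep."""
    if not zipfile.is_zipfile(io.BytesIO(blob)):
        return MARKER in blob          # leaf file: inspect its contents
    if max_depth == 0:
        return False                   # nested too deep: the scanner gives up
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return any(naive_scan(zf.read(n), max_depth - 1) for n in zf.namelist())

print(naive_scan(nest_payload(3)))  # True: within the scan depth
print(naive_scan(nest_payload(6)))  # False: the attacker "went deep enough"
```

Real engines bound recursion like this for resource reasons, and that asymmetry is exactly what the attacker exploits; the same logic applies to prompt injection smuggled through encodings and file formats that an LLM pipeline will happily unwrap.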
00:11:33
Speaker
We're seeing all the old tricks back again, and it's still security, but we live in a security world that has some assumptions turned on their heads. Number one would be this: we are moving, as Sounil Yu, my co-founder, likes to say, from deterministic configuration to non-deterministic configuration.
00:11:51
Speaker
How do we deal with that? We're moving to a world where intent does not necessarily align with action. We're using tools that are extremely highly privileged and powerful, and that essentially ignore our defenses.
00:12:05
Speaker
There's a lot of stuff that's changing right now.
Enterprise Use Cases for AI Security
00:12:07
Speaker
What I personally care about is enterprise security and safety. Safety I'm going to leave aside, because how do we even define it? If you ask somebody in China, or in Reston, or in DC, which is very nearby, or in London, everybody will say something different.
00:12:24
Speaker
But on the enterprise side, I feel we have use cases which are clear right now. The enterprise wants visibility. The enterprise wants the agents to be safe.
00:12:36
Speaker
And that's where I'm focusing. With that said, there is one more aspect to this, which I'm not sure I have down. So give me a second here... you know what? I'll just bring it back to: are you using Claude Code or Cursor or whatever it is yet, on your own? I'm bringing it back there. That's my message for today, because you can. It's just speaking English.
00:12:57
Speaker
Take it back there. But my favorite attack, honestly, is just an extension on the extension store that somebody downloads into their IDE, and it exploits you,
00:13:10
Speaker
and nobody even knows it happened. Not because the IDEs weren't vulnerable before, but because introducing AI, introducing coding agents into this, meant that, philosophically, the entire perimeter has shifted again.
00:13:25
Speaker
It's not about AI. It's about... now the guys in sales can create stuff like a developer.
Philosophical Shifts in AI Security
00:13:32
Speaker
Yeah. It's called citizen coders. Essentially, forget sales; take finance. Now, for example, if I, in my company, told my CFO: I would like to understand, if this changes and that date changes, how does it affect us, and what would we move around?
00:13:47
Speaker
It could take an hour, but it could also take three weeks to come back to me with an answer. Finance now uses Claude Code. Forget the code part; it talks English, and they have an entire model for me in five minutes. Right, right.
00:13:59
Speaker
And what else is there? These people are not coders. They're not trained. And so, philosophically, what happens when everybody creates their own infrastructure? Forget dependencies and vulnerabilities and keeping it updated or tracking resources: IT is now fragmented. We no longer control IT.
00:14:16
Speaker
Yep, I think that's true. That is a problem, but we still have philosophy issues in security too. Like, let's talk about the measurement problem. In the old days, I used to talk about what I called a badness-o-meter: a tool that finds bugs and can tell you how bad things are, but means pretty much nothing when it reads zero.
00:14:36
Speaker
Isn't that from an email you sent in 2004 that I actually responded to? I think 2006, when I published something in Dr. Dobb's. Remember Dr. Dobb's? Anyway, right now,
FUD's Role in Cybersecurity
00:14:49
Speaker
the ML world is flooded with evals and red teaming reports that are really just fancy badness-o-meters. Right. Are we moving towards things like what BIML has been thinking about, architectural risk analysis, some sort of real security meter for AI?
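Gary's badness-o-meter point, that a zero reading carries almost no assurance, can be made quantitative with a toy probabilistic model. Assume (purely for illustration) that a tool detects each latent bug independently with probability p; then a clean report is still quite likely even when bugs exist:

```python
# Toy model: probability a badness-o-meter reads zero when k bugs are present,
# if each bug is detected independently with probability p.
def prob_zero_findings(k: int, p: float) -> float:
    return (1.0 - p) ** k

# Even a decent tool (p = 0.3 per bug) reads zero ~17% of the time with 5 bugs.
for k in (0, 1, 5, 10):
    print(f"{k:2d} bugs -> P(reads zero) = {prob_zero_findings(k, 0.3):.3f}")
```

A non-zero reading, by contrast, is unambiguous: the meter found real badness. That asymmetry is why evals and red-team reports measure failure well but measure security poorly.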
00:15:10
Speaker
I don't know if there will ever be a real security meter, but I do know one thing. The antichrist of measurement, I guess, is FUD: fear, uncertainty, and doubt, which is where we go in security.
00:15:28
Speaker
And throughout my career, I tried really hard to steer clear of FUD. And I feel right now that, regardless of how important measurements are and how much we need to move forward.
00:15:42
Speaker
We are slowed down right now from understanding how fast we need to move, because we don't use enough FUD. And I'll give you one example. Oh, that's really an interesting perspective. I think there's a lot of truth there. And honestly, in the early days, we had to use FUD to make the big companies pay attention.
00:15:58
Speaker
Like, you remember the Java security breaking-things days? We had to make Netscape and Microsoft pay attention. And we have trained ourselves to avoid FUD, to a degree; not the marketing people. And it's really hard. You give a talk, it scares people.
00:16:12
Speaker
Because if you talk to the board, you have to show measurements, you have to show a program, you have to show skills over time. With that said, I'm looking at what's happening just in the attack space. Forget everything else, the visibility, the supply chain; everything is running crazy because the value is finally there, right?
00:16:29
Speaker
Microsoft, and I'm going on a side quest here, tried to create Copilot for Office 365, spent tens of billions on it, mostly a failure for now. And with that said, OpenClaw comes out, completely insecure, 1.5 million users within a day or two.
00:16:45
Speaker
Right. It's absolutely crazy. And the thing is... okay, I'll just say this. Security usually slows down the business.
00:16:56
Speaker
In this particular case, it's the business that's outpacing security. People are not waiting; they're
Challenges with Rapid AI Adoption
00:17:00
Speaker
adopting. Right? Well, when I look at it, the badness-o-meter is useful because it's going to actually have some stuff on it. So here's the question. You are now a CISO and you understand that you have no time.
00:17:15
Speaker
You have no time to wait for next year's budget. You have no time for the M&A that will take all your attention. This is your top one, two, three, four, and five priority, because you don't have time for anything else, and so on and so forth. But...
00:17:27
Speaker
You also know for a fact that cyber defense, the way we do it, is going out of style. It's important, but it's not going to be effective at this rate. So the question is, will you have time to respond, given that enterprises are slow and bureaucracy will slow you down?
00:17:42
Speaker
That was my hope, a little bit, in AI security. And what we're seeing now is that everybody is adopting it so like crazy that we don't have that time. So you know what? Maybe we do need it. Maybe the badness-o-meter would actually help us.
00:17:54
Speaker
Measurements are the way to go. It's just that we don't really know what we're measuring yet.
Evolving AI Security Terminology
00:17:58
Speaker
Yeah, too early to tell is the answer. So I want to step back and ask for your help with some words. I've always been a nomenclature stickler. My favorite word, I think, is kerfuffle.
00:18:12
Speaker
I don't mean that word here. Look, my favorite color is blue. My favorite animal is a dog. Your dog, by the way, is amazing. Your dogs are amazing. Moonshine is his name. So right now the industry is throwing around terms like AI security, adversarial AI, and machine learning security
00:18:29
Speaker
like they're interchangeable. I like machine learning security because it focuses on engineering and building stuff. And don't even get me started on hallucination versus just plain old being wrong.
00:18:42
Speaker
Are we doing ourselves a disservice by, you know, anthropomorphizing the terms that we're using? No.
00:18:53
Speaker
So the thing is, I can easily agree with you. But the industry takes a while. Okay: if you went, in 2003 or 2005, whenever it started, to darkreading.com, it was a solid website with a lot of articles on security, and a lot of learning you could do there.
00:19:09
Speaker
And honestly, it's where I send people now, because the mailing lists no longer exist and we don't really have the mentorship model to just learn from. And the reason that's possible is that when somebody writes, they will use completely different terminology than somebody else who writes on the same topic.
00:19:22
Speaker
And eventually it converges, but just reading what they leave out of their models, reading how they talk about it differently, teaches us. So my question is not necessarily, is this good or is this bad?
00:19:34
Speaker
But isn't this just natural?
Impact of AI Model Homogenization
00:19:36
Speaker
Is it converging, is the question, I guess. Eventually it will converge. But if you say ML security, you might speak to people who have been there before, who say, ah, this guy gets it.
00:19:48
Speaker
But at the same time, you will miss out on most of the people who actually need the help, who need to learn about this, because AI is the buzzword. Yep, it is. So let's talk about this kind of bigger philosophy issue. At BIML, we've been looking at what we call beigeification, which occurs through a process of recursive pollution. The idea is that if you're getting the bell curve of everything, it looks pretty darn beige. So we're using real data from the real world, but we're suffering from this enormous bell curve effect.
00:20:28
Speaker
Academics study things like model collapse and semantic ablation at the end state, but they don't pay much attention to the damage that we can suffer along the way. And so, as we use AI-generated content to train the next generation of models, we're essentially inbreeding the data and losing the kinds of edges that make the models robust.
00:20:50
Speaker
Are we kind of building a digital monoculture? What do you think? Like, how much should we concern ourselves with this recursive pollution thing or should we just ignore it?
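The inbreeding dynamic Gary describes can be illustrated with a toy simulation. Each "generation" fits a Gaussian to its training data, then produces the next training set by sampling from the fit while over-representing typical outputs (a crude stand-in for a model favoring its mode). The spread of the distribution, its "edges," collapses within a few generations. The trim factor and generation count here are arbitrary choices for the sketch:

```python
import random
import statistics

random.seed(0)

def next_generation(data, trim=1.5):
    """Fit a Gaussian to `data`, then sample a same-sized training set from
    the fit, keeping only 'typical' samples within `trim` standard deviations
    (mimicking a model that over-produces its most likely outputs)."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    samples = []
    while len(samples) < len(data):
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= trim * sigma:
            samples.append(x)
    return samples

data = [random.gauss(0.0, 1.0) for _ in range(5000)]
spread = [statistics.pstdev(data)]
for _ in range(10):
    data = next_generation(data)
    spread.append(statistics.pstdev(data))

# The tails vanish generation by generation: everything drifts toward beige.
print(f"std dev: gen 0 = {spread[0]:.2f}, gen 10 = {spread[-1]:.2f}")
```

This is only a caricature of the model-collapse results in the literature, but it captures the mechanism: each pass trains on a slightly narrower slice of the previous model's output, so variance is lost monotonically even though every individual sample looks plausible.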
Fast-paced AI Advancements
00:21:01
Speaker
I think it will solve itself.
00:21:04
Speaker
Essentially, there is so much money involved and there is so much value involved, and there are so many competitors in this space just waiting for a way in, that those who do not deal with it will just be left behind.
00:21:17
Speaker
I think that's really interesting, but I also think that mass appeal is much easier if you're beige. Honestly, television does not interest me, but it sure does interest millions and millions of people.
00:21:29
Speaker
Okay, so let me ask you then. Do you think... I'm claiming: oh wow, yes, it is an interesting problem to think about, but the world is moving so fast that I think we'll just whoosh right past this problem.
00:21:42
Speaker
I hope so. I don't think we will whoosh right past it. But things are going to move fast; we're going to change architectures, we're going to change training sets. Even if we don't solve this problem, the world is moving really fast. The technology is moving really fast. What's possible today?
00:22:01
Speaker
How many people out there, even heavy power users of AI, get even close to scratching the surface of what's possible today?
Transformation of Roles and Industries
00:22:09
Speaker
So let's assume all the models from now until three years from now, when we solve this problem, suck and we can't use them.
00:22:15
Speaker
Does it really matter that much? That's a good question, and I don't think we know the answer, but that doesn't mean we should stand around not using these models. Absolutely. But I would say, I think what you're asking is, if we look at Cory Doctorow:
00:22:30
Speaker
if what we base our society on, and our technologies, and our advancement, is all enshittification, right?
00:22:41
Speaker
Then isn't it eating its own slop, and just going to reduce its own value over time? And there's also the fact that, you know, it is modeling all of human creation through text, say, or video, or text and video.
00:22:59
Speaker
But the average is not what we need to move society forward. That's the part that's difficult. I would agree with that, and I appreciate you thinking about these problems; I do think about them myself. But I look at this right now and say, what's needed to move society forward right now... we are already at the eye of the storm.
00:23:25
Speaker
I'm going to shorten my whole spiel, get off my soapbox, and say: we are at the eye of the storm. What I care about right now, changing the topic, is people seeing it and being a part of it.
00:23:36
Speaker
Because we need to figure out how to make jobs redundant without making people redundant. And all it takes to be on the edge is just using it and trying it, with English. That is what I truly care about, because this is so powerful. It's no longer 2025, when people would say: you will not be replaced by AI, you will be replaced by a person using AI. You will be replaced by AI. I, as a CEO right now, and my friends who are CISOs, will be replaced to a degree by AI. We just need to be on top of it. That is the biggest concern for everybody. The technology will solve itself.
Conclusion and Resources
00:24:05
Speaker
I hope so. A really interesting conversation. Thank you so much for joining us today and for sharing your insights on this emerging field that we barely have a handle on.
00:24:16
Speaker
This has been the Silver Bullet Security Podcast. You can find a permanent archive of all of our episodes dating back to 2006. To learn more about our work at BIML identifying and publicizing the technical risks inherent in machine learning, visit berryvilleiml.com. There you can find the BIML 78, our work on LLMs, and our research papers. Silver Bullet is a monthly series.