Is AI Security Just Rebranded Cloud Security?
00:00:00
Speaker
The security industry as a whole is being extremely dishonest right now. I think AI security is just recycled cloud security with a new label, a new fancy label with the sparkles emoji all over it.
00:00:15
Speaker
Sparkles. Yeah. Sparkles vibes, whatever.
Introducing the Hosts and Guest
00:00:27
Speaker
Welcome back to Bare Knuckles and Brass Tacks. This is the tech podcast about humans. I'm George K. I'm George A. And today our guest is Amber Banui, whom we've both known for some time now. She is a product leader and has been at several security companies. And we wanted to get her take on the state of quote-unquote AI security, a label that, spoiler alert, we all agree is vacuous and jazz-handsy.
AI Security: Hype Cycle or Fundamental Shift?
00:01:01
Speaker
But it was really good to have her on and get down into a little bit of the weeds, and also get a wider philosophical take on where this generative AI moment sits relative to where we've seen machine learning applied in the past.
00:01:15
Speaker
Yeah, I mean, I always love talking to Amber because she's doing the thing at the very bleeding edge and she's extremely logical and smart about it. So I get so much value out of her opinions.
00:01:27
Speaker
And I guess it's kind of validating as a CISO that we're exactly on the same page; the logic is completely lined up. This is just another iteration of the hype cycle. We have to go back to the fundamentals of good security, good SDLC, properly developing something, actually looking at how we do our business planning and business objectives, and alignment between business leadership and engineering leadership.
00:01:55
Speaker
Yeah, I think this is an episode that's going to have a lot of common sense. Folks who are practitioners and folks who are business analysts really should get a lot of value out of it, because this is the perspective I think they need to hear from people who are on the cutting edge of trying to figure out how we get safe implementation balanced with good return on investment.
00:02:21
Speaker
And that's really what this episode comes down to.
Amber Banui on the Evolution of AI Security
00:02:27
Speaker
Amber Banui, welcome to the show, my friend. It's good to see you again. Sorry I couldn't come to RSA. Good to see you too, guys. RSA was definitely a wild one, but yeah, happy to be here. I have a lot of fun things to share from what I heard, on X and off.
00:02:47
Speaker
But this season, we've expanded the conversation beyond just cybersecurity. We will start there, though. You have spent your career watching security teams react to different waves of technology. And, you know, for the benefit of our listeners, machine learning has definitely been a part of that stack for a very long time before the generative AI craze.
AI's Role in Changing Trust Boundaries
00:03:11
Speaker
But at what point did you realize that this particular wave was not just a flash in the pan, but maybe something categorically different? Or maybe it's not?
00:03:25
Speaker
Hmm. That's a really good question. Yeah, it definitely feels like this wave is a little bit more enduring. I think it started to feel different, and I remember this, back when I was at Threat Stack, when somebody shared it in one of our channels like, oh, hey, GitHub is doing this Copilot thing.
00:03:46
Speaker
What the heck is this? Don't really trust it. And I just kind of waved it off. And we didn't hear about this too much for, you know, a couple of years. Yeah. I always feel like machine learning has been a big assist in products. We see that it helps to make detections better and predictions faster. And it's not just a security application, but it's a...
00:04:14
Speaker
you know, an overall research play, or an overall supporting factor in a lot of really good products. So that's been around for a while and I don't see it going away. I see us talking a little bit less about it because GenAI is now the star of the show.
00:04:33
Speaker
Yeah. You see it coming in; it's making more code, it's making more problems, it's making more artifacts. And I think there's a whole new trust boundary there that people are conflating with the overall AI/ML as a whole. So, yeah, what I'm seeing now is that instead of AI/ML being a supporting character, a big part of a product, it's coming in as a main character.
AI Governance Challenges
00:05:07
Speaker
Yeah, no, that's a good analogy. Yeah, it's not just a niche tool anymore. It's a main contributor and also a main risk factor. And I feel like the lines are still very blurry around not just how to govern that, but how to evolve with it as it's changing, as it's having more of a main role in a lot of the problems that security teams are seeing.
00:05:39
Speaker
No. Okay. So I fully agree with you. And I hate this, right? Like I really try to be authentic as a person and as a professional. So anyone that ever says, oh, George is an expert: I'm not an expert in AI. I'm not a data scientist. I'm someone that has to manage an extremely complex and large customer and security environment.
00:06:03
Speaker
And so in my rantings and ravings, I've simply gone back to the basics and fundamentals of what good security is, and I still believe that's the solution to this.
Co-Founding Seca: A New AI Security Model
00:06:15
Speaker
You've gone and done something really good, and you've co-founded an association called Seca, which, on a business level, I actually do need to talk to you about, because it's pretty cool and I want to get involved. But for the sake of the show, it's building a practitioner-led AI security maturity model.
00:06:36
Speaker
And, you know, like...
00:06:39
Speaker
There's a premise I need to push back on, and that's AI security. Like, is it actually, bunny ears, a coherent category, or is it just another traditional security problem that's wearing a new hat?
00:06:54
Speaker
Because, I think, first of all, explain what this association is, and then explain your defined concept of AI security for the audience.
00:07:07
Speaker
Yeah. So AI Seca, I guess I'll start with this first. AI Seca is the AI Security Alliance. I started it with one of my good friends, Charlie Maynard, who's a VP of Vulnerability Management and Cybersecurity at Morgan Stanley.
00:07:26
Speaker
And a lot of it actually spawned from us just having conversations around: what the hell is all of this? How do I take what NIST is putting out there in the world and bring it back into my company?
Vendor-Agnostic AI Security Maturity
00:07:42
Speaker
What products can I use to go make sure that a lot of these controls are enforceable and maintain their coverage? And eventually what we realized, not just from talking to him, but talking to other folks on security teams at companies
00:07:57
Speaker
like Sonos and General Catalyst and a number of other really awesome financial services companies, is that there's nothing out there that is vendor-agnostic and doesn't also require you to spend a lot of cycles figuring out what it means for your own organization. And then once you do that, there's nothing out there that helps you keep up with how this stuff is evolving,
00:08:22
Speaker
both from the perspective of understanding what your maturity is, and also being able to measure that continuously. So we're still very early days. We do respect what NIST is putting out there around aligning with how you define and constrain your posture, how you do the enforcement and the monitoring and the validation aspect of it all.
00:08:49
Speaker
And, you know, we are looking at that as a contributing factor, but it's not the main focus. What we're looking to do is actually give the examples and give a very fast-and-loose way to do some quick measurement of where companies are, so they can get a baseline and start to chip away, as opposed to doing this huge undertaking of, hey, how do I interpret this? What do I go by?
00:09:14
Speaker
You know, what the heck is going on here? And never really making it very far. Yeah, there's a lot of paralysis by analysis right now. Yeah, there is.
00:09:25
Speaker
And a lot of the companies are doing the same things over and over again. All we really need is some type of master template: what's worked for the big organizations out there, put out into the world so companies can at least get the foundation and then go do the exciting stuff that is unique to their own company. Right? I think a lot of people are just trying to figure out how to do the same work and they don't know where to get started.
00:09:52
Speaker
But I think the problem is bigger than this, right? You're talking about a very practitioner-led approach. And I think you're in the same boat as me, and George knows this too. When you're dealing with shareholders, these people are typically non-technical, and they just live and thrive off hype.
00:10:09
Speaker
And the problem is trying to explain good logic, right? So you've done something that explains logic, at least to the practitioner side of the house.
Communicating AI Security to Non-Tech Stakeholders
00:10:22
Speaker
Are you going to also produce messaging that senior practitioners can then use to explain to shareholders, to somehow try to penetrate the mountain of greed? It's just...
00:10:39
Speaker
It's really dumb. I don't know how else to say it. Sometimes the hype feels like a force field, right? And I'm just like, dude, I don't want to see your company get compromised. I get that a dollar today is worth more than a dollar tomorrow, but we just don't want to see us get pwned. Are you going to create messaging that can enable people to explain to non-technical shareholders, who think that because they have money they're smarter than all of us, how we safely do the thing?
00:11:08
Speaker
And then, maybe it's a later question for the show, but it's like, hey, the AI isn't actually creative. The humans still have to do the creativity. Please, are you going to address that?
00:11:19
Speaker
Yeah, 1000%. Or at least we'll try. It's a huge part of what we're trying to tackle. One of the things that we talk about a lot as a group, and we just had our kickoff, but we've been having these conversations one-on-one for the past couple of months,
00:11:40
Speaker
is: how do you create a language that helps cut through the hype? How do you give practitioners language that helps them go talk to the board in a way the board's actually going to understand, so it's not just a bunch of people talking past each other? The thing that we put out in the world is: we don't want to just be another control or another framework.
00:12:02
Speaker
It's trying to be that translation layer. The security leaders we talked to, who are part of the group and whom we're also interviewing as part of our research, are telling us that they don't have a good way to explain to their non-technical folks, their business folks,
00:12:19
Speaker
that AI isn't just some magical... Magic. Yeah, magic. We talked about this a couple of weeks ago. And that's really not the case. It's another system.
00:12:32
Speaker
And it takes what you're putting into it and makes it a lot bigger, in some cases bigger than it needs to be. It requires a lot of permissions that humans are just giving to it. And I think they're having a hard time going back to these business stakeholders and giving them that context, or at least that working knowledge, that a lot of what you're giving it, the creativity, the input, the accountability, the governance, that's still on security, that's still on the business.
00:13:07
Speaker
And so, for us, one of the main lines is: we feel that the boards still can't tell the difference between the capability of AI and the responsibility model.
00:13:24
Speaker
And we're trying to build the first guidance, the first task at AI Seca, to show that that is where the risk begins, and where we really need to build in the controls early on.
AI's Role as a Force Multiplier in Security
00:13:39
Speaker
So it's almost like you're reading our show notes talking about translating, right? So after RSA there were a lot of hot takes, as always, from several leading lights in the industry, one of whom was Phil Venables, former CISO of Google Cloud, who was writing that AI security is not really a category, and that the things not being addressed are the fundamental risks, like a tidal wave of vulnerabilities from vibe-coded products.
00:14:13
Speaker
Attackers who aren't as resource-constrained, right? They've lowered the barrier to entry if attackers don't need to know code and can just hammer away with Claude or ChatGPT. And then, of course, this agent problem. I think business leaders hear agents and they're like, cool, better than employees, but not: what are they interacting with, to your point about permissions, in ways that may be increasingly out of your control? So that's a little bit more of the negative take.
00:14:45
Speaker
I also saw Marcus Hutchins talking about how no one's talking about the force multiplier it might also be for defenders, in terms of discovering the vulnerabilities faster and so on. So my question is not security-related. The question is: for someone who doesn't work in security, like the finance director, the HR lead, even a small business owner, what do you think this means for them? How would you talk to them about this?
00:15:14
Speaker
This is great. I talk about this a lot. You know, I work for a company that brands itself as an agent workforce, right? So this is the main focus of the conversation. So yeah,
00:15:31
Speaker
for the non-security people, I think it comes down to, you know, trust and exposure. Maybe not in those terms, because those are really security terms.
00:15:41
Speaker
But it's thinking about who, or what tools, you're trusting to make decisions on your behalf. So, thinking about finance and HR, if you're running a business, in a lot of cases
00:15:58
Speaker
the AI tools are already touching a lot of those workflows. And we see it every day. People aren't just building custom-made agents to do this stuff. There's a lot of AI that's already in
00:16:12
Speaker
a lot of the legacy tools, a lot of the new startups that are popping up. And so for me, I don't think the risk is in what's going rogue, the unknowns; it's what's already there. And people just don't have the visibility or the aptitude to recognize it, even if I ask them: what data are you putting in this? What permissions are you giving it? What's out there? And it's just because it's so widespread.
00:16:42
Speaker
It's so deep in a lot of the products they already have that, if the main user can't answer that, I don't know how easy it's going to be for a security team to go and assess it themselves, considering that every team has their own stack.
00:16:59
Speaker
Every team is doing a lot of things that the current security tools can be blind to; they're not covered, but they're immediately integrated. And so it becomes a matter of business hygiene.
Aligning AI Strategy with Business Objectives
00:17:11
Speaker
I think that's where, you know, I've been framing it, since in a lot of my conversations people start to get scared when I say: hey, what about security? What about compliance?
00:17:23
Speaker
So for me, the question that I tell every leader they should be thinking about goes past: do we use AI? Are we AI-first? Are we following our AI strategy?
00:17:38
Speaker
I think it should also be framed around: what are we using the AI to do? What is the surface there? And that should be part of most conversations, if not all, around being AI-first at an organization.
00:17:55
Speaker
Yeah, I like that framing, because it attaches the technology to an outcome rather than a yes or no to technology. I have to agree with you. I think, from my lived experience across multiple use cases and organizations, an organization first has to have an AI strategy to even have that conversation.
00:18:20
Speaker
So that's a problem, right? Understanding that. And I recently did a post about this and kind of pushed it, because I think it actually is a foundation for how security leaders should frame the business case for safe AI:
00:18:37
Speaker
What's the business objective in using AI, right? I don't think we as an industry have quite figured out how to frame that context and that question. And George, you'll see this later. I actually built something for us to look at. It's an entire infrastructure and framework.
00:18:56
Speaker
But essentially, I think there has to be an alignment between the business side of the house and the engineering side of the house.
Security Enabling Business Growth
00:19:03
Speaker
And the problem is businesses blaming engineering for not being fast enough, when engineering has to scramble and pivot on what they're doing sometimes mid-sprint, if not mid-PI or mid-quarter, right?
00:19:17
Speaker
Based on real results versus predicted results, revenue expectations not being met, et cetera. So, you know, you and I are of the same opinion, right? I've been saying for a long time that our job as security isn't to be gatekeepers, it's to be business enablers.
00:19:35
Speaker
We have to solve the problem of figuring out how we allow the business to take advantage of innovation and solutioning, not stopping people from doing things just because it's easier to say no.
00:19:46
Speaker
Right. That's how we get fired. That's how businesses don't move forward. So blocking AI adoption and just driving it underground, that's not going to work. But explain to me, in your opinion: what's the framing for creating a conversation where we're not letting organizations off the hook, but we're actually allowing them to adopt AI in a safe manner?
00:20:12
Speaker
And do you see it as a governance failure? Because I see it as a governance failure, not a security posture failure. How do you see the balance between control, visibility, and orchestration, and allowing employees to actually experiment with AI within the confines of safe guardrails?
00:20:39
Speaker
I talk about this a lot. So I've given several companies now a playbook for how you could get
Managing AI Tool Acquisition
00:20:51
Speaker
started. One of my favorite tips, and I learned this from someone; she basically was like, become best friends with procurement.
00:21:01
Speaker
Because if you're chasing what people are buying, you're already too late. You're already too late. She was like, we don't block. We just kind of see what gets bought and then go start asking some questions based off of that. So that's my funny tip for the day. And whenever I present that to people, they're like, oh, I never thought of that. I'm like, yeah.
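As a rough sketch of that tip, watching what procurement buys and then asking questions rather than blocking, a tiny report script might look like this. The vendor names and the data shape are hypothetical stand-ins for whatever a real procurement or SaaS-management export would give you:

```python
# Sketch: flag newly purchased tools that have no recorded security review,
# so security can "go start asking questions" instead of blocking purchases.
# The data shape and vendor names below are hypothetical.

def flag_unreviewed(purchases, reviewed_vendors):
    """Return purchases whose vendor has no recorded security review."""
    reviewed = {v.lower() for v in reviewed_vendors}
    return [p for p in purchases if p["vendor"].lower() not in reviewed]

purchases = [
    {"vendor": "AcmeChat AI", "buyer": "marketing"},
    {"vendor": "KnownCRM", "buyer": "sales"},
]

for p in flag_unreviewed(purchases, ["KnownCRM"]):
    print(f"{p['vendor']} (bought by {p['buyer']}): no review on file, go ask questions")
```

The point of the sketch is the workflow, not the code: the purchase feed is the source of truth, and security reacts to it instead of gatekeeping it.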
00:21:22
Speaker
You never think that you should just work with procurement to see what's coming in the front door. So that's one way for companies that are trying to stop blocking. Another funny story: my friend just sent me a screenshot of a conversation happening at his company, where there is a DevOps engineer arguing that he should be able to use, like, Claude's remote-control feature to send commands. He's like, how do you expect me to do my job?
00:21:48
Speaker
Is that the dispatch feature that connects to the mobile? Yeah. Yeah. He's like... wait, wait, has someone hit him already? I don't know. I think they're a remote company. They're going to need to fly over there. I'm sorry. You're going to send a drone at him. You want to punch him, but please go ahead. Dispatch a drone instead of using dispatch. But yeah, there is that fight. He's saying, hey, we're three InfoSec people in a 900-person company.
00:22:17
Speaker
Like, these are the fights that we're having that we know about. They got that person to ask, but nobody really knows unless they put themselves out there. So that's big. That's a big tension: how much you want to let people innovate versus how much you want to control, and some of the battles you just have to pick.
00:22:36
Speaker
Because, again, Claude at that company is a sanctioned tool. Right. So, but I have to ask, this is what's insane to me. You're telling me that, and George, you're at C-Society, so you can also confirm this.
00:22:50
Speaker
In my shop, I'm a process control freak. So my team is a core part of the procurement process. Nothing gets approved to be procured until my team approves it.
00:23:01
Speaker
Right? So that's a little bit of why we're overworked, but at the same time, it's a control measure. So you're saying that people are just buying shit without security actually looking at it, and they're just trying to orchestrate implementation without any assessment of the data implications? That sounds right for some fast-moving SaaS companies. Yes. Yeah. The mid-market fast movers are doing that.
00:23:27
Speaker
I hate this industry. Jesus Christ. Okay, cool. That's all. Happy Friday. Well, yeah. The other thing is, for security teams, it was always marketing that was the hotspot, because they could just put shit on a credit card, right? Just sign up for stuff and then plug it into Google Workspace or whatever, without your standard risk assessment.
00:24:04
Speaker
Amber, I want to change tack here and widen the scope of the conversation beyond just security and control. There is a version of the AI risk conversation that lives in research papers, academia, keynotes, whatever.
00:24:24
Speaker
And then there is the version that is happening right now, which you've illustrated, whether it's access, whether it's people just asking stuff or not asking. And that happens quietly and incrementally,
Public Perceptions vs. Operational Concerns in AI
00:24:40
Speaker
right? That's not like the doomsday scenario.
00:24:43
Speaker
Which one do you think people should actually pay attention to? And when I say people, I don't even mean just business folks. I have this conversation a lot with my friends who don't work in tech, because they're getting two very different conversations. There's "OpenAI is amazing" and "oh my God, it has all my passwords." And then there is also, you know, Eliezer Yudkowsky's nonsense about a human holocaust. I don't know.
00:25:16
Speaker
What is the conversation you're having with your non-tech friends? Uh, yeah. First of all, some of the conversations I'm having are: I vibe-coded an app.
00:25:28
Speaker
It's on localhost.whatever. Go check it out. And I'm just like, yeah, I'm not entering myself into that. That's just "help me fix my printer" type vibes. But yeah, I think people... so
00:25:48
Speaker
I feel like there's a lot of hype out there, and I think there's a huge gap between the hype and the actual reality that we're in.
00:26:00
Speaker
So, yeah, you have one crowd, even non-technical people like my own mom, talking about how AI is going to end humanity, and hope you saved a lot of money because you might not have a job anymore in a year, and blah, blah, blah.
00:26:13
Speaker
And then you have the other side of people, who give me hope, who are just spinning up a localhost app that has no guardrails and doing whatever that thing asks them to do. And I'm like, okay, cool. Maybe we have a little bit more time than I thought.
00:26:29
Speaker
I don't know. I feel like we're not quite there, based off of what I'm seeing. I think there's a lot of hype cycle around trying to spin it that way. And also, yes, as you say, spin: there are intentional narrative
00:26:49
Speaker
operations at work. The AI shareholders want that, right? There's a lot of want in the hype there, because it somehow makes them more money. But I don't know. I feel like a lot of the risk is more operational than big, doomsday, and scary.
00:27:09
Speaker
But I don't think we're going to have the sci-fi apocalypse yet. I think we're going to see a lot of the same fast-moving risks, with more frequency, that we've already been seeing over the years, because the same security problems happen when you do the same old things, just in a different tool or on a different surface.
AI Accelerating Existing Security Risks
00:27:36
Speaker
So what we're seeing these past couple of weeks, I don't think it's truly just related to AI. I think it's just accelerated by it. And instead of thinking, hey, a robot fleet is going to come kill us, it's:
00:27:51
Speaker
start worrying about the random thing that you just downloaded because you saw an X post about it, and it's just sitting on your computer, doing a lot of the same things as
00:28:03
Speaker
a tool that you could have downloaded two years ago, just maybe with faster frequency or more reach. So I feel like those are the conversations. Same security problem, just new cool clothes.
00:28:19
Speaker
Yeah. I'll be real with you, man. So there's a thing I've been working on with George for the last year, and from a build standpoint, there's an AI implementation piece for it, for certain use-case applications, in a later version. But I actually made a call about three, four months ago that for the initial GA,
00:28:41
Speaker
a basic, well-done script can do the exact same thing I would need an AI model to do. So I could save a ton of cost and effort just by doing proper development.
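As an illustration of that call, a fixed-format task where a plain deterministic script covers what an LLM would otherwise be asked to do, here is a minimal sketch. The ticket categories and keywords are invented for the example:

```python
# Sketch: deterministic keyword routing instead of an LLM call.
# Categories and keywords are invented; the point is that a fixed-format
# task can often be handled by a cheap, predictable script.

RULES = {
    "billing": ("invoice", "charge", "refund"),
    "access": ("login", "password", "locked out"),
}

def route_ticket(text: str) -> str:
    """Return the first category whose keywords appear in the text."""
    lowered = text.lower()
    for category, keywords in RULES.items():
        if any(k in lowered for k in keywords):
            return category
    return "triage"  # deterministic fallback, no model guess needed

print(route_ticket("I was double charged on my invoice"))  # billing
```

Same input, same output, every time, with no inference cost, which is exactly the trade being described: if the task is this regular, the model adds expense without adding capability.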
00:28:53
Speaker
Yes. Yes. God damn it. So, actually, maybe a deterministic output is fine. George, I'll explain it offline in further detail, but literally I made the call: we don't actually need AI for this thing. But yeah, I have to make the comment too: I don't think that Skynet is necessarily a thing. I do see an issue, though.
00:29:17
Speaker
And maybe it's another episode, maybe it's another conversation, maybe we just go have tea and shisha and stay awake till four in the morning freaking out about it. But I think the push for universal basic compute as a concept basically puts us in a place of techno-feudalism once again.
Economic Control or Digital Slavery?
00:29:34
Speaker
Because it reminds me of when mining companies back in the day used to issue scrip. They used to issue dollars that weren't actually currency, but it was money that the miners could only spend within the company store. Yeah. Yeah.
00:29:50
Speaker
So it didn't actually give the miners an opportunity to have any kind of economic freedom. It just kept them locked into the system. And so I think, if it gets to a place where all of the IP that gets produced using models is still owned by the companies, regardless of who produces it, whatever model company you're using, they own whatever you produce off of it.
00:30:11
Speaker
I think that creates the beginning of a new form of digital slavery that will prevent people from ever actually benefiting from their AI use.
00:30:22
Speaker
And I know that's a big statement. That's a big idea. But I honestly think that's the doomsday scenario. Yeah. Yes. The things that I worry most about today are what we talked about in the last episode: people trucking gas turbines into rural communities and just running them 24/7, making people sick, for stupid LLMs that provide no value. That's a risk.
00:30:47
Speaker
AI psychosis, which a new paper out of MIT, like, mathematically proves is nearly inevitable. That's a problem.
Current AI Risks vs. Sci-Fi Scenarios
00:30:56
Speaker
Young women being subjected to deepfake abuse.
00:31:02
Speaker
That is a problem. None of this is sci-fi. That's just real shit happening right now. That sucks. Anyway, sorry, I got on my soapbox. I'll step down. No, no, it's okay, because there's the other point of that too.
00:31:17
Speaker
And, you know, since we had Savannah as a friend of the show, and she's done sex work: now there's a whole generation of AI content creators who are just doing the OnlyFans thing, but it's dudes who are building AI women to dupe other dudes out of money. So now actual sex workers are losing their income. Again, I work in dating, so I know about these things.
00:31:36
Speaker
I digress. So you're trying to build something independent at a moment when, essentially, the big players in tech are all racing to define the rules. How do you maintain a credible voice when the institutions with the most resources also have the most to gain from making sure that a particular answer gets accepted as the final solution?
00:31:58
Speaker
So I assume this is relative to AI Seca, right? Yeah.
AI Seca's Vendor-Agnostic Approach to AI Security
00:32:06
Speaker
Okay. I mean, it's the hardest part, right? There are going to be a lot of folks that are more well-funded than us, or, you know, not as well-intentioned, but they have more money to throw at solidifying where they're coming from.
00:32:23
Speaker
And it's the classic thing: the people who are writing the rules or setting the standards are also selling the shovels. And you see this with everything. Cloud security was that way.
00:32:35
Speaker
Application security was that way in some cases when I first started, and so on. I think for us, our goal to stay credible, and to make sure, hopefully, that people listen to us, is that we want to stay as practitioner-led as possible. We have people in the group that have not worked for security companies at all, or that are doing the work for security companies but staying completely separate
00:33:04
Speaker
and vendor-agnostic; otherwise they get kicked out of the group. Very quickly, we set up rules of engagement there. We've already created the policy around it, where it's like, hey, this can't go back in for you to go sell a product.
00:33:20
Speaker
Although, you know, once we put that public, how enforceable can it be? But in terms of the influence, we are very careful about it. We're not lobbying for building something specific. We're trying to codify what's working in the companies that the practitioners in the group are coming from.
00:33:38
Speaker
So everything that we work on, in terms of how we progress through it, is published openly. It's on our GitHub. We have a GitHub already with the initial guidelines.
00:33:50
Speaker
We're starting by aligning with NIST, but we're looking at some of the other frameworks and contenders that are out there so that we can try to do mapping. You know, AIUC is working on stuff.
00:34:01
Speaker
Of course, there's the EU AI Act. MITRE ATLAS also. Yeah. There's a lot of stuff out there that overlaps, just like a lot of the previously known compliance frameworks that no one's doing the mapping work for. It's like, why would you go do all the work again when a new compliance framework pops up? Amber, we're going to close out here with the last question and the invitation.
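The crosswalk work described here can be pictured as one shared lookup table between frameworks. A minimal sketch, assuming entirely made-up placeholder control IDs (none of these are real NIST AI RMF, EU AI Act, or MITRE ATLAS identifiers):

```python
# A minimal sketch of compliance-framework crosswalk mapping.
# All control IDs below are hypothetical placeholders, not official
# NIST AI RMF, EU AI Act, or MITRE ATLAS identifiers.

CROSSWALK = {
    # source control -> equivalent controls in other frameworks
    "NIST-GV-1": {"eu_ai_act": ["EU-Art-9"], "mitre_atlas": ["ATLAS-M-01"]},
    "NIST-MP-2": {"eu_ai_act": ["EU-Art-15"], "mitre_atlas": []},
}


def map_control(control_id: str, target: str) -> list[str]:
    """Look up which target-framework controls a source control maps to."""
    return CROSSWALK.get(control_id, {}).get(target, [])
```

The point of maintaining one table like this is that when a new framework pops up, you add a column to the mapping instead of redoing the whole assessment from scratch.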
00:34:27
Speaker
You are cordially invited to give a spicy take. If we zoom out entirely, past frameworks, past threat models, past conference talks, what do you think is the deeper thing at stake here for business and the economy that you don't think the security industry is necessarily being honest about?
Critique of the AI Security Industry
00:34:50
Speaker
In other words, this is your chance to fire a cruise missile through the hype. Okay. I have a couple of ways I could take this, but let me think of the spiciest way to frame it. Okay.
00:35:10
Speaker
So. The security industry as a whole is being extremely dishonest right now. I think AI security is just recycled cloud security with a new label, a fancy new label with the sparkles emoji all over it.
00:35:28
Speaker
Sparkles. Really frustrating. Yeah. Sparkles, vibes, whatever. I think there's a lot of pretending going on. I see a lot of self-proclaimed AI security experts who I didn't see doing anything with AI a year ago.
00:35:42
Speaker
And, you know, they're pretending this is a brand new thing. But I think it's really the same ungoverned, unknown supply chain problem that was never fully solved.
00:35:56
Speaker
It's still not fully solved. Oh, 100%. Yeah, it's an old problem, just with a new input. And now, I think, the problem with us not fully solving this in the past is that we have this new thing that's moving a lot faster. Like I said, it's not a new frontier. It's just moving at a faster speed and a bigger volume.
00:36:17
Speaker
And so every vendor I've seen lately, they're just giving you a dashboard, or they're giving you their wares. And I think they're just ignoring the fact that a lot of the companies they're selling to have a bad discoverability problem to start.
00:36:34
Speaker
And it's like, how are you going to give me a dashboard when I don't even know what's going on in the company, just overall, in terms of what's being used? So I feel like it's less innovation and more theater.
00:36:49
Speaker
And I saw a lot of those dashboards on the expo floor, and I was like, isn't that just an API call out to Claude or ChatGPT? Oh, dude, all they're selling now is dashboards. And I appreciate you, Amber, because you're my spirit animal. I literally just did the same callout a couple of days ago.
00:37:05
Speaker
I can't say it on LinkedIn exactly, but it was more or less me saying you're all full of shit; this is really just make-believe. That's it. Yeah. But how many people are looking for a fast solution to a problem they think they have? A lot. A lot of security people are, and it's not their fault. It's endemic to their current situation, which is all the compounded problems that they didn't solve in the first place, plus this.
00:37:32
Speaker
So, yeah, I don't know. I guess my last thing is: until we take a hard, honest look at the truth that AI security is mostly just marketing right now, and that it's covering a discoverability and visibility crisis, we're just doing vanity security on it. Yeah.
00:37:58
Speaker
Jazz hands and sparkle emojis. That's where we're going to end it. There you go. We're just doing vanity security. That's right. Amber, thank you so much for the time. Thank you for lending your expertise and your experience. We really appreciate it. Yeah, happy that you guys invited me on. Hopefully it was spicy enough. All
00:38:23
Speaker
right, gang, questions for you to take forward. There is a familiar adage from military circles: slow is smooth, smooth is fast. My contrarian take is that companies that don't thrash around with AI-this, AI-that will probably make better business decisions.
00:38:41
Speaker
But I think the question I wanted to highlight was the one that Amber posed, which is: what do you want the AI to do? Really ground the technology in an outcome, whether it's societal or business,
00:38:54
Speaker
instead of just: are we using it, are we first, are we winning the race? Those are the wrong questions to ask. Yeah, I think my question is, ask yourself: why are we implementing this?
00:39:11
Speaker
What's the ROI? What's the objective? What are we trying to achieve? And have we actually manually figured out the process that this AI solution is going to automate? Because if you haven't manually figured it out, and you haven't done the core foundational things to enable your organization and your environment to adopt cutting-edge technology in a safe manner that protects your data and your customers' data, is this really the right time to do the thing?
00:39:39
Speaker
Dope. Take it forward. We'll see you next week.
00:39:49
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:40:02
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week, but until then, stay real.