Introduction to Automation vs. Autonomization and Trust in AI
00:00:00
Speaker
We talk about automation as taking a manual process and automating it, but all of the decisions within that process have to be hard-coded into it. All the logic behind it has to be predefined and hard-coded.
00:00:17
Speaker
And we want to move to this new world where we talk about autonomization rather than automation, the difference being that in autonomization, we're really letting AI make decisions for us, decisions that aren't just based on predefined logic.
AI Decision-Making: Operational, Tactical, and Strategic
00:00:37
Speaker
And the big problem for getting there is our trust in AI systems and the AI decision-making process. So our research laid out this nice framework of different types of decisions, from operational to tactical to strategic, and then different ways that AI can enhance those decisions: either by informing human analysts, by making decisions on its own that require approval by humans, or finally by moving to a fully autonomized AI decision without human intervention.
Introducing the Podcast and Guest Kate Wood
00:01:23
Speaker
Yo, you know what it is. This is the show. This is Bare Knuckles and Brass Tacks, the cybersecurity podcast that tackles the human side of the industry: trust, respect, and everything in between. I am George K. with the vendor side. And I'm George A., Chief Information Security Officer.
00:01:39
Speaker
And today our guest is Kate Wood, who is Associate Vice President at Infotech for Security, Privacy, Infrastructure, and IT Operations. She is researching the front lines of how teams make decisions around AI tooling: how they adopt it, and what they can do in that decision process to make better decisions.
00:02:02
Speaker
Yeah, this was a really good conversation. I think it was a really important one. And we got to pressure-test some of our own hypotheses
Kate Wood on AI Tool Adoption and Decision-Making
00:02:09
Speaker
in here. And yeah, I had a blast. Yeah, I mean, she's a really smart person, a pure researcher. She's been in the space for a long time. And I think we got to hear some really interesting perspectives on where we see the future of automation going, where the applicability of AI in solutions is going, where security teams can really derive value from using AI solutions, and why they're not really implementing them in their environments now.
00:02:37
Speaker
And what folks in the AI vendor space need to do to make products that are truly appealing to security practitioners. I think Kate really has her finger on the pulse on that.
00:02:50
Speaker
Yes. And in keeping with the latest ethos of the show, we also touch on mentorship and Kate's very unique advice about how to advance your career. But I don't want to spoil that. So we'll turn it over to Kate Wood.
00:03:06
Speaker
Kate Wood, welcome to the show. Thanks so much, George. Really happy to be here. Yes, so let us start with an introduction to the areas of research that have piqued your
AI's Impact on Business and Security Tensions
00:03:19
Speaker
highest interest. You work for an analyst firm. You're keeping your eyes on lots of trends.
00:03:25
Speaker
Your remit is really broad in your title: Security, Privacy, Infrastructure, IT. So I think the most obvious question is, what has your attention these days and why?
00:03:37
Speaker
It should come as no surprise whatsoever, I think, that artificial intelligence is what is on everybody's mind. And it's all about trying to understand what exactly AI is capable of doing, how we can actually put it to use in some way that returns on the investment we're putting into it, and trying to catch up with everything that's going on in that world.
00:04:02
Speaker
Yeah, I noticed, because you have security, privacy, infrastructure: literally all of those things are touched by AI. I guess I would ask if you can speak a little bit on that tension between the parts of the business that are like, we need the competitive advantage, put it into all the things, and then the security and privacy professionals who are like, wait, but you pay us to protect the organization from these risks.
AI Compliance and Governance Challenges
00:04:32
Speaker
Yeah, it's really pretty crazy, that's for sure. We definitely see a lot of pressure on security teams to say yes to AI, even though there's so much that people don't understand about it yet.
00:04:48
Speaker
And we also see that companies need to comply with certain requirements and expectations around AI. Certainly we're seeing in Europe that AI regulations are coming out.
00:04:59
Speaker
We've seen attempts to regulate AI in other jurisdictions. And so companies need to pay attention to these regulations, but often they just offload it to the security team because, well, the security team does compliance, right? So why can't they also do this explainability stuff and this ethical AI stuff?
00:05:22
Speaker
But of course, the security teams aren't necessarily the right people to be doing that. So I think there's still a lot of work that companies need to do to sort out the entire governance structure around how they're using AI, right?
Concerns Over AI Security and Open-Source Models
00:05:36
Speaker
Yeah, I'm going to say one thing, which will kick off to the other George here, which is we've had a number of AI security researchers on the show. We've had Adrian Wood, and we'll have Ads Dawson coming on soon.
00:05:51
Speaker
And the thing that strikes me the most about their research is not the well-known prompt injection stuff around LLMs. It's actually poisoning the code itself, the models themselves. And as you said, there's pressure to put AI into all the things, which means dev teams are pulling down models off Hugging Face or wherever else, just these open-source models. And they're not taking the time, or maybe don't even have the capacity, to look at the code. And I try to take people by the shoulders and I say, we still haven't gotten AppSec right. Like, just static code.
00:06:29
Speaker
And now you're going to put in models that are calling out to things that you don't understand. Anyway, it's a huge third-party risk, just a hellscape.
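Even basic integrity checking would help with the unreviewed-model problem being described here. A minimal sketch, assuming you maintain your own pin list of known-good weight hashes; the digest below is a fake placeholder:

```python
# Sketch: refuse to load model weights whose hash doesn't match a pinned value.
# You'd maintain the pin list yourself; the digest below is a fake placeholder.
import hashlib

PINNED_WEIGHTS = {
    "model.safetensors": "0000000000000000000000000000000000000000000000000000000000000000",
}

def weights_verified(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream large files
            h.update(chunk)
    return PINNED_WEIGHTS.get(path.rsplit("/", 1)[-1]) == h.hexdigest()
```

This doesn't tell you the model is safe, only that it's the exact artifact you vetted; the vetting itself is the hard part being lamented above.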
00:06:41
Speaker
But George, I will kick that over to you as a segue. Yeah.
Resistance to AI Implementation in Security
00:06:45
Speaker
You know, having to run security for a major CI/CD, it's the bane of my existence a little bit, because every week my devs are like, well, what if we just run this through GPT and we just install it on Docker and then it just doesn't communicate with the network? And you're like...
00:07:02
Speaker
Guys, it's still proprietary. You're still writing code that's going to prod. I can't let that happen. And then they just freak out. Thankfully, my CIO is super firm: we're not going to touch this until we have a secure implementation, which we're on the verge of.
00:07:18
Speaker
But even then, it's like, we just got the implementation done with AWS, but we still have to train everyone. And everyone's like, well, can I get a login? And you're like, no, no, there's a process.
00:07:29
Speaker
Anyway, Kate, it's really good to talk to you. I actually worked with a colleague of yours for quite a little
Vendor Risk Management and AI's Role
00:07:37
Speaker
bit. Isabel Hurtanto is a good buddy of mine. Okay, absolutely.
00:07:42
Speaker
That's fabulous. We shared, like, she ran a pro services practice while I ran a managed services practice at our last shop. And then we both were like, fuck it, and quit the same day without coordinating. So that was kind of like... If you see her, tell her I said hi. I'll probably shoot her a message after this.
00:08:01
Speaker
But on to the show: let's talk about vendor risk management. Currently, in most organizations, it's really a checkbox exercise that involves sending out a questionnaire, and a backfilling interview if the answers aren't quite satisfactory.
00:08:15
Speaker
I know that you've done a lot of really good work specifically in this space. How would you want to improve on that process? And can it be modernized, in your opinion? Yeah, absolutely. I think one of the big problems that I see with vendor risk management is companies treating all their vendors as one-size-fits-all.
00:08:35
Speaker
And so they send out the same questionnaire, they do the same level of due diligence on every single vendor, and it's an enormous amount of work. Reading a SOC 2 report takes hours. Going through a questionnaire, especially a big one like the Standard Information Gathering questionnaire, can be a thousand questions or more, and open-ended questions at that. So you've got so much stuff that you have to comb through in order to understand what kind of risk you're bringing into the organization.
00:09:09
Speaker
So the first piece of advice that I give to companies is: you need to really look at what am I buying, and what is the risk that I'm bringing into my organization based on the product or service that I'm buying.
AI Tools in Risk Assessments and Privacy Concerns
00:09:21
Speaker
And then let's put the level of due diligence into the vendors that we're looking at according to that risk. So if it's a cloud-based ERP system, you're going to want to do all of the checks.
00:09:35
Speaker
But if it's a relatively low-risk vendor, you may do a much lighter sort of risk assessment on them. That's a really big strategy that companies should be using to right-size their vendor risk management.
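A minimal sketch of that tiering idea, with the tier criteria and due-diligence mappings invented purely for illustration, not taken from any published methodology:

```python
# Sketch: right-sizing vendor due diligence by inherent risk.
# Tier criteria and activity mappings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    handles_sensitive_data: bool  # e.g., PII, credentials, financials
    business_critical: bool       # an outage would halt core operations

def risk_tier(v: Vendor) -> str:
    if v.handles_sensitive_data and v.business_critical:
        return "high"
    if v.handles_sensitive_data or v.business_critical:
        return "medium"
    return "low"

DUE_DILIGENCE = {
    "high":   ["full SIG questionnaire", "SOC 2 Type II review", "pen test attestation"],
    "medium": ["SIG Lite questionnaire", "SOC 2 summary review"],
    "low":    ["self-attestation", "external ratings check"],
}

erp = Vendor("cloud ERP provider", handles_sensitive_data=True, business_critical=True)
print(risk_tier(erp), "->", DUE_DILIGENCE[risk_tier(erp)])
```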
00:09:51
Speaker
And in terms of modernization, absolutely, there's lots of opportunity out there. The vendors are getting better at it; they've been at it for a little while now. The BitSights and the SecurityScorecards, those companies are getting better at it.
00:10:08
Speaker
But I think even within companies, they can be doing more. We've been experimenting, for instance, with having gen AI systems read a SOC 2 report and summarize it.
00:10:20
Speaker
What controls are not being tested? What controls has the company not implemented? That's something that could take a security analyst hours to figure out.
00:10:32
Speaker
An LLM can take much less time to figure it out.
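As a rough illustration of that experiment, here is a hedged sketch using the OpenAI Python client as one plausible stand-in; the model name and prompt wording are assumptions, and the confidentiality concerns raised next in the conversation would need addressing before feeding in a real report:

```python
# Sketch: asking an LLM to flag untested or unimplemented controls in a
# SOC 2 report. Model choice and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_soc2(report_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable long-context model
        messages=[
            {"role": "system",
             "content": "You are a security analyst reviewing SOC 2 reports."},
            {"role": "user",
             "content": "List the controls that were not tested and the controls "
                        "the company has not implemented, citing the section each "
                        "finding came from:\n\n" + report_text},
        ],
    )
    return response.choices[0].message.content

# Example: print(summarize_soc2(open("soc2_report.txt").read()))
```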
AI in IT and Business Processes: Privacy and Security
00:10:38
Speaker
I mean, that's kind of the thing, though: what are the odds that you want to put proprietary information in? Because I would pose to you the follow-up question: if I'm an organization that's been asked to fill out a vendor risk questionnaire, which I may or may not know goes through an LLM, I have to reveal specific information about how I configure my infrastructure, or how I configure security in my infrastructure.
00:11:04
Speaker
What assurances do I need to have, then, that make it feel safe to provide you, as an organization I want to do business with, with that critical information? Because I would still consider the results sensitive. We do that at my work: we do a lighter-scale kind of vendor risk assessment for the easier organizations, like the Burp Suites of the world.
00:11:26
Speaker
And then if we're doing a more enterprise-grade implementation, we have to do the whole big annoying thing that everyone jokes about. In both cases, though, it reveals sensitive information about how you have things configured.
00:11:39
Speaker
Do you really feel safe putting that through an LLM? Yeah, I think that if you are planning to feed any vendor information through an LLM to summarize it or whatever, you definitely need to provide a disclaimer and allow people to opt out of it.
00:11:57
Speaker
Of course, eventually everything is going to go through AI, and this is going to become moot. We see companies all the time saying, we're going to take your customer information and put it through some sort of artificial intelligence system now.
00:12:14
Speaker
So it's really just a matter of time before it's standard practice for almost any IT or business process.
AI in Governance, Risk, and Compliance
00:12:23
Speaker
Yeah, there are two things there.
00:12:28
Speaker
At some point, it's also just going to be agents talking to agents, right? Like, send this off to their compliance agent. The other is that I have never seen such an explosion around a specific category the way AI is in all the security stuff. Machine learning has been in security products for a long time; it's the only way to deal with that volume of data.
00:12:54
Speaker
But out of nowhere, suddenly GRC is the sexy space, because you can bring language models to bear, as you said, on all of these things. I guess the consideration becomes: if it's just LLM tooling reading long-form things that may or may not have also been generated using LLM tooling, does it just become a different flavor of the checkbox problem?
Building Trust in AI Decision-Making
00:13:20
Speaker
That's a great question. I don't know that I have a great answer for it. I do think that, at the end of the day, you have to understand what your risk tolerance is for your vendors, and understand what level of assurance you're really getting out of any of this due diligence.
00:13:36
Speaker
SOC 2 reports are kind of notorious for being pretty wishy-washy. Vendor questionnaires, you're just taking their word for it.
00:13:47
Speaker
Even the things that companies like SecurityScorecard and BitSight are doing rely, for the most part, on open-source information. So we're not really lifting the hood, so to speak, and getting a good idea of what kind of security is in there.
00:14:05
Speaker
So it's going to drive more companies to adopt a bit more of a zero-trust approach to supply chain risk, really walling off the riskier software components or hardware components that they're buying.
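To make "walling off" a little more concrete, here is a toy sketch of a default-deny egress posture for components sourced from riskier suppliers; the policy shape is invented purely for illustration:

```python
# Sketch: default-deny egress for risky third-party components.
# The policy structure is invented, not any product's real format.
EGRESS_ALLOWLIST = {
    # component name -> destinations it is explicitly allowed to reach
    "vendor-ml-model": {"api.vendor.example:443"},
}

def egress_allowed(component: str, destination: str) -> bool:
    # Zero-trust posture: anything not explicitly allowed is denied.
    return destination in EGRESS_ALLOWLIST.get(component, set())
```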
Upcoming Event Announcement in Toronto
00:14:24
Speaker
Hey listeners, listen up. We're coming to Toronto. We'll be setting the stage on fire with the opening keynote at Secure World Toronto on April 8th. And we'll be closing out the show with our signature event, the Cyber Pitch Battle Royale.
00:14:42
Speaker
Check the show notes for discount codes when you register for Secure World and a link to register for the Battle of Toronto. Let's see who takes home the belt. We hope to see you there.
00:15:01
Speaker
So I have this other hypothesis that I sent you ahead of the interview, right? And one of the benefits of this show is that we get to interview smart people so I can pressure-test those hypotheses.
00:15:15
Speaker
So George and I have talked about the rate of AI adoption in security tooling. We have a couple of folks who are field CISOs, who are friends, and they've said that out in the field, it's kind of hard to get some organizations to adopt the latest and greatest features inside of the things that they work on, because the teams just aren't architected to absorb the information that fast.
Skepticism and Frameworks for Trusting AI Decisions
00:15:43
Speaker
Right. Because if you think about human organization,
00:15:46
Speaker
It's always been about human specialization. Here's the GRC team. Here's the IR team. Here's the SOC team. That's just how humans organize themselves. So I guess first I want kind of your impressions there because my hypothesis is that we will hit this inflection point where teams can't actually adopt the super fast responsive AI stuff because it's still going to run into this human process roadblock.
00:16:12
Speaker
But yeah, let me get your thoughts there, and I have a follow-up question. Sure, that's a great question. We actually did some really interesting research pretty recently that lays out a framework for how companies can really approach AI security.
00:16:30
Speaker
My colleague, Fred Chagnon, was the primary author of this research, but it really lays out this framework for what types of decisions we are potentially going to outsource to AI.
00:16:43
Speaker
And we talk about automation as taking a manual process and automating it, but all of the decisions within that process have to be hard-coded into it. All of the logic behind it has to be predefined and hard-coded.
00:17:02
Speaker
And we want to move to this new world where we talk about autonomization rather than automation, the difference being that in autonomization, we're really letting AI make decisions for us, decisions that aren't just based on predefined logic.
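To make that distinction concrete, a minimal sketch; the rule threshold and the model interface are invented for illustration. The automated version hard-codes every decision, while the autonomized version delegates the judgment call to an AI system:

```python
# Sketch: the same triage step, automated vs. autonomized.
# The rule threshold and the model.decide interface are hypothetical.

def triage_automated(alert: dict) -> str:
    # Automation: every decision is predefined, hard-coded logic.
    if alert["severity"] >= 8 and alert["asset_tier"] == "crown-jewel":
        return "escalate"
    return "close"

def triage_autonomized(alert: dict, model) -> str:
    # Autonomization: the decision itself is delegated to an AI system,
    # which can weigh context that was never encoded as explicit rules.
    return model.decide(
        task="Should this alert be escalated, investigated, or closed?",
        context=alert,
    )
```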
00:17:22
Speaker
And the big problem for getting there is our trust in AI systems and the AI decision-making process. So our research laid out this nice framework of different types of decisions, from operational to tactical to strategic, and then different ways that AI can enhance those decisions: either by informing human analysts, by making decisions on its own that require approval by humans, or finally by moving to a fully autonomized AI decision without human intervention.
00:18:02
Speaker
And so I think we've laid out a really nice way for people to make informed decisions about how much they're going to trust their AI systems to take on more security work.
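One way to picture that framework in code; the two axes follow the description above, but the example policy pairing them is an assumption, not the published research:

```python
# Sketch: decision scope vs. autonomy level, per the framework described.
# The POLICY mapping is one organization's illustrative risk tolerance.
from enum import Enum

class Scope(Enum):
    OPERATIONAL = "operational"  # e.g., triage a single alert
    TACTICAL = "tactical"        # e.g., tune a detection rule
    STRATEGIC = "strategic"      # e.g., retire a control

class Autonomy(Enum):
    INFORM = 1      # AI informs a human analyst
    APPROVE = 2     # AI decides, a human approves
    AUTONOMOUS = 3  # AI acts without human intervention

POLICY = {
    Scope.OPERATIONAL: Autonomy.AUTONOMOUS,
    Scope.TACTICAL: Autonomy.APPROVE,
    Scope.STRATEGIC: Autonomy.INFORM,
}

def allowed(scope: Scope, requested: Autonomy) -> bool:
    # Permit a request only up to the autonomy level set for that scope.
    return requested.value <= POLICY[scope].value
```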
Outsourced Security Services and AI Adoption
00:18:17
Speaker
We're definitely not there yet.
00:18:19
Speaker
The AI systems that we know today don't really understand all that much. They're, in many cases, glorified text predictors, right? But we're moving fast towards a place where AI can make better decisions.
00:18:36
Speaker
And it's all about understanding when you can start trusting AI to make more and more decisions. Yeah. So you said risk tolerance before. So it's almost like your tolerance for autonomous decision making.
00:18:51
Speaker
Exactly. Because an AI system, if it makes a bad decision, could have significant business impacts. If it says, well, I think this block of IP addresses is potentially hostile, I'm going to block all of them,
00:19:10
Speaker
then you might be interrupting business processes. So it's all about understanding what the risk is to the business of the decisions that AI is making.
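That IP-blocking example maps naturally onto an approval gate, where blast radius rather than model confidence decides whether a human signs off. A hedged sketch; the thresholds and helper functions are assumptions:

```python
# Sketch: gating an AI-proposed IP block on business blast radius.
# Thresholds and helper functions are illustrative assumptions.

def handle_block_recommendation(ips: list[str], confidence: float,
                                traffic_share: float) -> str:
    """traffic_share: fraction of legitimate business traffic from these IPs."""
    if traffic_share > 0.01:
        # Wide blast radius: a wrong call interrupts business processes,
        # so a human must approve regardless of model confidence.
        return queue_for_human_approval(ips)
    if confidence >= 0.99:
        return apply_block(ips)   # narrow blast radius, high confidence: act
    return log_for_review(ips)    # otherwise, just inform the analyst

def queue_for_human_approval(ips): return f"pending approval: {len(ips)} IPs"
def apply_block(ips): return f"blocked: {len(ips)} IPs"
def log_for_review(ips): return f"logged for review: {len(ips)} IPs"
```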
00:19:23
Speaker
I was going to chime in, because it's a lead-in to George's second part. I think what prevents me from buying in as a CISO, and you might hear this from other CISOs that you talk to as well, is that it's really an output-based problem. Nothing that I've seen in the technology space now is providing me with enough output that I would go to the board and spend political capital to try to purchase it.
00:19:51
Speaker
Are you seeing anything different? And that kind of leads straight into George's second question as well.
00:19:58
Speaker
Yeah, I think that's a really good observation. The vendors are obviously making all kinds of statements about the capabilities of their systems. But just by going to any of the main companies' LLMs out there, you can see they've got some really awesome capabilities, but at the end of the day, they still hallucinate.
00:20:22
Speaker
They still make all kinds of mistakes. And are we really going to trust them with significant decisions? So we need to really be clear about what we're asking AI to do before, as you say, we spend the political capital to automate or autonomize some of our systems.
00:20:41
Speaker
Yeah, and my follow-up question had been somewhat answered earlier, and I think in your answer here, which is your advice to security teams that are facing that pressure. But it sounds like what the framework you've worked on is suggesting is reframing the question: not one of technology features and widgets, but having security teams think about their tolerance along the spectrum of decision-making. Like, where are we comfortable? You want to flag an alert, or you want to catch something in the logs? Yes, you, the system, are empowered to escalate that as you see fit, and then maybe we jump in the loop. Whereas maybe another organization is like, just remediate it.
00:21:30
Speaker
It's like, take it all the way to conclusion. Yeah, I think that's a very fair representation. I feel like there is another risk, though, which is that a lot of companies outsource significant pieces of their security services to MSSPs, MDRs, those types of companies. And those are the ones that are probably going to be most pressed to adopt AI systems in order to
00:21:59
Speaker
bring in new business, to make better marketing claims, and also to alleviate some of their staffing concerns, because staffing an MSSP 24/7 is a difficult business.
Automation and AI in Security: SOAR Implementations
00:22:13
Speaker
So I think we're going to see AI security systems creeping into our environments through those third parties as well.
00:22:24
Speaker
Yeah, that's an excellent point, because I have also just seen a trend towards more outsourcing, because there's downward budgetary pressure on hiring FTEs. So more and more, there are fewer and fewer organizations that are maintaining their own SOC, essentially, right? They just shop it out to the MSSP. So that's an interesting idea, that the exposure to AI and the decision-making actually comes from your reliance on the outside party.
00:22:54
Speaker
But yeah, over to you, George.
00:22:58
Speaker
Yeah, it's difficult, because we still have to try to lean into automation. And I think automation is kind of the biggest immediate benefit of trying to implement anything with a machine learning model.
00:23:14
Speaker
I know, looking at SOAR implementations, any new SOAR implementation now is going to talk about how machine learning is going to give it better decision-making, so that it actually provides better-adjusted processing when it goes through an incident or whatever process you're running.
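For a sense of what an ML-adjusted playbook step can look like, a minimal sketch assuming a hypothetical scoring model and playbook, not any particular SOAR product:

```python
# Sketch: a SOAR-style playbook where an ML risk score adjusts processing.
# The score_incident model and the step names are hypothetical.

def phishing_playbook(incident: dict, score_incident) -> list[str]:
    steps = ["enrich indicators", "pull email headers"]  # always automated
    risk = score_incident(incident)                      # ML-adjusted decision
    if risk > 0.8:
        steps += ["isolate mailbox", "page on-call analyst"]
    elif risk > 0.4:
        steps += ["open ticket for analyst review"]
    else:
        steps += ["auto-close with audit log entry"]
    return steps
```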
00:23:28
Speaker
I'm saying this as I'm implementing a SOAR right now at work, so it's fresh. But really, I'd have to ask you: what security processes do you foresee being completed or, you know, going the way of complete automation, versus what will remain manual, right?
00:23:47
Speaker
What will always have to have human fingers on keyboards? And what's that limit for us? What's the cutting edge of this process automation adventure that everyone seems to
The Future of AI in Autonomous Security Processes
00:23:59
Speaker
go on? Because we can't seem to make it so efficient that we don't have to hire analysts anymore.
00:24:04
Speaker
So what is that line?
00:24:07
Speaker
Yeah, and I think that, again, that goes back to our decision-making matrix, the framework I talked about earlier: building up trust in AI, and understanding how the evolving capabilities of AI can lead to higher levels of trust, so that you can entrust AI systems to make more and more decisions for you.
00:24:34
Speaker
In the short term, it's going to be things that don't have a significant downside risk to them. So threat hunting,
00:24:46
Speaker
investigations, first-line incident response, where the AI systems can suggest that this might be an incident that needs additional scrutiny, but they're not really making any decisions on their own.
00:25:08
Speaker
And then as these AI systems get better and better, we can start entrusting more and more of our decisions to them. And ultimately, will we get to the point where we might say that AI systems can automatically adjust our security policies, or can decide whether we're going to implement new controls or retire old controls?
00:25:34
Speaker
So those strategic types of decisions, that's still a long way off, but never say never.
Kate Wood's Career Lessons and Mentorship
00:25:42
Speaker
I think the capabilities of LLMs kind of took a lot of people by surprise.
00:25:48
Speaker
And the vendors keep promising more and better things. I'm a little bit skeptical, but keeping an open mind, it could happen. Yes, I mean, for all the feature sets, the hard reality is that the legal system and liability only recognize humans. No one's going to hold an AI system liable for a bad strategic decision.
00:26:12
Speaker
Yeah, so we'll be sure to link to the research around that framework in the show notes. I think that's really valuable for the listeners. Last question here, Kate, is one that we often ask our guests: when you look back on your career and how you've gotten to where you are, do you have any advice for the up-and-coming generation?
00:26:40
Speaker
I do. So when I've mentored people in the past, I always like to say, take a look at everything that I've done and don't do any of that. Because I've made so many mistakes throughout my career.
00:26:55
Speaker
And I'll explain what I mean by that. So I think there are really three big mistakes that I made that other people should avoid making.
00:27:05
Speaker
The first one was really not being authentic at work. And that might seem pretty obvious coming from a trans person, because I spent so many years pretending to be somebody that I never felt inside.
00:27:22
Speaker
And I understand that's not necessarily relatable to all of your listeners. But there are lots of other reasons why people hide their authenticity at work. Neurodivergent people, and I'm blessed to be one of them, often mask because we don't necessarily understand all the nuances of social interaction. So we wear all these masks all the time.
00:27:45
Speaker
Code switching is another strategy that people use to try to fit in. And we all have to be safe. We have to feel safe, and we all have to feel like we can work and put a roof over our heads and feed our families. So we have to make sacrifices at some point. But when we are constantly hiding ourselves, it's exhausting. And it really takes away from us being fully present at work.
00:28:16
Speaker
So I really encourage people to embrace their authenticity. People might think you're weird, but what's worse than being thought of as weird is being thought of as fake.
00:28:29
Speaker
And the way to avoid that is being authentic. So that's the first thing I suggest to people. The other thing that I think is really important is...
00:28:39
Speaker
not pursuing work that you love, because that's, again, something that I failed to do in my career. Kind of midway through my career, I was working for a company where there was really nowhere to go up except to move into management.
00:28:58
Speaker
And I didn't necessarily want to move into management, but I felt like I had to in order to continue advancing my career. And so I did. And it was horrible.
00:29:09
Speaker
Instead of solving fun security problems, which is what I love doing, I was sitting in meetings all day. I was working on metrics. I was filling out reports.
00:29:20
Speaker
And it really drained me. I think you had, I believe it was Ben Howard, on the show a little while ago, and he talked about the stress and the mental health side of being in security.
00:29:34
Speaker
And I think a really big part of it is, if you pursue work just for a bigger paycheck, for a bigger bonus, for a nicer title,
00:29:46
Speaker
that's not necessarily going to lead to happiness. If you're really lucky, it will. But you really need to understand what's going to make you happy and pursue that work.
00:29:58
Speaker
And in security, we're really lucky that there are so many different types of jobs that we can pick and choose from to find what makes us happy.
00:30:10
Speaker
And so I really encourage people to do that. Understand what brings you joy, and pursue that. And then the final piece of advice, again, what I didn't do: I didn't find a mentor.
00:30:25
Speaker
And I know you've both talked a lot about mentorships and mentoring on this show. And I think that's really important. It's something that I failed to do. And I think it's something that a lot of people, especially neurodivergent people, fail to do, because we don't really know how to approach
00:30:44
Speaker
other people and say, hey, can you please help me? Can you please mentor me in my career? But it's so important. And you need to find opportunities to seek those people out.
00:31:00
Speaker
Brilliant. Well, Kate, thank you so much for taking the time out of your evening to sit with us and to share your experience and your insight. And we really appreciate it.
00:31:12
Speaker
Yeah, you're awesome. Thank you very much. Thank you so much, George and George. Thanks so much for
Conclusion and Listener Engagement
00:31:17
Speaker
inviting me. Appreciate it. And that's our show. If you enjoyed this episode, the best thing you can do for us is to share the program on social media and tag us on LinkedIn at Bare Knuckles and Brass Tacks.
00:31:30
Speaker
If you want to go above and beyond, leave us five stars on Spotify or leave us a review on Apple Podcasts. It helps others find the show. You can also support us by becoming a sustaining member. You can send us a one-time gift or sign up as a member to provide ongoing support.
00:31:46
Speaker
Memberships start for as little as $1 per month. So really, for less than you'd pay for one cup of coffee per month, you can support the show using the link in the show notes.
00:31:58
Speaker
It covers our hosting fees, helps us make cool events and swag, and it lets us know that what we're doing is of value to you. We hope we can count on your support. We'll catch you next week.
00:32:09
Speaker
New episodes of Bare Knuckles and Brass Tacks drop every Monday. Until then, stay real.