
On Episode 156 of the Silver Bullet Security Podcast, BIML’s Gary McGraw hosts Phil Venables.  Phil talks about the evolution of the CISO role from running an engineering team in the '90s back into running an engineering team in the mid-2020s, Agentic AI and tools using tools, the rise of machine learning security, when we might see an AI BSIMM, emergent computation and security control, and what role humans can play in AI rollout.

Transcript

Introduction to the Podcast and Guests

00:00:10
Speaker
This is a Silver Bullet Security Podcast with BIML. I'm your host, Gary McGraw, CEO of the Berryville Institute of Machine Learning and author of Software Security. This podcast series is sponsored by BIML, a nonprofit science and technology organization whose research focuses on machine learning security.
00:00:28
Speaker
For more, see berryvilleiml.com slash podcast. This is the 156th in a series of interviews with security gurus, and I'm pleased to have with me today my old friend, Phil Venables. Hi, Phil.
00:00:42
Speaker
Hey, good to be here. Phil Venables is a venture partner at Ballistic Ventures and a senior advisor at Google Cloud, where he previously served as the inaugural CISO.

Phil Venables' Career and Cybersecurity Investment Focus

00:00:53
Speaker
Over a distinguished 30-year career, Phil has held foundational leadership roles, including 17 years as the first CISO of Goldman Sachs and subsequent positions as their CRO, Chief Operational Risk Officer, and a director on the board. A globally recognized authority on information security and risk, Phil co-founded the Center for Internet Security, CIS, and has advised world leaders as a member of the President's Council of Advisors on Science and Technology.
00:01:22
Speaker
His current work focuses on early stage cybersecurity investments and the architectural intersection of AI, resilience, and business modernization. He holds an MSc in computation from Oxford University and a BSc in computer science from the University of York.
00:01:39
Speaker
Thanks for joining us today. Yeah, it's a pleasure.

The Evolving Role of CISOs

00:01:43
Speaker
Phil, you've had one of the longest-running tenures at the top of the security field, starting out as CISO at Goldman Sachs, I think when you were in your mid-twenties, through your path-defining role as architect of systemic resilience at Google Cloud at age... well, never mind.
00:02:00
Speaker
How has being a CISO itself evolved, and what have we learned about trust and technology? Does the stuff we wrote down in the CISO report still hold up? It's kind of interesting, because I think to some degree the CISO role has come full circle. When I, if I'm being honest, got dragged into being a CISO many, many years ago, my background was software engineering, and I just happened to stumble into building secure software, building various things, and then after that I stayed in the security team.
00:02:34
Speaker
And inevitably at that stage, and this was in the 1990s, there were some products you could buy, but a lot of the stuff you had to build yourself. And even if you didn't build it, the stuff you bought needed so much wiring together that pretty much the whole security team was essentially an engineering team.
00:02:59
Speaker
And then as things matured, things became more regulated. There were things like Sarbanes-Oxley and all of these other regulations in the 2000s, and security teams kind of morphed into being more of a risk and compliance team. And then came the presence of nation-state threats in the mid-to-late 2000s and beyond, along with criminal activity.

Integration of Compliance, Risk, and Security Engineering

00:03:23
Speaker
Teams evolved into being kind of government, national-security-oriented teams, and you got these new waves of leaders coming out of security services and law enforcement.
00:03:33
Speaker
And then in the past few years, though, we've all learned that one of the right ways to do security is to build it into the platforms and the engineering frameworks, and to provide solutions rather than just throw policies at people. While teams today are this mix of compliance, risk, security engineering, operations, threat, the whole thing, the character of some of the best security teams out there, even in Fortune 50 major businesses, looks and feels more like the engineering teams of the past than like teams that are primarily risk and compliance, even though they still do risk and compliance. So I think it's come full circle.
00:04:18
Speaker
Not all of the best security leaders have engineering backgrounds, but many do. And even the ones that don't now take a very engineering, solutions-oriented approach: make the secure path the easier path by delivering things. And in that respect, it's come full circle. It's kind of fascinating.
00:04:40
Speaker
Oh, that makes a lot of sense, but I think we're in for a world of change.

AI's Impact on Software Maintenance and Technical Debt

00:04:44
Speaker
We'll talk about that. In 2008, which was a long time ago, you and I wrote about developing an internal market inside an organization in order to manage the hidden costs of software maintenance, or let's just call it technical debt.
00:04:59
Speaker
Now in 2026, we're seeing AI development move at a velocity that a traditional TCO model might have trouble tracking. When an agent can spin up, say, thousands of microservices in an afternoon, how do we apply that sort of economic rigor? Are we automating technical-debt creation? How do we handle that? Well, in many organizations I've seen, naively adopting models to generate software is certainly a big factory for technical debt and architectural debt and all sorts of other debt.
00:05:40
Speaker
However, as they say, the future is already here; it's just unevenly distributed. You see some organizations, and it's not just the big tech companies or the foundation labs, it's a number of other companies, that have quickly adapted to make use of agent-driven software development, have adapted it to make sure it's producing things according to their technical standards, designs, and coding practices, and are making substantial progress, actually using it to refactor existing code bases and pay down technical debt. So there's currently a world of haves and have-nots, and I think ultimately everybody gets to the haves.
00:06:21
Speaker
It's just a question of time. But one of the things that is interesting is, again, we're talking about a world where there's going to be a thousand times, maybe even more, software than we've had before.

Establishing Robust Software Production Pipelines

00:06:35
Speaker
And that's not necessarily a bad thing, but it does point to the fact that organizations have got to have that basis of software control: control of their software production pipelines and testing pipelines. There was an interesting Google developer operations research report a few months ago about this kind of impact. And the conclusion is totally unsurprising, but it's still a useful conclusion anyway: if you sprinkle AI onto a chaotic software development environment, you're going to get
00:07:08
Speaker
chaos amplified. If you sprinkle it into a well-maintained software development pipeline, you're going to get productivity amplified. And I think a lot of organizations are just realizing now that their underinvestment in that base level of software control
00:07:24
Speaker
is hampering their ability to get advantage out of AI without introducing chaos. And so there's a lot of talk in the industry about using AI to manage AI, but a lot of what we need to do with AI is just implement the foundational controls, the baseline controls, the reliability and software management practices organizations should have adopted years ago. You've really got to do that now to get the best out of AI, or at least to stop AI introducing so much chaos.
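Those foundational controls can be made concrete with a minimal sketch of a merge gate through which AI-generated code would flow like any other change. The check names and change fields here are hypothetical illustrations, not any particular CI system's API:

```python
def gate(change: dict) -> list[str]:
    """Return the reasons a change may not merge; an empty list means allowed.

    A real pipeline would enforce far more (builds, scans, provenance
    attestations); these three checks are just illustrative assumptions.
    """
    failures = []
    if not change.get("tests_passed"):
        failures.append("tests must pass")
    if not change.get("reviewed_by"):
        failures.append("needs an approving reviewer (human or independent agent)")
    if change.get("provenance") not in {"human", "agent"}:
        failures.append("unknown provenance: who or what wrote this?")
    return failures
```

The point of the sketch is that a change only merges when the returned list is empty, and agent-written code passes through exactly the same gate as human-written code; that uniformity is the "well-maintained pipeline" precondition.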
00:07:52
Speaker
Yeah, I agree with that, but I still wonder where, say, software architects come from, which we'll get to in a minute. We're moving away from the problem of, say, developers with compilers being inside of our network, towards this weird Hofstadterian strange loop of agent platforms,
00:08:11
Speaker
which are tools that autonomously build, test, and deploy their own specialized tools. When the tool user and the tool become part of the same recursive process, where does the security checkpoint live?

Human Involvement in AI-Driven Software Processes

00:08:26
Speaker
Can humans stay in the loop when the loop is crashing over itself so quickly? Yeah, definitely. This whole space at the moment requires you to keep two conflicting ideas in your head at the same time. We're definitely in a world where the production of code from AI requires code review,
00:08:51
Speaker
but at the same time there's so much code that existing code review practices just cannot scale and could never hope to scale. So you take a step back from those two conflicting things. There probably is always going to be some absolutely centrally critical code that will always need human eyes on it. What percentage of an organization's code that is, who knows, but you can imagine the core code for cryptographic signing of trillion-dollar payments in a bank is probably going to need human review constantly, as would the ability to apply root signing keys to all of your software. So there are elements that are always going to need that. For the rest of it, I think agents verifying the output of agents, as long as there's sufficient independence, could actually have a lot of utility. So there's still a lot to be done. The other thing, and we're going to start dating ourselves pretty quickly here, is that this reminds me of the early days of the internet, when we had
00:09:57
Speaker
web servers, web browsers, firewalls, databases, but nobody had really got consistency in how all this stuff hangs together. And the real gotchas were not in the components; they were in the arrangements and design patterns of how you built it all. And I think we're at the same stage now with agents and AI in general: there are so many different potential design patterns for how you do this that nobody's really figured it out. So again, we talk about agents, but we should also be talking about skills. There are all of these markdown files full of skills that agents use that need management and checking. And then there's
00:10:35
Speaker
all of the RAG input to the models for the agents to do things. And then there's all of the context memory that's built up and needs managing and pruning occasionally. So we've got this loose-flying collection of agents, skills, memories, models, and those act as a complex system.
00:10:53
Speaker
And we've not quite nailed down the design pattern for managing all of that. At the moment, we just say things like, we should do agent security. And everybody goes, yes, we should do agent security. But then nobody really knows what that means. You have to drop down a level of abstraction and say, okay, this means we're going to have to manage the integrity of skills markdowns. We're going to have to manage the memory context. We're going to have to figure out local and global memory. We're going to figure out how the agents interact with each other. So there's just so

Machine Learning Security and Data Governance

00:11:21
Speaker
much still to do. And that's where all the devil is going to be in those details.
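One concrete shape that "managing the integrity of skills markdowns" could take is pinning each skill file to a reviewed hash, so an agent only loads what a human (or an independent vetting process) has approved. A minimal sketch, with hypothetical file names:

```python
import hashlib
from pathlib import Path


def fingerprint(path: Path) -> str:
    """SHA-256 of a skill file's exact bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def vet_skills(skills_dir: Path, manifest: dict) -> list:
    """Return the skill files an agent may load: those present in the
    reviewed manifest AND byte-identical to the reviewed version.
    Anything unknown or modified is silently excluded."""
    approved = []
    for md in sorted(skills_dir.glob("*.md")):
        if manifest.get(md.name) == fingerprint(md):
            approved.append(md.name)
    return approved
```

An injected or tampered skill file then simply never reaches the agent's context; the manifest itself becomes the artifact that goes through code review.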
00:11:26
Speaker
Yep. As we watch the rise of machine learning security, the industry seems over-focused on AI for security, using AI to do security, to the detriment of security of AI. And I wonder how you yourself define machine learning security as an engineering discipline.
00:11:50
Speaker
Do we treat the model as a black box? Do we get inside it? Can we change its architecture? Can we control training sets, or are we done with that? Well, again, this is another one of those spaces where
00:12:04
Speaker
the conversation about what security of AI is quickly over-fixates on micro-exploits and threats. So we quickly get into talking about, and these are good things to talk about and manage, model poisoning, tool injection; all of those are important risks to manage.
00:12:24
Speaker
But we're going to have thousands of those attack techniques and threat vectors over many years. And as we've learned from our past in software security and other security, at some point, and this is the opposite of what I just said before, in this space you've got to abstract up and figure out what classes of control can mitigate all of those things. Certainly from my prior experience at Google, when you step back and look at what it takes to control and govern the security and safety of AI, it really comes down to good old-fashioned software lifecycle risk. At the end of the day, this is a bunch of software, so you've got to manage the integrity of the software supply chain, the integrity of the software, how it hangs together, all the traditional good stuff we know we need to do and,
00:13:11
Speaker
quite frankly, many organizations don't currently do a great job of. Secondly, it becomes a big data governance problem: the training data, the fine-tuning data, the model weights, the parameters, the test data that constantly keeps a model in shape.
00:13:27
Speaker
Again, that's another space where very few organizations have well-developed data governance and data lineage management practices. There's a third pillar, which is managing the operational risk: the guardrails, the circuit breakers, the checks that put some deterministic controls on what are non-deterministic things.
00:13:45
Speaker
And then you've got this base layer of identity and access. What are the applications and the agents permitted to do? What privileges do they have? How do we think about their identity? And then, to your point, there's the broader problem of testing and validation. In our traditional world of software engineering, you test when you've made a change; you don't really test constantly in production. You test on change. Whereas with AI systems, because of drift and other factors, you just have to test continuously. So if you do those five things, software lifecycle, data governance, operational risk, identity and access, and testing and validation, the things that emanate from them are the things that mitigate all of the micro issues, and I think focusing on that is the right thing.
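The test-on-change versus test-continuously distinction can be sketched as a scheduled golden-set evaluation that runs against the production model, not just at deploy time. The eval examples and threshold below are assumptions for illustration; a real harness would be far larger and versioned:

```python
from typing import Callable

# A tiny "golden" evaluation set (hypothetical examples); in practice
# this would be versioned, stratified, and much larger.
GOLDEN_EVALS = [
    ("2 + 2 =", "4"),
    ("capital of France?", "Paris"),
]
DRIFT_THRESHOLD = 0.9  # assumed minimum acceptable pass rate


def eval_pass_rate(model: Callable[[str], str]) -> float:
    """Fraction of golden evals whose expected answer appears in the output."""
    passed = sum(1 for prompt, expected in GOLDEN_EVALS
                 if expected in model(prompt))
    return passed / len(GOLDEN_EVALS)


def check_for_drift(model: Callable[[str], str]) -> bool:
    """Run on a schedule in production, not only on deploy.
    Returns True when the model has drifted below the threshold."""
    return eval_pass_rate(model) < DRIFT_THRESHOLD
```

The same loop that traditional software runs once per change becomes, for a model, a standing production monitor whose alarm feeds the operational-risk circuit breakers.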
00:14:31
Speaker
So when we built the BSIMM in 2009, which you really helped with, we were careful to define it as a descriptive model, reflecting what the world's best security teams were actually doing rather than telling them what they should do. I have a couple of questions around that. First, who is actually doing machine learning security? Are there groups for that? Is there an AI equivalent of the SSG? And second, is there enough stable data from people who are actually getting this done effectively today to build an AI BSIMM, or is it too early?
00:15:08
Speaker
First things first. I think there are a number of pretty good organizations. At Google, we created this thing called the Secure AI Framework, which went from principles to actually some decent level of detail. And then we handed that over to an open foundation to create the Coalition for Secure AI.
00:15:27
Speaker
There are other things happening in MLCommons at a more detailed level. There's the Frontier Model Forum, where the foundation labs collaborate on controls around protecting the high-end models.
00:15:40
Speaker
And then there are some other groups. There's a company called the AI Underwriting Company that has been doing work with a bunch of us to create a standard called AIUC-1, a specification of controls that audit firms can use to audit, in a SOC 2 framework, what you're doing. And there are many other things. The Cloud Security Alliance has done some pretty amazing work. And there's another thing called AARM.dev, D-E-V, which is like an agent-action open standard. So there are loads of things that have gone from the abstract into specific controls.
00:16:20
Speaker
And to your last question, there are some organizations that are doing well at implementing that, and quite a lot of tech companies and cloud providers are baking these things into the platforms they deliver. But there's still a ways to go. And I think we're probably on the cusp of being able to do an AI BSIMM and get some early indication of which companies are doing well and why.

Managing AI Security in Organizations

00:16:45
Speaker
So I think it's probably something where BIML could be a useful venue for doing that.
00:16:53
Speaker
Maybe. I was really thinking less about these umbrella organizations and more about, you know, is there an AI risk control group inside the CISO organization in an enterprise? I know you're thinking much bigger, but let's just take a bank, for example.
00:17:13
Speaker
Is there one place where AI security lives, or is it all over the place? Well, the banks are a good example, in my experience, of companies that are doing it well, because they've got this established structure for doing it. Because of what they've done on algorithms and the use of machine learning in the past, they have a security team that's well aware of this. They have a compliance team that's quite sophisticated.
00:17:40
Speaker
Their lawyers are quite sophisticated in these things, and they have model risk validation teams that are very well equipped to validate and test models, even the later, more generative models. So I think they're in good shape, but it doesn't live in one place. It's a collection of teams that are part of the process of validating and vetting. Now, when you get outside of banks, it's rare to have that degree of sophistication and structure.
00:18:08
Speaker
And to the point of your question, the CISOs are having to pick this up. So CISOs are becoming more like chief digital risk officers in a world of AI, because the board and the rest of the C-suite is turning to the CISO. I've seen this situation a few times: the CISO will be sat in a board meeting, presenting, here's my AI-for-security work and my security-for-AI work. And the board goes, that's great.
00:18:33
Speaker
What about safety, bias, compliance, privacy? And the CISO goes, why me and not the legal people or the compliance people? As long as you hold the bag. Hold all the bags. Yeah, and so it's a great opportunity for those CISOs to actually take on that broader business risk mantle, because they're being seen as the place where that
00:18:56
Speaker
central function should evolve. Some CISOs are grasping that and doing it amazingly; others are kind of pushing back on it. And so it might be one of those moments where CISOs need to embrace their organization's desire to have them take more ownership over the entire risk set. Right, right, right.
00:19:16
Speaker
Yeah, I mean, you've always been way ahead of the curve, so I think the curve is just catching up to you on that one. In light of agentic AI, which we've mostly been talking about, do we need to reassert the essential nature of the architect, the software architect, information architect, security architect, and so on, in building systems?

Role of Architects in AI Systems Management

00:19:39
Speaker
Because in my view, architects can envision the right sorts of harnesses for AI and play the role of kind of whiteboard overseer. And in some sense, if you don't have that architectural rigor already in place, it's difficult to handle the architectural requirements of agentic AI properly.
00:20:03
Speaker
And so I'm concerned about enterprise architects coming from, say, sales: a sales guy who says, I want to build this thing this way, and the AI does it, but anybody from technology who looked at the architecture would go, oh, come on, we can't do that.
00:20:19
Speaker
And you certainly see some of that. So organizations that have appropriately encouraged vibe coding in their user base need a process by which some of the systems that emerge from that get re-architected to be production-ready. Many of these vibe-coded things, of course, just stay as end-user applications, just like
00:20:44
Speaker
advanced macros in Excel or scripts in a Databricks data warehouse. Just not on the trading floor, guys. That's right. But even in those environments historically, where you did get user-developed apps, the ones that emerged to be critical got refactored into real production applications. The interesting part of that question, though, is that I actually don't know how this should turn out. When you look at some of these super-developed software engineers who are basically just not writing code anymore, they're orchestrating agents to write code.
00:21:21
Speaker
They've gone from being software engineers to being software architects, almost by definition. So the real question is not so much whether we have enough software architects, but whether we're equipping our software engineers to actually understand and drive architecture, and whether the models themselves have enough understanding of architecture that they're not just generating code, they're generating architecturally good code. And I think that's a very good open question.
00:21:52
Speaker
Yeah, I think so too. I think that's actually very hopeful for people who are software professionals. We never really did understand where architects come from, but they're going to be even more important, and we need lots more of them now.
00:22:06
Speaker
So you've recently written about the second order risks of trillions of interacting agents.

AI Systems and Biological Immune Systems Parallels

00:22:12
Speaker
This feels like the moment to revisit Stephanie Forrest's DARPA work on computer immunology.
00:22:19
Speaker
In the world of agentic AI swarms, where behavior can be emergent rather than designed, are we moving towards a security model that looks less like a set of well-defined rules and policies, which we've been talking about, and more like a biological immune system?
00:22:35
Speaker
And how do we build self-tolerance into these agents so that we don't have an autoimmune reaction inside our own systems? Yeah, I do think, and I've long thought, that an immune system analogy for cyber is probably one of the better analogies.
00:22:55
Speaker
And certainly, when I was at Google Cloud, we used to talk about cloud being a digital immune system, in the sense that if you can detect an attack early enough in one place, and then analyze and propagate controls to defend against it faster than the attackers can get to other parts of the environment, then you have, in effect, a loop that creates this immune system effect. I think in a world of trillions of agents, either broadly or in your own environment,
00:23:26
Speaker
having the ability to constantly update and refine that in response to detected problems in part of the environment makes it a necessity, not just a nice-to-have. It's sort of a loop, kind of, if you think about it. No, exactly. But the real thing I worry about, and it's the reason I wrote about these second-order effects, is that in every wave of technology change, we get fixated on the risks in front of us, which is all you really can do. But the really interesting risks come as second-order effects. Like, when smartphones happened in 2007 and 2008, we were worried about all sorts of risks. We weren't worried about the risks of
00:24:08
Speaker
social media platforms, because they didn't really come until they were enabled by the phone. We weren't really worried about all sorts of things that smartphones gave us. And the same thing is going to happen here with AI. So again, in that world of billions of agents, all operating independently with different models, with different reward functions, competing with each other and interacting with each other. I mean, that's like
00:24:32
Speaker
an evolutionary soup that is going to cause all sorts of weird emergent properties that I don't think we're really prepared to monitor or even know how to control. And by definition, you can't control a complex system; you can just nudge it in various ways. I think there are two metaphors that are helpful. The immune system is the one that I know you have talked about as well, of course.
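The detect-analyze-propagate loop described earlier as a digital immune system can be sketched in a few lines: a detection in one zone pushes a blocking control to every zone, so the defense spreads faster than the attacker. Zone and indicator names here are hypothetical:

```python
class ImmuneSystem:
    """Sketch of the 'digital immune system' loop: a detection anywhere
    propagates a control everywhere, including to zones the attacker
    has not reached yet."""

    def __init__(self, zones):
        self.blocked = {zone: set() for zone in zones}

    def detect(self, zone, indicator):
        # An attack indicator seen in one zone triggers global propagation.
        self.propagate(indicator)

    def propagate(self, indicator):
        # Push the blocking control to every zone at once.
        for zone in self.blocked:
            self.blocked[zone].add(indicator)

    def is_blocked(self, zone, indicator):
        return indicator in self.blocked[zone]
```

The whole game is the race in `propagate`: the loop only works as an immune system if controls reach the rest of the environment before the attacker does.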
00:24:55
Speaker
And I think another one is the colony metaphor: ants versus the whole colony. What's the colony doing? I don't need to talk about individual ants or paint little numbers on each ant.
00:25:07
Speaker
And that's the future. So we have to think about that starting pretty soon. No, that's right. I mean, I intuitively think that AGI will come as an emergent property of agents, and as you said, ants versus anthills is going to be a useful way to think about this.

Potential Risks of Unpredictable AI Agents

00:25:29
Speaker
And it's just a question of when. Again, going back to our financial analogy, when are we going to see the first agentic flash crash: a trillion agents descending on some business's agent because it mistakenly offered up a cheap deal?
00:25:46
Speaker
I think we're like 12 months away from something like that happening, if not sooner. Don't say it! You're too good of a prognosticator to say that. This has been really fun. So we've talked about what's right around the bend: agents and swarms operating at machine speed.
00:26:08
Speaker
I want to close on what I think is an optimistic note about the three I's, intuition, inspiration, and insight.

Human Intuition in AI-Driven Security

00:26:17
Speaker
A model can predict the next token, but it doesn't really have a hunch. It doesn't sometimes feel that a design is wrong, and it doesn't experience some sort of eureka moment where it connects two unconnected domains.
00:26:29
Speaker
So as we get the busy work of security taken care of by automation through AI, are human traits like intuition, inspiration, and insight what the ultimate premium is going to be for a high-end professional in our field? Yeah, I think so, and a corollary to that is taste.
00:26:52
Speaker
Design taste, which is sort of how you direct things to happen. And the question I don't really know the answer to... I think there's going to be a premium on security people who know how to direct a set of technologies toward a goal, by still having the imagination for what the attacks are and what the potential offensive and defensive tactics are, and harnessing AI to do that.
00:27:24
Speaker
But then I also think that, by definition, the ability to create boundless scenarios and possibilities and then analyze and score them, can that in effect stimulate an even better intuition?
00:27:42
Speaker
And I just don't know the answer to that. The only thing I know about the use of AI is that the ability to have a bunch of software is not going to be a competitive edge anymore. The ability to have managed perception, taste, to understand what problem you're solving, and the same thing applies in security, is going to be the premium. And if anything, back to that point about the CISO becoming the chief digital risk officer, I think it compels security people
00:28:13
Speaker
to do what they always should have been doing, which is to think broadly about their business risk and business resilience, or their mission risk if you're in a different kind of environment, and not just focus on security. Because I think they're going to be called upon to be much more multidisciplinary to remain useful, and actually to amplify what they can do on security.
00:28:35
Speaker
Fantastic. Thanks so much for your insights today. It's been awesome. Yeah, always a pleasure to talk. This has been the Silver Bullet Security Podcast with BIML. Silver Bullet is sponsored by the Berryville Institute of Machine Learning, a nonprofit science and technology organization whose research focuses on machine learning security.
00:28:53
Speaker
You can find a permanent archive of all of our episodes dating back to 2006 at slash technology slash silver bullet podcast. Show links, notes, and an online discussion can be found on the Silver Bullet webpage at berryvilleiml.com slash podcast.
00:29:11
Speaker
This is Gary McGraw.