Become a Creator today!Start creating today - Share your story with the world!
Start for free
00:00:00
00:00:01
Continuous Red Teaming in the AI Era image

Continuous Red Teaming in the AI Era

S3 E34 · Bare Knuckles and Brass Tacks
Avatar
0 Playsin 7 hours

This week, Ads Dawson, Staff AI Security Researcher at Dreadnode, joins the show to talk all things AI Red Teaming!

George K and George A talk to Ads about:

  • The reality of securing #AI model development pipelines
  • Why cross-functional expertise is critical when securing AI systems
  • How to approach continuous red teaming for AI applications (hint: annual pen tests won't cut it anymore)
  • Practical advice for #cybersecurity pros looking to skill up in AI security

Whether you're a CISO trying to navigate securing AI implementations or an infosec professional looking to expand your skill set, this conversation is all signal.

Course mentioned: https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-DS-03+V1

————

👊⚡️BECOME A SHOW SUPPORTER

https://ko-fi.com/bareknucklesbrasstacks

For as little as $1 a month, you can support the show and get exclusive member benefits, or send a one-time gift!

Your contribution covers our hosting fees, helps us make cool events and swag, and it lets us know that what we're doing is of value to you.

We appreciate you!

Recommended
Transcript

Collaboration and AI Model Flaws

00:00:00
Speaker
If you've got machine learning engineers, you've got security operations engineers, and you've got like, you know, app sec engineers. And I think like the almost it sounds a bit corny, but the recipe for success, like internally within a business is getting these people like in a room and understanding the individual components of like what you're actually doing and the scaffolding that goes into this.
00:00:24
Speaker
And then each kind of player to that is able to then, you know, provide their input. And I think from there, that's probably the best like holistic approach you could probably get.
00:00:35
Speaker
I hope that makes sense.

Understanding AI Model Vulnerabilities

00:00:36
Speaker
But like, that's how a business would be able to like understand these like fundamental flaws of models and see how that's like exaggerated when you throw it into an API or something like that.
00:00:53
Speaker
All right, it's Bare Knuckles and Brass Tacks, and we're at light speed because George is in an airport and he's trying to board.

Introducing Ads Dawson

00:00:59
Speaker
Today's guest is Ads Dawson, an AI threat researcher of the highest order and a lead author on the OWASP top 10 for LLMs.
00:01:07
Speaker
He's built tools, burp prints, and a whole bunch of other things, and he's at Dreadnode, and we are super psyched to talk to him today. Ads is absolutely incredible. He's the real deal. I've been thirsting for finding someone like this for so long because we deal with so much nonsense in this space.
00:01:23
Speaker
He speaks with clarity. He speaks with expertise. He is the guy that I hope our listeners really take attention to. And honestly, Ads was a great find, George. I didn love that you've maintained this friendship in the last couple of years. You yourself being an AI guy too.
00:01:37
Speaker
I really love that Ads came up here and gave us his expertise. And I think this show has a lot of value for our listeners.

AI Red Teaming and Security Insights

00:01:43
Speaker
let's Get into it. Ads Dawson, welcome to the show. Thanks for you guys for having me. It's a pleasure. um Yeah, I've been looking super, super pumped for, so thank you for having me.
00:01:55
Speaker
Absolutely. Yeah, we met two DEFCONs ago. um i don't even remember how. I think we ran into each other on LinkedIn and then we ran into each other at Caesars RIP.
00:02:07
Speaker
But ah yeah, been following ever since and really good to have you here. um We have made a point of talking to AI Red Teamers on this show because

Dreadnode's Security Approaches

00:02:17
Speaker
trying to get the conversation a little bit beyond the casual prompt injection, blah, blah, blah, that sort of goes out and about.
00:02:25
Speaker
Also, not least because AI will get further embedded into business processes and systems and security teams, Need to know what's up. So with that, Ads, why don't you set the scene for us? ah You have been in AI red teaming longer than most people have even known that it's a discipline.
00:02:44
Speaker
And yeah, give us ah an overview of where your research is, what you're interested in, and kind of a little bit beyond what most people know on LinkedIn about Gen AI applications. But even beyond that, as an example,
00:02:58
Speaker
We talked to Adrian Wood, whose work is mostly about model poisoning and compromising the infrastructure of AI. But yeah, tell us where you're at, what Dreadnode's up to, what you're up to, and and we'll go from there.
00:03:10
Speaker
Yeah, sounds great. Thanks. um Yeah, so my name's Ad. I'm an AI security researcher. I currently work at Dreadnode where our mission is improving, um defining, and i guess demonstrating um ah like the yeah the use of agents across adversarial ah capabilities given in like offensive security scenarios.
00:03:36
Speaker
So I'm like, I have the most awesome job in the world where I kind of have two sides of the coin where um a lot of it a lot of it is very research based, which is also just a gift in itself. But I spend a lot of time like AI red teaming. So whether actually that's like, you know, red teaming actual models, classifiers, things like that.
00:03:59
Speaker
Using that and kind of baking that into our products, which we call Spyglass, which is like our a redum red team tooling. um And also kind of on the other side of the coin, which I kind of mentioned beforehand is um building and developing a harness where we can deploy agents given like an offensive security setting where they have to, like, let's take a very simple example of something like capture the flag.

AI Security Culture and Challenges

00:04:24
Speaker
um and kind of pushing models like to the full extent and full capability. um So kind of using AI and using AI and abusing AI, um is i guess, is ah a very short summary.
00:04:38
Speaker
um I've worked as a security engineer for a while. I originally started out as a network pen tester. kind of fought my way into web, and became like a bug bounty addict, I guess, lurking at night.
00:04:51
Speaker
um And then I actually used to work for a foundation model provider, which was kind of like my path into AI. um So kind of like secure engineering, application security.
00:05:02
Speaker
um But it was really it was really insightful for me because i kind of got to understand and be a part of like a whole model development lifecycle. So as on top of like your standard SDLC, kind of like the model ah pipeline as well and that whole deployment process, which is, I think, been really beneficial for my insights of understanding the difference between vulnerabilities and models that they inherit themselves.
00:05:29
Speaker
And obviously when you throw them into a an application, which is like everyone's throwing out chat box right now. Right. So yeah. And you raise a good point on the model development pipeline, which is came out of research facilities, right? Universities and then eventually like maybe large companies, but they were R and D departments. They weren't necessarily building consumer products, which makes me think that that we have less maturity around how to secure those pipelines just because the culture wasn't there to but like it is for, you know, developing web applications or. Yeah. Yeah.
00:06:10
Speaker
No, I totally agreed. Aligned with that. I guess there was never a need to do that. um and And then suddenly there

Evolving AI Security Practices

00:06:18
Speaker
was. Yeah, until until like, and you know, ChatGPT launched and then, ah you know, realized that you can easily throw a wrap around GPT and, you know, build your own chatbot. And then,
00:06:31
Speaker
I guess that's how this whole kind of rolled into. And, you know, we we've I guess there was findings of, you know, traditional security operations stuff wasn't being done in a model pipeline, whether it's like for registries or just just the applications themselves. So it's definitely maturing and we're definitely getting there.
00:06:49
Speaker
um I guess it's I'm a big fan of like ah cultivating like, you know, an organic and wealth of knowledge between the two and kind of bridging the gap.
00:07:01
Speaker
Yeah, it's trying to try to improve us. Yeah, absolutely. All right. dad So I'm really happy to have you on because, you know, to me, represent an authenticity that I think is really lacking in the eye space. Like as ah as a security leader, as someone has to run a program with accountability, with budget, with a team at a highly targeted organization, actually multiple at this point,
00:07:24
Speaker
um you know I look at someone like you as like the real deal. First of all, thank you for that. Thank you. thank you it's for a kind i say that with like it's i I talk a lot of shit about a lot of nonsense. I usually slag most people, so please take my phrase. um oh yeah What are your thoughts then? What are your thoughts on, I'd say, all the grifters in the AI space?
00:07:50
Speaker
I find it hard to take seriously most people who talk about AI and LLMs nowadays. Like our friend in Canada, Helen Oakley, she does amazing work on S-bombs and AI bombs.
00:08:01
Speaker
And I see what you've done in your work and what you represent the community. And obviously as a software in dev CISO, have to rely a lot on the OWASP standards and kind of what you contribute to that conversation. so thank you for that. Thanks. But I also see a lot of poser prompt engineers, as George kind of pointed out.
00:08:17
Speaker
And I use that very loosely. to You can call them prompt engineers. Like they only wish they could be like you.

Demand for AI Security Expertise

00:08:24
Speaker
So so my my real question is, how do we how do we kind of divide the shaft from the real deals and where can we find a pipeline of folks who are like you?
00:08:35
Speaker
and And is there a collective, is there a group? Because CISOs like me are thirsting for connecting with people like you to actually help us solve this problem. Because the secure implementation issue and then bringing in AI models, bringing in AI capabilities,
00:08:48
Speaker
Our devs are screaming for it. I spend almost every single week looking at policy review, looking at process review, looking looking at different technologies that are claiming LLM capabilities. So I have to deep dive into a whole bunch of supply chain bullshit.
00:09:02
Speaker
And I would just love to know if there's someone or a group of folks who are experts who are coming together and where can we find these resources as CISOs. Yeah. it's thats First of all, thank you. That was ah was super sweet of you.
00:09:15
Speaker
um I guess if I can speak from my own experience. um So for me, it was really difficult. Like I'm really not kind of like an academic background kind of person.
00:09:26
Speaker
um So for me, like reading like archive papers and stuff at first was like incredibly overwhelming. Like I didn't do well at math at school. um and And I kind of, i I guess I was really lucky because I you know i was working at the foundation model provider, but I was involved in kind of like leaders around red teaming.
00:09:49
Speaker
um And for me, that's one of the reasons why i actually took the move to Dreadnode was actually working for ah my my CEO, who's actually ah like, you know, it's no surprise. He's a role model of mine.
00:10:02
Speaker
um So I think I kind of surrounded myself with people who are in the security world and also exploring the machine learning world. So you mentioned people like Adrian beforehand.
00:10:13
Speaker
um And that's, I guess, what we try to do with the OWASP community. So it's a very large community now. um i think it's... i mean some Having a team of people who are really versatile is really helpful towards like understanding like AI security within a business. So if you've got machine learning engineers, you've got security operations engineers, you've got AppSec engineers. And I think like the...
00:10:42
Speaker
almost it sounds a bit corny, but the recipe for success, like internally within a business is getting these people like in a room and understanding the individual components of like what you're actually doing and the scaffolding that goes into this. Yeah.
00:10:57
Speaker
And then each kind of player to that is able to then, you know, provide their input. And I think from there, that's probably the best, like, holistic approach you could probably get.
00:11:08
Speaker
I hope that makes sense. But, like, that's how a business would be able to, like, understand these, like, fundamental flaws of models and see how that's, like, exaggerated when you throw it into an API or something like that. Mm-hmm.

Cross-functional Collaboration in AI Security

00:11:21
Speaker
Yeah, I have to agree with you. Like I've leaned that like naturally I've leaned into just relying on my team leads and relying on the engineers and just trying to break down silos. So we facilitate conversation because the issue is everyone has their interests and stake at the table.
00:11:35
Speaker
But from just a personal management standpoint, if people aren't talking to one another and trying to constructively solve the problem together, I think when egos come into play, not to break away from the pure tech talk, but George knows, you know, all this comes down to the human element.
00:11:50
Speaker
we have to look at how we manage our teams. And and what you're saying, Ad, is just it really comes down to building a culture of open communication and dialogue where people can share ideas and constructively come the solutions together because no one has, like, the be-all God answer. We're all trying to figure it out as we go, and we're doing our best, right?
00:12:08
Speaker
Yeah. don Thank you you. You put that really well. And ah there's only a handful of people that I know that understand that full spectrum. um So, yeah, you you nailed it.
00:12:20
Speaker
Yeah, that makes sense. We're in new territory, right? Like model developers focused on model development, AppSec engineers focused on CICD pipelines, and now we're jamming it together. And that's been happening for a relatively short amount of time, right? So thinking that anyone's got that expertise. But I also keyed in here, Ads, when you said holistic, it's a big, I don't know, flag that I plant in terms of this cross-functional capability because...
00:12:52
Speaker
As machine learning gets more embedded into a business, machine learning is very good at taking one thing and just doing it orders of magnitude faster. Right. And so when I think about how security teams are architected and organized, it's a long human specialization.
00:13:08
Speaker
Right. We've had here's the GRC team. Here's the SecOps team. Here's, you know. And i feel like that's going to kind of get blown up or need to change. I'm a big fan of NVIDIA's ah cross-discipline red teaming methodology, right, where they've got GRC folks and ah ai ethicists and also their ML engineers working together.
00:13:29
Speaker
ive i feel like that's kind of where we need to go. But it's almost like you read my my questions ahead of time.

Adapting Security Assessments for AI

00:13:35
Speaker
So my question now is, AI is going to get further embedded in the business, whether that's in...
00:13:41
Speaker
revenue line operations like marketing, customer service, whatever. How should security teams think about traditional pen testing, red teaming, you know, doing third party risk assessments on models is not a thing.
00:13:57
Speaker
it It can't be so fucking complicated. That would make it was just nightmare fuel. And I ask this because I want to dispel, I think a myth that, know, CISOs sit at the top of this technical acumen and they're up to date on all the things and they know all the stuff and you can see the look on George's face.
00:14:17
Speaker
No, they got they've got families, they've got responsibilities, they're in budget meetings, they're doing audits and they are asking one another like, hey, how do you do Like they're not sort of the be all end all and they do need that help. So I guess my question is, do you think These teams need to start building out in-house talent. do Is it so specialized that they should look to outsource? Like, I guess I'm asking for the mental model. Like, how should they think about red teaming, not just web apps, network, cloud environments, but now AI applications?
00:14:52
Speaker
Yeah, that's an awesome question. um So I think and like the first first thing, a lot of this is contextual and relevant to the the business that's actually deploying, you know, let's say AI.
00:15:06
Speaker
um So like George will know, like if if you just have like a standard, you know, REST API with no kind of AI functionality, you're used to getting like, you know, an annual pen test, which Benchmarks across like the standard OWASP top 10 is like the thing that you see most commonly. Whereas now we've kind of entered this domain where there are so many domain expertise categories where you have a chatbot and you want to cover those kind of things. right You want to make sure that you can't exfiltrate data or cross-site scripts or you know SSRF.
00:15:41
Speaker
if a model's got like excessive agency into like you know a Jupyter kernel or something like that, which you can like laterally move across an environment. But at the same time, you don't want i you know what kind of happened with like Microsoft A on Twitter, or formerly Twitter, right? um There's also another side of the coin which is introduced where you your your a chatbot starts badmouthing another business or you tell someone to go F themselves and all of a sudden like you've got like you know like kind of like a P1 priority.
00:16:16
Speaker
um And those things are like non-deterministic. um you know like The stochasticity of models is also like another challenge as well. right and Regression is like a big thing that I think about and is something that Dreadnode is thinking about when we're actually building these products that we can give.
00:16:33
Speaker
a certain level of confidence to people. um But and I'm sorry, I kind of digressed. But in in answer to the question, i do think that what what I tried in my previous experience is like a security champion approach.
00:16:45
Speaker
um And when it comes down to like contextual, like this we've been doing things like threat modeling for a long time. And whilst it's not like, you know, ah a silver bullet, I think like threat modeling is a best, is ah is a great way to start and be like, okay, this is like,
00:17:00
Speaker
you know, um the model that we're deploying was trained on this data. So therefore you can almost threat model the type of red teaming you might want to approach when you're doing things like safety.
00:17:11
Speaker
um So like ah my previous roles, um we had a you know flavor of like safety experts who were like maybe super um like skilled at some kind of like elements of like safety harms and things like that, you know, whether it's something like CBRN.
00:17:31
Speaker
But at the same time, then you've got your app set professionals who are trying to mark down elements to exfiltrate data or something like that. And I guess based on that, then you might want to combination of internal people and external. So whether like this and there there are external businesses who like solely focus on red teaming or have like, you know, data sets or are able to generate a lot of synthetic data based on specific domains as

Continuous Red Teaming in AI

00:18:01
Speaker
well. Right. So because a lot of that is not in-house, like it is impossible to be like, if you need to cover a wide spectrum, but like we have all these skills and we have all these people who are like really, really skilled up in these, in these domains.
00:18:16
Speaker
um So I'd i'd say ah a flavor of both, but a lot of it can come just from threat modeling help you understand what you care about. And like each company really should have like its own taxonomy, which, you know, that's one thing I've always tried to build is a taxonomy of like, what do you care about? Like, what is the most important things to us?
00:18:35
Speaker
Then you can start to threat model and kind of work your way down from there. It's the best success that I've seen so far. are Hey, listeners, if you dig the snark, the stories and the big swings we take, we'd appreciate your support. You can now become an official supporter of the show. You can send us a one time gift or sign up as a member to provide ongoing support. Memberships start for as little as one dollar per month. Just follow the link in the show notes.
00:19:05
Speaker
Each membership tier comes with a unique set of benefits, including exclusive discounts to the BKBT swag shop and even advisory services for your team. So really, for less than you'd pay for one cup of coffee per month, you can support the show.
00:19:22
Speaker
It covers our hosting fees, helps us make cool swag, and it lets us know that what we're doing is valuable to you. Many thanks to listener Evan D for his recent pledge of support. We'd love to have yours too.
00:19:35
Speaker
Now, back to the show. are I'm not fan of prognostication, but a prediction here is that the pace of red teaming will have to increase because we are used to protecting systems where, okay, here's the data.
00:19:55
Speaker
We take it in, gets stored here. And it's kind of this static element. And then when you start to add machine learning to a business, it really stretches the limits of that CIA triad, right Because it has to act on the data. It will probably write to the data.
00:20:10
Speaker
And so that, to me, seems like you can't get away with an annual pen test. Like you're going to have to be looking over the shoulder of the model much more frequently because if something starts to change, ah you know, in Q2, like it could get you into a bad situation by q four but you can't wait a whole year to figure that out.
00:20:30
Speaker
Yeah, 100%. You should be red teaming ah you know twenty four seven generating synthetic data twenty four seven um i I would probably predict that there's going to be some kind of shared evaluation, you know call call like calls for shared evaluation soon, because like you're like you're rightly saying, there is no... like there is no perfect set of data sets for like XYZ domain categories, right? um That ah little are the best. So I think if I was to predict, I think something like that will come out because everyone at the moment is just in their own kind of little...
00:21:08
Speaker
and Sorry, not their own little, but you know they're in their own they're in the siloed problem and they're just generating synthetic data and and using that. But yeah, you're absolutely right. like Red teaming like in your actual model development pipelines, just as you would run like SAS scanning on your SDLC, right? like and Continuously.
00:21:27
Speaker
um no One thing actually kind of love about my job... You know, my my ceo my CEO is always challenging me to spend more money on inference, which chat wish I love. Yeah, that that is ah that is a rare problem to have. Yeah, it's super dope. um But yeah, I kind of like just to the point I'm making, it should be um continuous. um And just try and get as much coverage as you can.

Building AI Security Networks

00:21:57
Speaker
Yeah, and like i got I got to say, to go back like to George's point at the start of that question. might I really as CISO only have so much bandwidth and as things get more complicated, I really don't know what the fuck is going on lot of the time. I'm really trying my best.
00:22:15
Speaker
The issue comes down to building a trusted network with people who do have expertise. And again, it comes down to the personnel back there, right? Because as a CISO, you have to go out and network and build trust within the community and within your peers.
00:22:28
Speaker
And then you have to work with suppliers that actually do know what the fuck they're talking about because those are the folks you lean on And I think within your teams, like I really have an approach where I lean on my second line managers because they're experts in what they And then I try to provide them technology enablement with the cutting edge tech because it's my job to go out and build those relationships.
00:22:48
Speaker
So filtering through the bullshit and finding the gems, finding the real people like you, who I can then introduce to the guys who are doing the thing, who are solving the problems while I'm trying to maintain a program.
00:22:59
Speaker
I think that's what's going to end up trying to be like trying to be a modern CISO in this AI-enabled environment. that we are just dealing with just new threats every week, and we don't really have time to to wrap our hands on anything, one in particular, but it's going to come down to your network. And now your network is your net worth, I think has a lot more connotation than you're running security operations.
00:23:21
Speaker
um I think that's something you're going to probably see a lot more. If you're not seeing it already, I'm sure you've got a lot of CISO friends who are constantly going to you being like, hey, dude so like, here's the thing. but How do I deal with this?
00:23:32
Speaker
Yeah. um Yeah, i ah I guess I'm fortunate enough that, you know, I seem to, i I have a few sliding DMs. um and You know, um always, always like to think I'm approachable. So, yeah, I guess, you know, I could be signing my death warrant here, but...
00:23:55
Speaker
um But no, thank you. That's extremely kind of you. And yeah, connections do go a long way, um especially with such a small community until this like matures more.
00:24:07
Speaker
You know, everyone's going to have to rely on this kind of stuff. Make sure you're following the right people um when it can, you know, if it's your RSS feeds for security topics and stuff like that, or whether it's someone that you follow on social media.
00:24:21
Speaker
So that brings me to the actual question is how should security operations folks think about mitigating against AI risk, not only from external threats, but as our developer teams try to implement these capabilities within their own CI CD, or sorry, the SDLC, excuse me.
00:24:36
Speaker
A lot of our dev team colleagues, don they want to use the tools, so but the question of secure, you know open source connected implementation still remain in the air. I've spent the last two years going to almost every conference trying to figure it out. no one has a solid answer.
00:24:49
Speaker
So even Amazon Q, even Copilot, even Gemini all have their issues. What do you see as the solution to solving the secure and enablement and implementation problem? Yeah, one thing actually i think is um kind of ah underridden, so to speak, or kind of gets snuck under the rug is um is shadow AI in companies. I think shadow AI is actually probably one of the biggest risks to companies right now.
00:25:18
Speaker
um And I'm very much a big fan of like security through transparency, not obscurity um and facilitating the teams um as much as you can. Like you do a great job of it, George. And i I think only then you you kind of bring like that trust.
00:25:35
Speaker
um Otherwise, you've got someone using a you know non-authorized application to try and get that work done for that deadline that they need. And that's kind of how you end up with something in a bucket or something like that lining around. Right. um So I think.

Mitigating AI Risks in Development

00:25:50
Speaker
what what you What you're kind of referring to is is the best approach. So you're doing a great job of just providing that transparency and providing that but tooling and that capability through um through individual.
00:26:03
Speaker
And I guess there is like no what we know one best products but again kind of aligning something that fits well for everyone um and you know like including those multiple stakeholders like George pointing out um like cross cross function domains um is the is probably the best approach I'd say nice Yeah, so, Ad, let's turn our attention here to more practical considerations.

Transitioning to AI Red Teaming

00:26:34
Speaker
You mentioned a little bit in your background of how you moved from network security into sort of kicking the door down to work at a model developer, which gave you this ladder up experience.
00:26:45
Speaker
I talked to a lot of pentesters, a lot of red teamers. you know They're still coming into the industry through the traditional channels, you know ah web application stuff. Some go off ah and find their way into cloud environment. Like they sort of you know splinter off into into areas of interest and specialization.
00:27:06
Speaker
AI sort of around the fringes, you know, like they might use GPT to write some scripts or they might. But like actually red teaming AI systems themselves is still quite new for most. So where would you point people um who are interested in learning more or trying to get that skill? Because to my mind, that's the future.
00:27:31
Speaker
And that's where we need to need to head. Yes, a solid question. Thank you. So if I can shout out any particular training that I've done myself is the NVIDIA Black Hat training from last year. um It's absolutely phenomenal. It's actually super cheap for what it is. um It's kind of crazy how cheap it was. like i the You have a cinema GPU usage, so I bought it under like three different accounts.
00:27:59
Speaker
um Classic hack. Yeah, yeah. yeah so um but yeah i i would so i would immediately point folks to that as being the best resource i can also provide a link in the show notes um that was done by the by the uh the fantastic team at nvidia um and a a couple of people from from drag now as well but um that no bias.
00:28:23
Speaker
It goes through the specific inherent vulnerabilities in models and helps you understand like machine learning concepts and then from that you're able to piece and just understand, be like, okay, yeah now I understand if I throw this on a REST API, you your brain immediately starts ticking and be like, okay, this is how inversion could be like a real problem if I don't do...
00:28:47
Speaker
yeah Yeah, I think that's a really good point. i Because to George's earlier point about the LinkedIn noise, I think everyone wants to take like prompt injection courses and whatever, which is fine. But LLMs are just but the one sort of UI to a lot of these larger applications, as you pointed out in the background, are third party classifiers, ah different models chained together.
00:29:15
Speaker
And I would like to see people get more fundamental, like into that architecture rather than think of AI and conflate it with chatbot. Yeah. Like, yeah. And um honestly, I think there goes something said about like, like

Continuous Learning in AI Security

00:29:32
Speaker
reading. um it And so for me, like not as an academic person, especially, it was really difficult for me to like, you know, read an archive paper.
00:29:42
Speaker
And as traditional security people, that's not something that we're really used to. you know We're like we used to reading write-ups and blogs and things. So trying to trying to harness ourselves a bit better around like actual research, because if you look at the papers from like five, 10 years ago, those are like the those are showing like the the functions and and the outcomes of these like of these flaws, um but of which a lot of them are still valid.
00:30:10
Speaker
And, you know, there's tools that you can use, whether that's AI, to to help you achieve that as well, right? Like, you know, I would throw a lot of, like, mathematical equations from papers into GPT and be like, explain this to me like I'm five. um So it's kind of like a little hack as well that I found useful.
00:30:30
Speaker
Nice. Beautiful. Well, I guess we'll wrap it up there because George needs to board his flight out. um But ads, thank you so much for the time. It's been a long time coming, but I'm glad we got you on and I hope we run into each other out in Vegas. Appreciate it, brother.
00:30:48
Speaker
Thank you so much, guys. i appreciate you a lot. um Awesome. Big fan of you both. Big fan of the pod. And yeah, this is dope. So thank you. If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs.
00:31:06
Speaker
New episodes of Bare Knuckles and Brass Tacks drop every Monday. If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review that helps others find the show.
00:31:19
Speaker
We'll catch you next week, but until then, stay real.
00:31:27
Speaker
Let the airport audio pass.
00:31:33
Speaker
I do little really cool light bands, honestly. It's just annoying. Dude, dude, I'm super jealous of your bed, so you're like the coolest person I know already.