
Protecting data as the critical supply line for AI Applications

S4 E26 · Bare Knuckles and Brass Tacks
112 Plays · 5 days ago

We need to stop treating our data as something to be stored and start treating it as a mission-critical supply line.

Andrew Schoka spent his military career in offensive cyber, including stints at Joint Special Operations Command and Cyber Command. Now he's building Hardshell to solve a problem most organizations don't even realize they have yet.

Here's the thing: AI is phenomenal at solving problems in places where data is incredibly sensitive. Healthcare, financial services, defense—these are exactly where AI could make the biggest impact. But there's a problem.

Your ML models have a funny habit of remembering training data exactly as it went in, then regurgitating it. Which is fine until it's someone's medical records, financial information, or classified intelligence.

Andrew makes a crucial point: organizations still think of data as a byproduct of operations—something that goes into folders and filing cabinets. But with machine learning, data isn't a byproduct anymore. It's a critical supply line operating at speed and scale.

The question isn't whether your models will be targeted. It's whether you're protecting the data they train on and interpret like the supply line it actually is.

Transcript
00:00:00
Speaker
If I assume that bad actors will continue, as they've done so far, to find new, creative, and really cool ways of breaking enterprise AI platforms and foundational models, which they continue to do in new and exciting ways,
00:00:15
Speaker
then my interest, from a zero trust perspective or a data loss perspective, is

AI Security Challenges and Data Hardening

00:00:22
Speaker
in ensuring that the data that's in that application is secure. If I assume that access is going to be a nominal problem, which it continues to be, then my interest is making sure that the data is not going to be exposed through that application. And so when we talk about data hardening,
00:00:38
Speaker
our approach is to find a way to protect sensitive content being used to train a model in such a way that if somebody breaks that model, or somebody has access to that model in production, they're not able to extract, exfiltrate, or infer the contents of the training data set.

Andrew Schoka's Journey from Military to Cybersecurity

00:01:04
Speaker
Yo, welcome back. This is Bare Knuckles and Brass Tacks, the tech podcast about humans. I'm George K. And I'm George. And today our guest is Andrew Schoka, founder and CEO of Hardshell. Andrew comes from a military background, working at U.S. Cyber Command and Joint Special Operations Command on the offensive side of cyber. But now he is living founder life with a tech company that is trying to harden and assure the data layer for
00:01:38
Speaker
training AI. He's very down to earth, and it was a very good discussion. There's a little bit of nuance here that I would like listeners to pay attention to: we usually talk about AI security as AI securing things, but what Andrew is talking about here is securing the things that make AI possible, which is a different, farther-upstream conversation.

Techniques and Importance of Data Hardening in AI

00:02:04
Speaker
I think that's kind of where it gets really interesting, because we do address the confusion between how to actually secure your infrastructure and your business, and then understanding what securing a model actually is.
00:02:18
Speaker
Understanding how to secure the data that's being utilized for model training goes beyond a simple DSPM kind of thing and past data classification. I mean, that's still kind of enterprise commercialization. Yeah, it's table stakes.
00:02:35
Speaker
We're going into the nitty-gritty of, okay, how do you actually implement data-level security in AI models that are being trained on potentially regulated information such as PII or healthcare data? How can you optimize training of those models on information that sensitive while still respecting the privacy of your customers and your personnel?
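One way to picture the kind of data-level control being described, purely as an illustration and not a description of Hardshell's method, is to pseudonymize obvious identifiers before records ever reach a training pipeline. A minimal Python sketch, with hypothetical patterns and example values:

```python
import hashlib
import re

# Illustrative patterns only; production systems rely on far richer detection
# (named-entity recognition, dictionaries, context rules).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pseudonymize(text: str, salt: str = "rotate-me") -> str:
    """Replace each detected identifier with a salted, truncated hash so
    records stay linkable for training without exposing the raw value."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()[:10]
        return f"<PII:{digest}>"
    for pattern in PATTERNS.values():
        text = pattern.sub(_token, text)
    return text

if __name__ == "__main__":
    record = "Patient Jane Roe, SSN 123-45-6789, email jane.roe@example.com"
    print(pseudonymize(record))
```

Redaction alone doesn't solve memorization or inference attacks, which is the gap the rest of the conversation digs into.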
00:02:58
Speaker
So if you have any interest at all in really getting into the nitty-gritty of how we do this, or what the bleeding-edge technologies and theories are, this is the show for you.
00:03:10
Speaker
All right, let's turn it over to Andrew.
00:03:20
Speaker
Andrew Schoka, welcome to the show. Thanks, George. I really appreciate you guys having me on. Yeah, so we're going to dig in, because there's lots to cover in very little time. You come from an offensive cyber background, and now you've shifted out of the military to the private sector, building what I think could be called data-layer assurance for enterprise AI. Can you take me back to that light bulb moment where you had the
00:03:51
Speaker
oh shit, this is a problem that has not been solved. Am I taking crazy pills? Does no one else see this problem? What was that light bulb? Can I make money off this problem? Yeah, if it's just that oh-shit moment, which one, the one I had 24 hours ago or the one last week?
00:04:09
Speaker
Oh yeah, yeah. Take me back to the founding oh

AI in Sensitive Areas: Healthcare and Finance

00:04:12
Speaker
shit moment. Yeah. You know, I spent my career in offensive cyber and worked at Cyber Command and Special Operations Command, these really interesting places of the U.S. government that try to do some unique things with cyber. And it turns out that AI is really good at
00:04:32
Speaker
helping with a lot of those challenges: with developing software, with finding vulnerabilities in software. So when I left the government and wrapped up my time at SOCOM, I got back into academia and was working with my now co-founder on
00:04:53
Speaker
this AI security thing. His research had come in right as LLMs were starting to become a thing. We were originally looking largely back at the government and thinking, okay, how can we solve this problem for the places that we used to work?
00:05:11
Speaker
And then we looked at places like healthcare and financial services. As we saw AI starting to hit these places and get implemented, honestly, pretty poorly in a lot of ways, one of the major blockers we saw was the same problem we had seen in the U.S. government, which is that AI is really, really good at solving problems in places where the data is really, really sensitive.
00:05:37
Speaker
If you want to use AI in healthcare, it can be really, really good at solving some healthcare-centric challenges. But there's this thing called HIPAA, and there's a thing called HITECH.
00:05:48
Speaker
And it turns out that a lot of medical data is really sensitive and really personal. So if you want to safely use AI, we kind of had this, yeah, call it an oh-shit moment or a light bulb moment of, okay, maybe we ought to be approaching this the same way we're trying to secure sensitive government data.
00:06:08
Speaker
Maybe it's worth trying to put that type of protection on people's medical information so that healthcare systems can take advantage of what AI can actually do for them. And so, just for the sake of our audience, let's get into the specifics before we move forward.
00:06:23
Speaker
What Hardshell is doing is hardening the data so that as it goes into training, or as it is being analyzed by models, the data itself cannot be poisoned, corrupted, or

AI Model Security vs. Traditional Software Security

00:06:37
Speaker
extracted. That's my supposition, but let's have you articulate it. No, I like yours, because I know you and I have talked about the storytelling challenge before. I've tried to liken it a little bit to DLP in the traditional cybersecurity sense, where there's this problem of taking data and encoding it into model weights.
00:07:00
Speaker
And then the model has a funny habit of remembering that data exactly how it went in. You want a model to be capable based on the data it's trained on; you don't want it regurgitating exactly what you trained it on. And so at Hardshell, we focus on how you prevent a model from exposing training data once it's been trained on that data.
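A common way to test the memorization risk Andrew describes, again just an illustration rather than Hardshell's method, is a canary check: plant unique strings in the training data and see whether the trained model reproduces them verbatim. The `generate` callable below is a hypothetical stand-in for whatever completion API the model exposes:

```python
from typing import Callable, List, Tuple

def canary_exposure(generate: Callable[[str], str],
                    canaries: List[Tuple[str, str]]) -> float:
    """Fraction of planted canaries the model reproduces verbatim.
    Each canary is a (prompt, secret_suffix) pair planted in training data."""
    leaked = sum(1 for prompt, secret in canaries if secret in generate(prompt))
    return leaked / len(canaries) if canaries else 0.0

if __name__ == "__main__":
    # Toy stand-in for a model that has memorized one record.
    memorized = {"Patient record 4471:": " Jane Roe, diagnosis X"}
    fake_model = lambda p: memorized.get(p, " [no verbatim match]")
    canaries = [("Patient record 4471:", "Jane Roe, diagnosis X"),
                ("Patient record 9001:", "John Doe, diagnosis Y")]
    print(canary_exposure(fake_model, canaries))  # 0.5
```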
00:07:25
Speaker
Or how do you prevent a model from being trained on data that's been poisoned or altered in a way that's going to affect its actual performance? Nice. Okay. Yeah, that's fascinating. There are a couple of different directions I want to go with this. The first is that the focus on large language models is interesting, considering the trend toward small language models as a matter of saving power and capacity, since we can't sustain the use of LLMs as it is today.
00:07:53
Speaker
There just is not enough space. There aren't enough data centers. Building data centers in the desert in Arizona just doesn't make sense. But, you know, I think,
00:08:05
Speaker
we've spent a lot of years talking about software supply chain security, and as a CISO I deal with this quite a bit. Typically it's the usual platformization versus best-of-breed debate, and I typically don't like going down the platformization route, since if that supplier gets compromised, then you're compromised all across your stack. And that's a super fun thing to deal with from an IR standpoint.
00:08:25
Speaker
But now that we're seeing ML model poisoning and training data manipulation, the training data is actually becoming, let's say, the attack surface.
00:08:38
Speaker
How is attacking AI models similar to a traditional supply chain attack? And where does that analogy completely break down? Because it's hard. You go to a lot of conferences, and people will spend all sorts of time pretending they know what they're talking about in terms of model poisoning or how your AI actually gets compromised.
00:08:57
Speaker
But I'd like to understand it from your perspective, having worked this from a national defense standpoint and now trying to commercialize it. How does this translate for, let's say, a security operations person, a security leader back in 2015?
00:09:14
Speaker
And we're talking about the theory of AI. How do you create that parallel between the two so I can understand it?

Balancing AI Performance with Security and Privacy

00:09:21
Speaker
That is such a good question, and it gets right to the heart of how you evangelize not AI for security but security for AI, which I think has been one of the biggest challenges for us to articulate: why does it matter? Why is it different? Why is it important?
00:09:43
Speaker
I think your question about large language models and small language models is actually a really helpful exemplar of that. That's one of the areas we're most excited about, especially in what I would consider more traditional machine learning applications, where there are some really interesting use cases for small language models, both within a cybersecurity context and more broadly within a couple of different industries.
00:10:10
Speaker
But the problem is that data security risk George and I touched on: when you put sensitive data into a model, how does training the model potentially introduce new risks to it? And with small language models, because it's a smaller data set,
00:10:26
Speaker
your sensitive entries, your high-value entries, are even more uniquely exposed and easier to attack. And this is where there is some helpful thinking to draw on from traditional network security:
00:10:42
Speaker
if you want to make data secure, never train an AI model on it. Never use it for anything useful. Just bury it in the ground. Right? But that doesn't do you any good if you're trying to be innovative and use machine learning to improve patient outcomes or to process financial records more swiftly. So
00:11:00
Speaker
there's always a degree of risk in how much you use and expose data. And I think that traditional security mindset, understanding that a perfectly secure system is a system that is not useful to anybody,
00:11:14
Speaker
and that trade-off between accessibility, performance, and utility on one side, and security and privacy on the other, is something that's really, really helpful when it comes to AI.

Challenges in AI Implementation and Adoption

00:11:24
Speaker
I think right now the pendulum has swung toward: how do we make models better, larger, faster, and more capable?
00:11:31
Speaker
And now there's been some research from Anthropic on data poisoning of large language models, and there was a great piece in The Atlantic a couple weeks ago about what happens when we can prompt foundational models to regurgitate copyrighted novels and text. I think people are starting to realize that, hey, we can't just build these things exactly how we've tried to build software originally, and we have to start thinking about how we make them secure, ethical, and responsible to use.
00:12:04
Speaker
So the follow-up to that is: is the source of the risk really in how an organization is implementing AI? Because what I typically see on the ROI side is, we're just going to put AI everywhere, there's no tangible path to ROI, and you're wasting a ton of money. But I think it also creates inherent infrastructure risk across your organization at the business-process level, because I'm going to create this automated process for no reason at all. You don't need to. I have a friend who works at one of the big banks in a very high-level position. Just rub a little AI on it.
00:12:41
Speaker
Literally. Their mandate from their board is, they run this office of AI for the entire bank, and everyone has to have this suite of forty-something AI tools and basically automate themselves out of their own jobs. I was sitting around hearing this the other day, and I'm just like, this is insane. Is this not where the risk is coming from?
00:13:03
Speaker
Right. And then either nobody uses it, or someone uses it and starts putting the KFC secret spices and formula up onto the enterprise AI platform, and all of a sudden someone else is getting it out. Yeah, it's a totally valid question, and I think we're stuck in the very early days of AI implementation now. From a Hardshell perspective, we'd love having that exact conversation with that type of leader and saying, hey, look, I understand you've probably got an AI adoption mandate.
00:13:36
Speaker
You're probably getting pressure from your board and from stakeholders asking how you're using AI. And your challenge as a technologist is, okay, how do I not just do AI for AI's sake? And how do I make sure, from a risk perspective, that
00:13:52
Speaker
I'm not potentially harming our ability to stay compliant with whatever framework we've got in our industry, and that, from a security operations perspective, I'm not going to create an unacceptable business risk.
00:14:06
Speaker
And so we really try to approach it less as a security-per-dollar idea and more as, hey, can we give you a tool that's going to help you use AI where you actually need to use it, which is with sensitive operational data? Because at the end of the day, I think you hit the nail on the head. It just comes down to an implementation problem. Where are you implementing it? Are you just doing it for AI's sake? Or are you actually giving it good source data to use?
00:14:37
Speaker
Yeah, I think one of the reasons the conversations are tricky at this stage is that the only context we have, and this is what humans do, right, we pattern-match against what we've seen before, is traditional software.
00:14:56
Speaker
We've spoken with AI red teamers, and they're very clear that a model isn't an executable. And when you say that in a security context, suddenly you're like, oh, right. It's also different from the way traditional software behaves with data, which was to take static elements in and do something with them.
00:15:19
Speaker
Whereas models have to sit and continuously read, write, and access highly sensitive data. So it isn't, let me just store this in this bucket, in this safe, behind this lockbox. It works completely differently. So from your perspective, from this moment where you realize, okay, people want to use this, but they're missing a security layer around the fuel, right? Let's secure the fuel lines.
00:15:51
Speaker
We don't just pour gasoline literally into the car, right? We have a special process: it goes into a tank, through a fuel injector. We have a way of handling it because it's a highly flammable, explosive material, and we're driving around in internal combustion engines, right?
00:16:05
Speaker
So can you walk us through what that hardening means? Because I think today we have a lot of people saying their AI platform is secure.
00:16:18
Speaker
But that's such a broad word. They could have something at the browser layer for prompt injection; they could have something that mitigates copy-paste to the clipboard. Okay, cool. But I want you to really explore this source material that you're getting at, right? That data layer. So... Yeah. I was in a conversation last week with a former mentor from the Department of Defense, and he remarked on this: saying you have a data problem is kind of like saying you have an error problem. It's like, great, cool, yes, you've got this thing. There's a problem

Future of AI and Data Security against Modern Threats

00:17:00
Speaker
with it.
00:17:01
Speaker
Saying, oh yeah, cool, I've got the AI, it's secure, is to me more an indicator that making that statement is a problem than it is any actual description of the problem itself.
00:17:15
Speaker
And I think that kind of gets to our approach. If I think back to how we built the cloud security stack about 15 years ago: all of a sudden enterprises started shifting to the cloud, and all of a sudden there was this idea that you could have cloud-based software built into a delivery model for a lot of enterprises, which, by the way, we're still figuring out from a cloud security perspective. I think I've heard of a couple of cloud security companies recently that seem to be doing fairly well. It's still a problem that is being figured out, very profitably, by folks who can innovate in that space.
00:17:51
Speaker
When I look at how we're building AI security and how that stack is being built, because it's still very much in flight, I think you're spot on. People tend to jump to, oh yeah, I've got a wrapper on ChatGPT, I've got a security appliance that can model and find where I've got employees using AI, I can track my AI integrations across my software.
00:18:16
Speaker
I can look at having basically antivirus for models, and I can scan and find when I've got a bad model being used in my enterprise. That's great. But it's one slice of that stack and one part of that problem. It's really only the... Yeah, that's the user layer.
00:18:34
Speaker
Exactly. Yes, exactly. That's like saying, cool, we've done everything we need to do for security, but we've only done application security and access controls. Nothing about how that software is built, how it's deployed, where it comes from. There are other layers to it. And from our perspective, to hit on the data hardening piece:
00:18:56
Speaker
If I assume that bad actors will continue, as they've done so far, to find new, creative, and really cool ways of breaking enterprise AI platforms and foundational models, which they continue to do in new and exciting ways,
00:19:11
Speaker
then my interest, from a zero trust perspective or a data loss perspective, is in ensuring that the data that's in that application is secure. If I assume that access is going to be a nominal problem, which it continues to be, then my interest is making sure that the data is not going to be exposed through that application. And so when we talk about data hardening,
00:19:34
Speaker
our approach is to find a way to protect sensitive content being used to train a model in such a way that if somebody breaks that model, or somebody has access to that model in production, they're not able to extract, exfiltrate, or infer the contents of the training data set.
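Andrew doesn't spell out Hardshell's techniques, but one published family of methods with exactly this property is differentially private training, where each example's gradient is clipped and noise is added so no single record can dominate what the model learns. A minimal NumPy sketch of that aggregation step, offered as an assumption-laden illustration rather than Hardshell's implementation:

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style aggregation step (a sketch, not Hardshell's method).

    per_example_grads: array of shape (batch_size, num_params).
    Each example's gradient is clipped to `clip_norm`, then Gaussian noise
    scaled to that clip bound is added to the sum, bounding any single
    training record's influence on the model update."""
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_example_grads.shape[0]

if __name__ == "__main__":
    toy_batch = np.random.default_rng(1).normal(size=(32, 8))
    print(dp_gradient_step(toy_batch))
```

The trade-off Andrew mentions elsewhere applies here too: more noise means stronger protection against extraction and inference, at some cost to model accuracy.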
00:19:56
Speaker
Alright, when we come back from the break, we will talk to Andrew more about transitioning to founder life, what US Cyber Command is like beyond Hollywood depictions, and finally, data as a critical supply line to be protected.
00:20:22
Speaker
The inference part, and sorry, this is why I hate a lot of AI jargon, let me be clear: what you said about inferring training data, not inference the way foundation model people use it, is I think really important. You and I had talked offline about, for example, if somebody downs a drone
00:20:49
Speaker
in a wartime capacity, or maybe a surveillance capacity, could they pry it open and get at, poison, or otherwise compromise the data set the drones are being trained on? Because they are. I mean, that's just a reality. So...
00:21:07
Speaker
Yeah, I think it's the perfect use case, right? If I want to deploy a small language model or a really specific machine learning algorithm for object detection. Yeah, exactly.
00:21:19
Speaker
And then I assume that someone's going to get access to that. It is no longer theoretical. It is proven in practice that with access to a production model, whether gained intentionally or unintentionally, I can extract information about your training data set, either through a large volume of queries or through specific statistical techniques for interacting with the model.
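The "specific statistical techniques" Andrew alludes to include membership-inference attacks. The simplest variant just asks whether the model is suspiciously confident on a candidate record. A toy sketch, where `model_loss` is a hypothetical stand-in for querying the deployed model and scoring a record:

```python
from typing import Callable, Iterable, List

def flag_likely_members(model_loss: Callable[[dict], float],
                        records: Iterable[dict],
                        threshold: float) -> List[dict]:
    """Toy loss-threshold membership-inference test: records the model fits
    unusually well (loss below threshold) are flagged as probable members
    of the training set, the inference risk data hardening tries to blunt."""
    return [r for r in records if model_loss(r) < threshold]

if __name__ == "__main__":
    # Fake per-record scores: the model is suspiciously confident on "a".
    scores = {"a": 0.02, "b": 0.71, "c": 0.64}
    loss_fn = lambda rec: scores[rec["id"]]
    candidates = [{"id": k} for k in scores]
    print(flag_likely_members(loss_fn, candidates, threshold=0.1))  # [{'id': 'a'}]
```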

AI Security Strategies and Military Comparisons

00:21:43
Speaker
And in the example we talked about with downing a drone,
00:21:46
Speaker
if I can figure out that you overrepresented tanks that are painted green in your data set and underrepresented tanks that are painted orange, guess what I'm going to go out and do with all my tanks? I'm going to paint them orange, and all of a sudden your model's not going to be able to find them.
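The green-versus-orange tank point is, at root, a training-set distribution problem: an adversary who can infer the skew knows how to evade the model, and a defender who audits for it can fix the data before the model ships. A trivial audit sketch (field names are made up for the example):

```python
from collections import Counter
from typing import Iterable, Mapping

def attribute_skew(labels: Iterable[Mapping], attribute: str) -> dict:
    """Share of each attribute value in the training labels,
    e.g. {'green': 0.93, 'orange': 0.07}. A heavy skew is both a
    performance gap and, if an adversary infers it, an evasion recipe."""
    counts = Counter(label[attribute] for label in labels)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

if __name__ == "__main__":
    labels = [{"class": "tank", "paint": "green"}] * 93 + \
             [{"class": "tank", "paint": "orange"}] * 7
    print(attribute_skew(labels, "paint"))  # {'green': 0.93, 'orange': 0.07}
```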
00:22:00
Speaker
Yeah, I appreciate that. And for the listeners, a similar analogy would be the mission that went after Osama bin Laden. One of the helicopters failed, right? There was some problem. And as SEAL teams are trained to do, they attached explosives to the hardware, because you have to destroy the stuff you leave behind so it can't be taken apart and used again. So I think of it that way, in that kinetic context. If you have
00:22:33
Speaker
more and more software layered into your systems, eventually hardware can be compromised, but how do you protect the source material behind it? Yeah, I think you bring up a good point as well. In some of the side projects I'm working on, model data accuracy is a really big thing we're focused on ensuring, that it can be validated and verified. Particularly now, we're getting to a point where I think, in the very near future, data is going to become a full-on asset class that's traded and commodified like anything else.
00:23:09
Speaker
And so you're going to end up dealing with a lot of data fraud, in the same sense that you deal with financial fraud and counterfeiting. Now you're going to deal with the same thing with data that is misrepresented in a way that doesn't actually match what buyers think they're getting from sellers. That's a separate conversation; I'll probably connect offline about that, Andrew.
00:23:31
Speaker
But to the point, when you're working on a lot of these things... you've been working about a year now on Hardshell, right? Yeah, we hit our one-year anniversary next month.
00:23:43
Speaker
Congratulations. Happy birthday. Thanks. So you look really, really refreshed and well slept. It's wonderful. And the things I do outside of work... He's not jealous at all. He's not jealous at all. I don't really sleep. I program and I do things. I get about two or three hours of sleep a night. So I really appreciate it. You know, one power to another.
00:24:07
Speaker
But I will say, we talk a lot on this show about how a lot of founders are brilliant technologists. You seem like one. You seem like you'd be really cool and fun to work with. I'm an idiot; I don't know why my friends work with me. But there are some brilliant people out there, and you're definitely one of them.
00:24:26
Speaker
And so I find that when a lot of people transition from practitioner into that whole founder-CEO space, it becomes really, really friggin' miserable, because there's the reality of building and running a business in this economy, in this investor climate of all things. Because Lord only knows you've got to talk to 50 or 60 VCs before you even get enough for a seed round, and that alone is just ridiculous because... You only talked to 50?
00:24:59
Speaker
They're still counting. You've got a much higher hit rate than I do. So it's interesting, because it's a game where people kind of understand at a high level what the rules are, but really the rules aren't solid. Some people can walk into a meeting, and you don't even have an MVP, your POC barely makes sense, you threw together some janky PowerPoint deck, and they're like, oh, but we like you, so here's five million dollars, and you're just, what is this, right? And then you have something really built, you put the work into it, but then
00:25:36
Speaker
you're dealing with all this scrutiny. It's like, oh, but you're still only pre-revenue. And you're like, the thing works. I showed you it works. And that can get really, really frustrating, especially as a practitioner, where our lives are more A to B to C: problem, technology options, solution, confirmation, and you go forward.
00:25:55
Speaker
The game is not like that. It's a very, very terrible game. I know why people do it, but it's just not fun and it's not fair.

Leadership Lessons from Military to Startup

00:26:05
Speaker
So what surprised you the most about founder life? And I'm a veteran as well; I served for a long time in the Canadian Forces and went to the officer academy there, so I have a little bit of empathy for what your experience might be.
00:26:17
Speaker
What do you find translated from the military leadership principles you experienced and were taught, versus the ones you've had to throw out the window? For me, number one in our Canadian Forces leadership principles is clarifying objectives and intent, because no one really clarifies objectives, and intent seems to change almost on a daily or weekly basis.
00:26:45
Speaker
That's an example of something that used to be a bedrock of my founding leadership principles and is now kind of going out the window as this game changes hour by hour. What have you found kept working from your military leadership time, versus what's had to change?
00:27:00
Speaker
Oh, what a great question. I totally resonate with the struggles and the challenge of being a founder. I got very, very lucky to meet my co-founder, Hunter, who you guys would really enjoy meeting sometime, just a truly brilliant technologist and somebody who saw around the corner a little bit, I think, on AI and the need to build this foundational security layer.
00:27:27
Speaker
I talked about this a little bit with some other vets recently through a group called VetSec, which I know George is familiar with, that does a lot of work supporting veterans getting into cybersecurity. And I told them, in this whole founder journey, there's a lot you can rely on from the military. In my service, there were lessons in trusting a team, and in the power of empowering really talented, really smart people and then getting out of their way and letting them solve problems the way they know best, that resonate very strongly with the team we built at Hardshell. We got extremely lucky to bring on three really world-class other members of our founding team, Ben, Andrew, and Sammy, who
00:28:16
Speaker
have all continued to impress and who, I think, really make Hardshell possible in the first place. So I tell them I have the easiest job in the company: getting to be the CEO, the janitor, the IT support, the caterer, and frankly just getting out of their way and letting them solve problems. Yeah, just let it rip.
00:28:36
Speaker
which was something I learned very intimately back in SOCOM, getting to work with the guys and gals there. Frankly, though, it was a real challenge going from a world where I often had a very clearly stated objective: hey, you are responsible for finding a way to access this IADS over the next six months.
00:29:00
Speaker
You are responsible for working out with your team how to do that, but here is your very clear end state. It's not going to change for six months. This is a critical strategic mission; you need to do this. That does not exist in the world of entrepreneurship. The thing I think is the priority right now is probably not even going to be on my radar next week. It changes constantly.
00:29:19
Speaker
That entropy of, hey, I've got this thing I need to work on, I've got this other thing I need to work on, this thing my advisor or this investor is telling me to do, was really a challenge at first. And I think the best way to combat that is having really solid advisors, mentors, and people who took an interest in coaching us, just like George did with a lot of our storytelling and how to evangelize this. Finding a good, strong network of friends and supporters to help you navigate that transition.
00:29:50
Speaker
So really, it comes down to finding your tribe. It's the same thing. I came from the SIGINT world, so probably similar: hey, find this access, find this thing, find this target. And then you work with your partners, you work with your agency partners.
00:30:06
Speaker
But what I find interesting going into founder life is that you have to wear multiple hats and play multiple roles. Because when I did the thing before, you work with, let's say, regional targets: here's your AOR, AOR being your area of responsibility or, as it's often said, your area of operations, AOO.
00:30:25
Speaker
You are there. And there are very specific rules about, hey, you can't do what used to be called sightseeing tours, where it's like, cool, I'm working this mission, but I want to know what's happening in this other country, so I'm going to look that up, because then a nasty NSA auditor will flag you and tell you that you're a bad person. I'm putting this very, very high level and politely.
00:30:45
Speaker
But in the founder world, you wear a different hat every week. You might be in the CEO seat, but founder-led sales is a thing. You've got to be a recruiter. Then you have to actually do solutions architecture. Then you have to sit there and come up with workflow UX. You're everything, all in one.
00:31:04
Speaker
With your team, do you find that you interchange roles seamlessly and it's easy? Or, because you're all really smart, do people go through that typical thing where it's, this is my patch, this is the thing I do? In the most endearing way possible, the 'tism kicks in and you're like, no, this is my way of doing it, I'm down this path. Are you able to work together flexibly? Because if you come from a SOF background, it's very similar to how SOF works: yeah, that's your role on paper, but you're doing whatever the mission needs.
00:31:34
Speaker
Are you able to transition that seamlessly? That's such a good question, because everyone just preaches take ownership, take ownership. And in a startup, it's very much, hey, someone needs to own this feature, somebody needs to own this aspect of the product.
00:31:49
Speaker
That has been a struggle, especially as founders. For my co-founder and me, it was tough once we started hiring folks to understand, okay, I'm going to hand this thing off and it's going to be okay.
00:32:04
Speaker
I hired this person because they are phenomenally talented at what they do. And then, lo and behold... Yeah, it's trust. Yeah. And it turns out, hey, I hired this person to be good at it, and they are.
00:32:19
Speaker
It's really tough at first. It was something we struggled with, both of us wanting to be involved in and understand every aspect of the product and every aspect of the business. And even as the company has grown, as we fundraised, as we started to work with customers,
00:32:35
Speaker
we've each had to grow more independent as founders. Between what I do for the business and what my co-founder does, that's also drifted a little bit. It's more trust and autonomy that you build within the leadership team, and that extends down to the team as well as things get rapidly more complicated.

Cybersecurity's Role in Protecting Democracy and Data Perception Shift

00:32:55
Speaker
Yeah, George is totally spot on. It is 100% about having that bedrock of trust in the team. Yeah. All right, well, I want to close out here with the reverse question, which is just fun for our audience. What is the biggest misconception you think people have about Cyber Command?
00:33:18
Speaker
If you go out on the street and say, I was in the military and I worked for the NSA and Cyber Command, people definitely have a Mission Impossible still from the movies in their head. What would you tell civilians it's really like?
00:33:33
Speaker
There's a little bit of that. You know, I think particularly over the last couple of months, we've seen some really visible projections of what true cyber warfare can look like in a very kinetic and physical sense.
00:33:48
Speaker
But I think, unfortunately, Cyber Command and a lot of other professionals within the U.S. government and the intelligence community are never able to really trumpet their success stories and talk about the incredible things they do. I think one of the great examples of that is what Cyber Command has done with election security, which is incredibly politically fraught and incredibly difficult from an organizational perspective. But it's not all just the, you know,
00:34:22
Speaker
turning off the cameras, turning off the lights, turning off the OT systems. Some of it is also working on these very strategic-level issues that are more of a threat to democratic processes and to democratic
00:34:38
Speaker
systems across the globe. And what we found is that the information space is a really key vulnerability for democracies. I think it's incredibly important to have professionals in that space, within information security and cybersecurity, who can contribute to what democracies are trying to do to preserve democratic order and advance the idea that you can balance freedom of speech with having a secure and stable information environment where misinformation and disinformation isn't allowed to flourish.
00:35:14
Speaker
And so I really wish Cyber Command would get more credit and more visibility for the frankly incredible work it does in that space. Because of the nature of it, a lot of it is unfortunately never going to make it into the public sphere. But when I think about some of the most impactful work I had the chance to be a part of and to see, a lot of it related to the election security and misinformation space, and I think it really made an important difference in our ability to have that type of dialogue as a country.
00:35:44
Speaker
Yeah, two points there. Last week's episode was with a nuclear security expert, and just thinking about, yes, a lot of unsung heroes do a lot of work so that when you hit the light switch, you can trust that most days the lights come on, or that you can refrigerate your food, or that hospitals have electricity. So, yeah, I take that point. Thank you very much. And then the second point: you called it the information space, which I
00:36:12
Speaker
quite agree with. But I think our conversation today is also germane to this idea that organizations are used to data as a byproduct of operations, right? We have to collect patient data so that we know things and, yes, to make some operational decisions. But the difference with machine learning and broader AI applications, I think, is one of speed and scale: the information created is rapid, and the deployment and analysis is rapid. So I think if they start to conceive of the data less as
00:36:51
Speaker
this thing that goes into a folder in a filing cabinet, and more as a critical supply line, they would think about protecting it a little bit differently.
00:37:04
Speaker
So I love that analogy. Yeah, spot on. Nice. Well, Andrew, thank you again for jumping on with very short notice and for sharing your story. Really appreciate it.
00:37:14
Speaker
Absolutely. Thanks so much, George. Really appreciate it.
00:37:20
Speaker
For questions to leave you with after talking with Andrew, the one that really stands out to me is, again, sorry, listeners, I know I talk about culture, but it is a cultural mind shift in businesses as they understand the role of data. It has echoes of 2013, when everyone was saying big data, which was really just, let's grab a whole bunch of it. But it was too early, and we didn't have the tech to really operationalize it or do much with it other than store it and pretend to do stuff with it.
00:37:53
Speaker
But how do we move beyond that and think about data as a critical element in our business supply chain? I think, along that line, it's really: how do we then get board and organization-level buy-in to understand the nuance of the problem? Because the issue is that a lot of organizations, boards, and investors are driving their portfolio companies and their personnel to rush face-first into implementations that make absolutely no sense.
00:38:26
Speaker
And if the business case itself doesn't actually make sense, then for sure the technical implementation is not going to be done with the kind of care necessary to protect data.
00:38:37
Speaker
So really the question is, how do we get boards to start putting on the brakes and understanding that they're not just rushing into an AI bubble, but that there's likely going to be an AI correction in the market?
00:38:51
Speaker
How do we smartly implement these technologies so that we're one, getting a return on investment and two, not setting ourselves up for an economy-wide compromise at some point?
00:39:03
Speaker
100%. All right. Take that forward, listeners. We'll see you next week.
00:39:09
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:39:22
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review; it helps others find the show. We'll catch you next week, but until then, stay real.