
How Expertise Becomes a Blind Spot in Technology Development

S4 E17 · Bare Knuckles and Brass Tacks

Graham Rudd spent years taking emerging technology into austere environments and watching it fail.

He assessed over 350 technologies for a Department of Defense lab. The pattern was consistent: engineers solved technical problems brilliantly. Lawyers checked compliance boxes. Leadership approved budgets. And the technology failed because no one was looking at how humans would actually use it.

So he went to law school—not to practice law, but because solving technology problems requires understanding legal systems, human behavior, operational constraints, and business incentives simultaneously.

Now he works on AI governance, and the stakes are higher. "Ship it and patch later" becomes catastrophic when AI sits on top of your data and can manipulate it. You need engineers, lawyers, operators, and the people who'll actually use the system—especially junior employees who spot failure points leadership never sees—in the room together before you deploy.

This conversation is about why single-discipline thinking fails when technology intersects with humans under stress.

Why pre-mortems with your most junior people matter more than post-mortems with experts.

Why the multidisciplinary approach isn't just nice to have—it's the only way to answer the question that matters:

Does it work when a human being needs to operate it under conditions you didn't anticipate?

Transcript

The Future of Human-Computer Interaction

00:00:00
Speaker
One, with all frontier technologies that we've ever had, it causes a massive amount of disruption, but the human capacity to adapt to it and change is pretty amazing, right? I think about my four-year-old son: by the time he's my age, his ability to interact with computers is going to be so alien to me, I can't even conceptualize it now.
00:00:21
Speaker
And so I think, you know, while we are experiencing real tremors right now, I do think that at some point, whatever it looks like, who knows, but we will come to a more stable point in terms of our interaction with this technology.

Introduction to 'Bare Knuckles and Brass Tacks' and Guest Graham Rudd

00:00:43
Speaker
I'm George K... I'm George Day. And this is Bare Knuckles and Brass Tacks, the tech podcast about humans. Today, our guest is Graham Rudd, who has a storied background. Let's see, he started out as a special forces medic, moved into what is called Red Cell in the military, which is kind of like red teaming, except it was like, take it into battlefield conditions and beat the hell out of it to test for weaknesses, whether it was software or hardware.
00:01:12
Speaker
And then he went and got his JD because he wasn't done learning. But this conversation isn't about cyber specifically. It's about how do you think through the second-order or third-order effects with new technologies? I don't know. Like always, I feel like we could have gone for hours, but it was a really great conversation.
00:01:31
Speaker
No, I think this was good. It kind of goes back to the traditional tech podcast roots that we came from. And I think what I like about it, though, is we're addressing the real cutting-edge problem.

Challenges in Software Development and AI Deployment

00:01:41
Speaker
You know, we go into, again, the software development lifecycle. We talk about the issues of vibe coding. We talk about organizations and businesses that are trying to pump junk code into production and the negative impacts. And then we can also touch on the fact that there are psychological impacts from removing the guardrails from how AI gets tested and deployed. And like, why the fuck?
00:02:00
Speaker
Why AI? Put it on a T-shirt. Yeah, let's turn

Graham's Transition from Military to Law for Tech Legal Understanding

00:02:03
Speaker
it over to Graham.
00:02:16
Speaker
Graham Rudd, welcome to the show. Thanks for having me on here. It's good to be here. Yeah, we are very excited to dig into your experience as a military-level red teamer, practical applications of technology, stuff like that. It's a big, varied background. You and I had had a separate conversation before we were recording about that, and that is why we are here. So,
00:02:42
Speaker
Why don't you just give the very quick, kind of five-minute bio? Because I think that background will bring a lot to bear on the interview.
00:02:53
Speaker
Sure. I started my career in the military as a special forces medic. About midway through my career, I transitioned into the National Guard and started doing some red teaming for a subset of a Department of Defense lab called Unique Mission Cell.
00:03:08
Speaker
That temporary duty assignment doing red teaming turned into a full assignment there. And so I worked for a number of years for that subset of the lab, and we assessed new, emerging technology pre-acquisition, in realistic field settings.

Field Testing Technologies in Real-World Scenarios

00:03:21
Speaker
So we'd take 25 to 30 technologies out to austere field settings with a completely networked infrastructure that we'd build out each time, and run the technologies through full mission profiles against live opposing forces that had computer network security and radio frequency capabilities.
00:03:38
Speaker
And then all of that would be captured by data collectors, and on the network if it received or transmitted. And that report would be given to industry members to help shorten their R&D timeline, and it gave government a better idea of where the technology actually was.
00:03:52
Speaker
And I did that for a number of years. I'd say, easily, over the time I was there, I saw about 350 unique technologies come through. I really loved the job, but we were on the road a lot. And so I transitioned out of the military for a number of reasons, mostly physical injuries and limitations.
00:04:11
Speaker
I ended up going to law school in large part because I had worked with a lot of attorneys that didn't understand technology at all, and it was really difficult to get experimental work done.
00:04:23
Speaker
And often the roadblock was lack of understanding, not the real reason being cited. And I also wanted to do something entrepreneurial, so I knew that having some idea of law would be helpful.
00:04:34
Speaker
So I ended up going to law school. I did a bit of time as in-house counsel and then at a law firm, and then transitioned to where I am now, which is working with Assessed Intelligence. We help companies leverage AI in a safe and responsible manner.
00:04:50
Speaker
Nice. That was a very succinct bio. I appreciate that. But what I really appreciate is, in our background, red teaming is largely code-driven,

Operational Vulnerabilities in Technology

00:05:00
Speaker
right? But I love this idea of like, okay, well, you've made a thing.
00:05:05
Speaker
Let's see if it works in a sandy environment and, you know, a real-world thing. So can you speak a little bit about how that background in what I guess I'd call pragmatic practicality,
00:05:22
Speaker
You know, how do you bring that to bear today when you look at how technology is being developed? I remember in our conversation the specific anecdote about something that, you know, the developers would say only weighs, you know, 15 ounces or whatever. But as you pointed out, once it's on a heavy piece of equipment, day three, mile eight, that 15 ounces does something to the person who's having to carry it, right? So what is that bridge between, you know, field testing hardware and how you think about, you know, responsible AI development or whatever? Like, talk a little bit about that.
00:06:02
Speaker
Yeah, you know, I think when I look at red teaming, like the red teaming that I was doing when I was at the lab, some of it would fall into what people might classify as operational test and evaluation, or how to improve the user interface, things of that nature. Red teaming was a more holistic look for us at not just the actual technology itself, but the enabling functions around it.
00:06:24
Speaker
So what do I mean by that? You know, if you have a 19-year-old soldier trying to use a piece of equipment that has a number of cables that need to be plugged in, and they're not color-coded, and this person's not an engineer. Yes.
00:06:38
Speaker
It becomes really difficult. These are things that some of the

Red Teaming and Identifying Tech Vulnerabilities

00:06:41
Speaker
engineers and people that build it and work with it every day, they don't think about this. It's just second nature to them. But for a 19-year-old in the middle of a rainstorm in pitch black, like, this is a completely different experience. So it's helping, you know,
00:06:54
Speaker
not just having red teams find vulnerabilities, but also finding vulnerabilities in the way that people use or don't use the technology. Yeah, I think, like, really operational vulnerabilities. Right, right.
00:07:05
Speaker
And I think, you know, that's more important now than ever in terms of AI coming onto the field, because there are so many permutations of what this stuff can do. There are so many different ways that it can be applied, for good and for bad.
00:07:20
Speaker
And so having a more multidisciplinary approach to dealing with it, and with a lot of technologies now that are interacting with AI, is going to help organizations prepare for the things they can prepare for, at the least, and then anticipate some of the more unexpected issues that might arise. And so one of those is, you know, how is the user actually going to be able to perform with this?

Advising on AI and Managing Unknowns

00:07:47
Speaker
Maybe it's something in a SOC where it's just information overload, or it's a soldier having a 15-ounce sensor put on the end of his rifle. But over time, that becomes really, really difficult to maintain.
00:08:00
Speaker
And so, you know, when I was at the lab doing assessments, we tried to find ways to expose those vulnerabilities or issues through experimentation. And now, where I'm at, we really do that through audit and advisory services.
00:08:15
Speaker
So not as much a realistic field setting, but the principle carries over, and it's really helping companies or organizations deal with a really chaotic unknown and a lot of different possibilities and attack surfaces now. So it's just not the same mindset that will bring you success, in my opinion, that we use in the cybersecurity realm. And we need to go broader.
00:08:39
Speaker
And so I think that's something that we're really trying to bring out from our previous experiences. What I hear you saying is you cannot rely simply on lawyers. That is very true.
00:08:52
Speaker
Yeah. First of all, it's a pleasure to meet you, Graham, and your work is, like, super impressive as well. Very happy that we

Innovation and Its Pitfalls Amid Changing Landscapes

00:08:59
Speaker
were able to get you on. You know, it's funny. It's BSides Ottawa right now, and I just did a morning keynote, and the whole thing was talking about patriotism over profit.
00:09:10
Speaker
And so the concept of that, I think, speaks to kind of what you're talking about, because we have to look at trying to maintain our defense and technological advantage over adversary states with a whole-of-state or whole-of-government type of approach now.
00:09:25
Speaker
So, you know, I think if we just try to isolate and silo out realms without actually creating interoperability, I think that's where it fails. And then, like, I have a second background as well from the Canadian military, and I kind of saw how negatively impactful it could be when innovation is looked at on a piecemeal or spot-type basis.
00:09:49
Speaker
And I'm wondering, like, for me, my whole thing this week was talking about, hey, how do we, for lack of a better term, how do we unfuck the cyber defense problem at a whole-of-government level from coast to coast?
00:10:01
Speaker
And I think you guys, like, you know, as our allies down south, you guys are still also trying to figure that out as well, particularly as you're going through changing administrations and changing methodologies and all the turmoil.
00:10:13
Speaker
Is it hard to maintain that operational, mission-based focus as the climate that you're dealing with changes, right? Because the shifts now are happening in a way that's been a little bit unprecedented. And by a little, I mean a fucking lot.
00:10:28
Speaker
And it's hard when you get really smart people who get fixated on doing a thing: this is what we're going to build, this is how we're going to do it, and this is how we're going to fund it. And then the funding gets switched and the mission gets switched, and then there's, you know,
00:10:41
Speaker
you don't know if you're going to be able to work with Derjif from wherever anymore because he might not have his job. And you're like, how do you balance yourself dealing with all the inconsistency that keeps getting thrown at you?
00:10:54
Speaker
You know, I don't think there's any one, like, single answer I could point to.

Importance of User Feedback and Secure Design

00:11:00
Speaker
But one of the things that has struck me as really pertinent nowadays is as things are shifting consistently,
00:11:09
Speaker
there are basics that we can do so that you're at least prepared, as much as you can be, for the issues that you might come across, whether they be shifting regulatory sands, a changing threat environment, or just the ability of technology to cause malfeasance inadvertently.
00:11:27
Speaker
And when I was at the lab, one of the core things I took away from my experience assessing all these different technologies was that the real recipe for success that I saw, for all the technologies that came through, was a willingness to incorporate user feedback into the product very early on.
00:11:48
Speaker
And an understanding that secure by design, a secure foundation, was absolutely essential. No matter what was going to come in the future, they had to start there. The companies we saw that jackrabbited to an MVP but didn't do any of the security work that they needed to do, for example,
00:12:04
Speaker
they often would end up having to go back to the drawing board and rebuild from the ground up, because they had missed important parts of the technology that were required at a later stage. And I think that really is the best that you can do: do the basics well that you know you have to do.

Security Risks from Neglecting Due Diligence

00:12:22
Speaker
Have discussions and think about potential unknowns that may impact you as an organization, and then really go forward and see how you can react in the most agile manner.
00:12:37
Speaker
And when I say go back to the basics, I think a lot of people use that term and don't really follow through on it. And by that I mean, I've spent a lot of time talking to companies recently who are adopting AI.
00:12:51
Speaker
Whether they're building it out or they're using it in their workflows, what keeps striking me is that all the basic due diligence and processes that we'd use for traditional software, you know, in terms of vetting, in terms of making sure things were secure, seem to go out the window when it comes to AI for a lot of organizations.
00:13:12
Speaker
And so I think before we can even deal with the shifting regulatory environment or alliances or things of that nature, we need to make sure that we have the basics, as organizations, done to the best of our ability. So hopefully that was responsive. Oh, no, I think it's good. I think, you know, it speaks to the bigger problem. Like, again, you know, I get asked the implementation question all the time. And to me, I go back to,
00:13:34
Speaker
You know, I went on a whole thing on stage yesterday talking about how, you know, vibe coding is going to be a cancer that destroys the software development industry, because you're taking people who have no understanding of the basic concepts of SDLC, or the software development lifecycle.
00:13:49
Speaker
You're giving them the power to come up with lines of code. Will they understand if it's Python, if it's JS, if it's Ruby, that kind of thing? Like, how does it get implemented? Is it COBOL? Like, what is this producing?
00:14:00
Speaker
What's the difference between the terminal and bash, right? Like, when you see how the instructions come out of it, these people don't understand this, and they're trying to put code straight to production, which is even more insane.
00:14:11
Speaker
We've now got this, like, apprehension against QA, where I'm like, you know, if you don't do code scanning, if you don't actually validate that the thing works, how is it going to function?
00:14:22
Speaker
And I'm concerned that it's not even just about, you know, on a theoretical level, why we are applying AI to this process, which I think is kind of what you're speaking to: why are we doing it?
00:14:34
Speaker
But on the technical level, it's, you know, where does AI or AI implementation fit into a correct enterprise SDLC? And I think we've allowed ourselves to forget that there are best practices that are tried and true through a lot of failure for a reason. Are you seeing the same thing?
00:14:54
Speaker
Yeah. What that immediately brings to mind for me: when I first started working at the lab, it was at the height of a lot of the IED incidents going on in Iraq.
00:15:07
Speaker
And there was just such an immense and necessary desire to get technology to help fight this really nasty tactic. To the point, though, that there were systems being purchased that seemed to be completely non-functional.
00:15:24
Speaker
And I feel like there was more of a wish there that this stuff would function. It's like an aspiration. I'm purchasing aspiration. Absolutely. And I feel like in the AI space, we're starting to see a lot of that too, where people are buying an idea, not a capability, and that's becoming a real problem.
00:15:44
Speaker
Yeah, just get back to the, what is the why? Why do we need to use AI, right? To your point, George, I had a conversation with somebody, and I feel like in software engineering there's a lot of conflation between pounding the keys to generate code versus the actual engineering, which is, like, what is the problem we're trying to solve? And, you know, from a symbolic logic standpoint, yes, an LLM or whatever is good at generating the lines of code, but it's not always great at architecting or engineering through the problem,
00:16:23
Speaker
which is what a more sophisticated engineering team is capable of understanding. So it's just like, make it do this. Anyway, that takes me to the next question.
00:16:34
Speaker
So, Graham, we'll get into the multidisciplinary aspect, but I'm curious to get your take on what you think are kind of the most outdated assumptions driving decisions in tech development today. Because I feel like what we're intimating in this conversation is, this is what we used to do; then when AI comes, we throw it out. I feel like we're trying to copy-paste old mental models onto a technology that is fundamentally different in the way it interacts with data or people. So what are you seeing when you talk to clients, or even just from your past? What are those outdated assumptions that we need to update?
00:17:18
Speaker
You know, I think one thing that I hear a lot is, we've got a really great cybersecurity team, we're good in terms of AI implementation. And that's where I take pause, right? Because I'll just give one example that really cuts through to this, which is chatbot psychosis.

Psychological Harm from AI Interfaces

00:17:34
Speaker
This is becoming something we're seeing more and more of. Unfortunately, every day there are more people who are turning up, you know, dead or severely harmed because of their interaction with this.
00:17:46
Speaker
So some of these are being used at work right now. And in the near future, I don't think it would be surprising if we find that people are having some sort of psychological harm caused by chatbots that they're interacting with at work. It's happening outside of work.
00:18:01
Speaker
And so, you know, that's where it really comes together. Is your cybersecurity team equipped to deal with psychological harm caused by a chatbot? That's something that's more in the realm of HR and your company's leadership. And that's where I think we really are struggling to break out of the mold. This is not traditional software we're purchasing.
00:18:23
Speaker
It's not the same as Salesforce that you bought 10 years ago. We weren't having conversations like this, you know, and that's just one example, right? But I think that's a really, really salient one for sure. Yeah.
00:18:37
Speaker
And I feel like we will be relying more in the future on people with humanities degrees and philosophy and other, you know, interesting ways of learning or framing the world to help us deal with some of these new socio-technical problems that are arising. Nice. Okay. I agree with that. Yeah, I think it's... Yes, to your point, SaaS did not pose psychological...
00:19:09
Speaker
issues, you know. And I think the design decision was, if I put an LLM in a chat interface, I make what I believe is godlike technology (I'm speaking from the early OpenAI perspective) accessible to the public. Unintended consequence: when it starts to go off the rails, the longer the person talks with it and the statistical distribution starts to shift into outlier territory, maybe that design decision comes into question. Right? So, yeah, I get it. Hey, just a quick word to say thank you for listening and to ask a favor.
00:19:53
Speaker
If you're digging the new direction of the show, which is looking more at human flourishing and the impact of technology more broadly, share it with friends. It really helps the show. We're really trying to grow something here organically.
00:20:06
Speaker
We don't do paid media. We don't do a lot of sponsorships, so we'd appreciate getting the word out and getting it to people who care about the questions that we're tackling, how to keep tech human, and how to make technology work for us instead of the other way around.
00:20:25
Speaker
And now, back to the interview.
00:20:29
Speaker
I think, you know, it speaks to kind of a bigger issue, and that's, you know, unfortunately...

Safety Issues in Unchecked Tech Innovation

00:20:35
Speaker
we've leaned into this toxic form of innovation where there are no longer guardrails, right? We're letting everyone just do whatever they want.
00:20:44
Speaker
And, you know, now not just your government, but many governments are combating against, you know, basic safety, right? And the worst part is, when we lean into safety, it's more so creating and reinforcing the surveillance state, which, you know,
00:21:00
Speaker
we're giving up civil liberties to apparently allow our data to be harvested. So all of these prompts and all these interactions that they allow users to do in an unmitigated manner are just data points that are continually collected.
00:21:16
Speaker
And it's kind of tough, because the complexity of the issue speaks more to the fact that we are trying to hyperinflate this growth market by allowing people the quote-unquote freedom to explore it as they wish.
00:21:31
Speaker
But the reality is any of those activities are still heavily monitored. And I think people don't understand, you know, from the... You have the freedom to be logged and experimented on. Like, from the Patriot Act onward, from the PRISM program onward, and this is all public knowledge stuff at this point,
00:21:51
Speaker
you don't actually have the freedom to experiment with these things. And I think beyond just understanding, from a use-case level, hey, what could happen to someone psychologically if they talk to these bots for way too long, especially in an era where we're dealing with, like, you know, the loneliness pandemic, and people have no friends, and people can't connect with normal people. And now it's like, hey, you can have a full-on intimate quote-unquote relationship with a bot, which is still, to me, absolute madness.
00:22:21
Speaker
It hurts my soul every time I think about it. Like, George and I openly made the joke, like, go outside and touch grass, make friends, do a thing, right? I fear that the drive for toxic levels of innovation without guardrails, without direction, without governmental guidance that considers the greater good,
00:22:41
Speaker
I just don't know how this ends, Graham, without a catastrophic breakdown in society, ultimately. And by that, like, to be blunt about it, it leads to civil war.
00:22:53
Speaker
It leads to a Great Depression. And, you know, we're all people on this call who believe in civil rights, who believe in freedom and personal autonomy. I don't think that was the point of all this.
00:23:05
Speaker
So in your opinion, as someone who's on the cutting edge of the research, is there a way that we can try to pull back and get us moving and developing in a direction where we are building these technologies in a manner that is safer for society, without having to spy on everyone to do it?
00:23:25
Speaker
You know, I think none of these issues are easy to solve at all, but I do take some hope from a couple of different sources. One, with all frontier technologies that we've ever had, it causes a massive amount of disruption, but the human capacity to adapt to it and change is pretty amazing, right?
00:23:49
Speaker
I think about my four-year-old son: by the time he's my age, his ability to interact with computers is going to be so alien to me, I can't even conceptualize it now.
00:24:00
Speaker
And so I think, you know, while we are experiencing real tremors right now, I do think that at some point, whatever it looks like, who knows, but we will come to a more stable point in terms of our interaction with this technology.
00:24:15
Speaker
But then the other thing, too, is, and this is not the way that I think is ideal, but I do think market forces will actually start pushing us to correct for some of the, you know, lack of guardrails that we're seeing.

Market Forces and Insurance in Tech Risk Management

00:24:31
Speaker
And that's going to come, I think, primarily from two sources, the first actually being insurance. If you think about what it's going to take for a cyber insurance policy to be written in 2026, it's not going to look like 2025. It's not going to look like 2020. And insurance traditionally has been a large check on some of the more outrageous behavior that we've seen in society, right? It's our way to deal with externalized harms and how we can collectively share the risk.
00:25:01
Speaker
So I think that's going to start driving some real, fundamental changes to some of these behaviors, because there are real harms, and there are real, expensive harms. Copyright lawsuits are not something that you want to be a party to. Most companies don't have the funds to fight that.
00:25:17
Speaker
So I think some of the... Looking at you, Anthropic, $1.9 billion settlement. Yeah. No, I don't think all companies have that type of war chest lying around. So, you know, I think that might be it. And I hate to say, you know, we've got to rely on the markets or insurance, but I do think that will be, and has traditionally been, a pretty big shift. But then also it's going to be business to business. You know, you mentioned vibe coding being, you know, an insidious problem. We're going to find out more and more, as acquisitions are made, mergers, as things are restructured or change management becomes a reality, for some of these vibe-coded products, or products that they don't realize have vibe-coded components.
00:26:00
Speaker
As that stuff starts falling apart, causing real harms, businesses are going to start taking, I think, a much harsher view of these products and how they evaluate them, and then, surprise, surprise, we'll go back to the basics and say, we need a basic form of attestation.
00:26:16
Speaker
You can't just do whatever you want. I mean, and so I think those forces might help balance some things out. Will it be enough? That remains to be seen, right? But I think those are there, which I am hopeful about.
00:26:28
Speaker
Some heroes don't wear capes. I'm glad you brought up insurance, because I remember in the late nineties, in the early 2000s, as we entered sort of this dumb-ass debate about whether climate change was real or not, I remember telling my friends, like, they can deny it all they want.
00:26:45
Speaker
Insurance will change the equation, because the math doesn't lie. Like, if there's a higher risk profile for an area and you can't get homeowner's insurance because you're just in a flood zone, insurance will just factor that in. You can deny it all you want, but if you can't insure your home, that's just the market telling you the reality. So I want to turn our attention now, not so much to the current state, but you had mentioned, both in our previous conversation and just earlier here,
00:27:15
Speaker
about the types of education, about the types of thinking.

Multidisciplinary Approach in Adapting to New Technologies

00:27:19
Speaker
So, digging into that multidisciplinary approach, if you had your druthers, like, what would you tell people who are listening to this podcast, who are probably like us, kind of stuck in the tech media haze, trying to keep up with news, trying to keep up with model development, trying to think through these problems, but very much through this keyhole of tech? Where would you point them to start looking to kind of round out their thinking and bring new, I guess, paradigms to bear on these questions?
00:27:54
Speaker
I think there needs to be a fundamental shift in our ability to conceptualize the need to learn this technology. And I think, you know, whatever sources people feel comfortable with that are reputable; there are a lot of local ones happening, at least here in Portland, for example, there are great AI groups that are forming and they're doing amazing presentations. Or you can find organizations like For Humanity, which my co-founders are fellows with, that provide free educational resources that are reputable.
00:28:26
Speaker
But I think really just getting some education started because I've talked to so many people who really just don't want to deal with this yet or don't realize that it's going to impact them. So you take like skilled trades, for example, right? They're like, we build stuff. I'm a plumber. I'm an electrician. This doesn't matter to me.
00:28:43
Speaker
And it's like, take a step back for a second. The insurance policies that your business has, they're going to be using AI at some point if they aren't already, right? How you get money lent to you by a bank, that's part of it too.
00:28:55
Speaker
Then you look at search engine optimization and the way you advertise, plan clients, and do all the things. Like, AI is a part of your life right now and you may not even realize it in that regard, right? Threat actors, criminals (I hate the term threat actor), they are using it.
00:29:13
Speaker
So even if your organization is not, you've got to start learning about this. This is like saying, I don't want to use a mouse and keyboard. You can get away with it, I mean, for a while. There was a time, right, back in the late 90s, as things were coming online, they're like, oh, I don't need a website.
00:29:30
Speaker
I was like, if you don't have a website, you essentially don't exist right now. Yes, that's a good point. But what about other realms that people could be thinking through, instead of just AI and tech? I know you and I have talked about your philosophy, talked about other... like, I guess what I'm trying to ask is, what do you find yourself leaning on, right, when you're thinking through things that aren't just from a technological perspective?

Legal Implications of AI-Enabled Crimes

00:29:58
Speaker
For me, it really comes down to the second- and third-order effects, and trying to use a multidisciplinary approach to see if there are any of those things that you can anticipate. So I'll give you an example.
00:30:11
Speaker
When Sora 2 went live, do you all remember there was that prolific copyright bombing, essentially, that started as soon as it went live? And there were a lot of posts specifically with Nintendo characters, because Nintendo is well known to be extremely litigious about their IP. That was done very purposefully.
00:30:32
Speaker
From my understanding, when they released that Sora version, they had forgotten to carve out video game characters. And so that's what happened.
00:30:44
Speaker
If they had maybe had a broader group of people looking at the intellectual property that they needed to be on the lookout for, to protect themselves from, gaming may have come out. I mean, it's a multi-billion-dollar industry. And this is the type of thing, when engineers are building out Sora, I don't think the last thing they thought about was going to be Sam eating Pikachu.
00:31:07
Speaker
But that's where we were, like, within two or three hours of it being released. And that's just one example. It's kind of a silly one, but I think it belies real problems that we're seeing, which is you need people from a lot of different walks of life to look at things. So another way I think about this is, there are crimes being committed right now that are AI-enabled.
00:31:28
Speaker
And that is going to require a prosecutor and an investigator who are going to have to understand how to find evidence, how to categorize it and keep chain of custody, how to do it in a way that's not going to violate fundamental civil liberties.
00:31:43
Speaker
And then they're going to have to articulate this to a judge and jury in a way that is comprehensible to them. And do we have a large cadre of legal minds that are ready to hit the courtrooms right now and do that? I don't think we do.
00:31:58
Speaker
I think there are a lot of people making a very valiant effort, but that means there are going to have to be lawyers who are non-technical getting involved in these conversations, understanding the nuances of what a technology can do, what the implications of semi-autonomous behavior from an agent mean from a legal perspective. And so it's going to require all these people to start coming together and talking about these things, where, you know, 10 years ago as an attorney it's like, as long as we're not getting sued, I don't care what...
00:32:31
Speaker
Right. I think one of the things that I heard in there, aside from, like, really, when you see really dumb mistakes, you know, whether it's a PR snafu or this, like, you release something and then, of course, it does all the dumb shit that everyone was like, how could you not have thought of that?
00:32:52
Speaker
Whenever you encounter that moment, I usually think it's because of one of two reasons, or both. Could be you have only people on the team who think in one way, right? So there was no one there to be like, what about... you know, because they didn't have anyone there. So you have this homogenous

The Need for Diversity in Tech Teams

00:33:10
Speaker
level of thinking. Or...
00:33:13
Speaker
you have a culture where that kind of dissent is not really encouraged or tolerated. And sometimes it is both, but usually it's because of that layer of homogeneity. So to your point, yes, you need to bring in different kinds of thinking.
00:33:30
Speaker
One thing that George brought up was quality assurance and how that seems to be falling by the wayside. And one of the things that struck me during this conversation, and just from my general observations, is there seems to be an increasing tolerance to ship software with bugs that you'll work out once they've gone live. Yes. And I think that's bled over into the AI space, but it has much more insidious implications, because AI can go in a lot more directions than just, like, CRM software, for example.
00:34:03
Speaker
And so I think we're going to see some real nasty aftershocks because of that. Yeah, that, like, just ship it and we'll patch it later is super dumb when you think about something that can sit on top of data, manipulate data, write to data.
00:34:20
Speaker
That's a yes to your point. Yes. But my question is this, though, Graham, and it's kind of where we bridge the delta on this: folks who are, from a non-technical standpoint, now in those pivotal positions, right?

Educating Non-Technical Roles in Tech Basics

00:34:35
Speaker
Where do they actually begin the education process to try to make themselves... I don't know. Dude, it's early in the morning. I'm, like, halfway through a coffee. Fucking not useless. That's the question. How do you make yourself not useless? Like, basically, how do you make yourself not a pylon that makes things worse?
00:34:57
Speaker
um If you're already like in your career, you might be 5, 10, 15 years into it. You're in this role. There's probably a lot of pride, a lot of ego involved. um You know, because people have a hard time acknowledging like, I don't know.
00:35:10
Speaker
And, you know, like, I take a lot of pride in having the confidence and comfort myself to look at a situation or a question and be like, I don't know. I'm going to find an expert, or give me some time to research and I'll get back to you.
00:35:24
Speaker
And I think people on a basic level of pride um can't seem to have that level of honesty anymore. So it's like if we can't acknowledge that we have an education gap,
00:35:37
Speaker
how do we fix that is kind of my issue. So where does someone who's maybe in one of those roles, who actually wants to, you know, improve themselves, educate themselves, be more of a healthy contributor in the process, where do they begin that education, or what would you recommend they do?
00:35:56
Speaker
Yeah, that's a really great question. Um, I think the first one is obviously finding reputable sources, which is very difficult in today's world. Like you go on LinkedIn right now, everybody's a snake oil salesman.
00:36:08
Speaker
I'm an AI expert, I've done all this stuff. You know, it's all over the place. So it is really difficult to find that. But I think most people, especially if they've been around for a while, have enough, you know, friends, acquaintances, professional colleagues that they can ask, where do you find value in a reputable source?
00:36:27
Speaker
But then, I think more importantly, if you are an organizational leader of some sort, and we did this a lot at the lab, we would do pre-mortems, and we would do pre-mortems with everyone, in particular with junior employees in these organizations. And the pre-mortem was, okay, so we're about to go live.
00:36:47
Speaker
Tell us how this is going to fail. And it was amazing, some of the stuff that junior employees in these organizations would pipe up with, because we would make sure it was a safe space for them. And the looks you'd get from some of the leadership made it clear, like, this is the first time they're ever hearing about this. I love it. I love it. And we have seen that repeatedly in research, that an over-reliance on expertise leads to kind of ossified thinking. It's why, I mean, I think they have found,
00:37:20
Speaker
you know, like undergrad accounting students can sometimes spot embezzlement faster than, you know, publicly traded CFOs, or medical school students who can pinpoint heart attack risk faster than cardiologists, you know? So I love that. I love bringing in the juniors to think around the corner.
00:37:44
Speaker
And, I mean, often, right, they're the ones that are going to be bearing the brunt of the issues. Like, when I was doing incident response, they were always targeting, right, brand-new employees, employees that just came on and are working in, like, accounts payable, receivable. And, you know, if they're not part of that conversation, if they're not part of the educational journey for your organization, it's going to be really difficult, because at the end of the day, they're the ones that are at the greatest risk for targeting, or for having
00:38:14
Speaker
you know, egregious errors happen. Like they don't understand that vibe coding is a really bad idea without supervision

Frameworks for Navigating Tech Challenges

00:38:22
Speaker
and knowledge. Right. But yeah, I think we are starting to see, in terms of some of the accreditation bodies like ISO, for example, or NIST, they're generating a lot of fantastic information too. And that's a great place to go. And that's why, you know, we work with the frameworks that we do:
00:38:45
Speaker
If you have a basis of a framework to work off of, at least then you can start educating yourselves on the critical domains within that in a way that makes sense for your organization.
00:38:57
Speaker
Love it. Well, Graham, that's the time that we have. Thank you so much for joining the show. Really enjoyed the conversation. I think we could just go for hours, but we don't have that kind of time. It's great to be on here. I really appreciate it.
00:39:13
Speaker
You're awesome, man. You know what? It's really refreshing to bring someone out here like you. It's very topical right now. I think if our show, which has gone in a bit more of a non-pure-tech direction, if we can bring in more folks like you, if you can open up more folks from your network,
00:39:27
Speaker
who have these kinds of perspectives, who are more on the leading edge, we would appreciate that, because we're trying to drive a conversation where we're trying to make things better and we're not trying to admire the problem. We're trying to be realistic about solutioning.
00:39:40
Speaker
So thank you for humoring us and please bring us more folks who are like you because we need you guys right now. I'd be happy to. It's a pleasure and it's been a fascinating conversation. All right, we'll talk to you soon.
00:39:53
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:40:07
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review; that helps others find the show. We'll catch you next week, but until then, stay real.