
Thinking Like an Adversary, and How to Prepare for AI in Work and Life

S4 E7 · Bare Knuckles and Brass Tacks

Phil Dursey joined the show this week to cut through the hype and talk through what red teaming for AI means in mindset and practice.

The conversation reveals a fundamental problem: organizations are rushing to implement AI without understanding their own workflows. Executives are buying "the thing with the AI" expecting magic efficiency gains, but they've never mapped out basic business processes. You can't automate what you don't understand.

Phil's approach starts with the right question: "Are we using the right tool for the use case?"

We also talked about education and kids. Find out why Phil argues philosophy and humanities give you the biggest advantage when working with AI systems. It’s what he looks for in hiring, too. The ability to formulate good questions, understand context, and think clearly matters more than technical prowess.

And finally we touch on the job market. We're heading toward AI capabilities that will exceed human professionals in specific domains. The displacement won't be overnight, but it's coming.

If you're implementing AI in your organization, this episode should make you pause and ask harder questions. The technology is powerful, but power without thoughtful application is just expensive chaos.

Transcript

Introduction and Risks of AI Deployment

00:00:00
Speaker
You know, when you're looking at deploying a particular solution that has this new set of risks, it can really help to sort of think through, well, not only what could go wrong, but what could an intelligent adversary do to this workflow?
00:00:19
Speaker
And it doesn't have to be technical. These are valid questions that anyone can ask, anyone who's interested in using these types of tools. Hey, does it make sense in this case to use a hammer for the job versus a screwdriver? And what could be leveraged in those instances by an intelligent adversary? As we say, the adversary always has a say.
00:00:51
Speaker
Welcome back to Bare Knuckles and Brass Tacks. This is the tech podcast about humans. I'm George K. And I'm George A. And today we are talking to Phil Dursey, expert red teamer of AI systems, who has even written a book on the subject. And if you don't know what red teaming is, don't worry, we're going to get into that. But what we really get into is how to test these systems, both technically and also just as end users.
00:01:22
Speaker
Yeah, it was a really good conversation with someone that's lived and breathed this for at least the last 10 years. And it's rare to find someone that can say, "I've been doing this for 10 years," and they've actually been doing it for 10 years. He's got the book and the corporate creds to show for it.
00:01:37
Speaker
But I think the bigger things that we touched on this episode are the big-picture implications of, you know, what does this mean for people in their day-to-day lives?
00:01:48
Speaker
What does this mean for kids and for education? And, you know, how do we use this stuff without essentially entrapping ourselves into more economic downturn? So, you know, again, we are no longer the BKBT of old. We're trying to tackle the bigger issues. We're trying to have a lot more fun, a lot more interesting people. And Phil fits right into that.
00:02:12
Speaker
Let's turn it over to him. Phil Dursey, welcome to the show. Hey, thanks, George and George. It's a pleasure to be here.
00:02:24
Speaker
Yeah, we are very pleased to have you and also your expertise. So today we're going to be talking about red teaming AI, and we're going to try to keep it out of the technical weeds. My first question to you is something that struck me about the release of ChatGPT, Claude, and other chatbot LLMs.

Red Teaming in AI: Concept and Evolution

00:02:45
Speaker
is that it was the first time I had seen, quote unquote, red teaming, that term, escape from cyber into the general public at a much greater level than I had seen before.
00:03:00
Speaker
And so let's start there. Could you define for our audience the concept of red teaming and why organizations engage in it? Yeah, absolutely. So the history of red teaming goes back pretty far. In fact, there's even reference to a similar concept around devil's advocacy millennia in the past. So this is a pretty established concept.
00:03:33
Speaker
Essentially, it's a form of alternative analysis. So the modern concept of red teaming is a really structured approach to achieving various ends around de-biasing, and helping organizations and companies break out of groupthink, essentially. And so it's a structured way of taking a fresh take and breaking down some of those barriers so that we can have really clear thinking.
00:04:11
Speaker
It's a method of getting to the ground truth, essentially, the brass tacks, as it were. Oh, I like where you worked that in there. Could you tell us a little bit about how that methodology differentiates now as it relates to AI systems versus traditional software?
00:04:31
Speaker
Yeah, happy to. So as you alluded to earlier, I think red teaming has really been a foundational practice in cyber for decades.
00:04:43
Speaker
And it has popped up in several other domains. So the AI red teaming piece, which I'll get to in a second, is one of those flavors, where it's this kind of methodology applied to a contemporary problem set.
00:05:05
Speaker
And essentially, in terms of other areas, other domains, you see this sometimes in investment strategy development, different types of strategy development.
00:05:18
Speaker
Often this is paired with techniques and methodologies around war gaming, and it's really sort of a subset of that kind of approach. The way that red teaming has really evolved recently, it's been, I think, really narrowly defined, to focus in on essentially LLM prompt manipulation, prompt injection, things of that nature. So I think
00:05:52
Speaker
historically, there's been a much broader scope when it comes to wargaming and red teaming practice.
00:06:03
Speaker
And in this instance, it's a much more narrow scope. And I think it's a relevant practice in that one specific instance, for AI specifically.
00:06:22
Speaker
However, I actually wrote the book on this. It's called Red Teaming AI, from No Starch Press. And part of the reason for my wanting to write the book was to actually help broaden that definition, such that it has that full systems-thinking kind of approach.
00:06:43
Speaker
And so far, it's been really well received. And I think a lot of folks are thinking about broadening the scope beyond just hacking the prompt, and really thinking about AI systems from a holistic perspective.
00:07:02
Speaker
Awesome. So Phil, again, carrying on the conversation, it's really, really cool to actually sit down and talk to you here, because we just met by happenstance at DEF CON of all things.
00:07:15
Speaker
And then you bring this whole crew, this whole world of professional pen testers, and you're all really awesome dudes. And then, by the way, shout out to, what was it, Toro, that Japanese place in Vegas. Thank you for that. That was

Consumer AI Products: Necessity and Risks

00:07:29
Speaker
amazing. Yeah. So where I would look is beyond, and I was trying to talk about this before the show, beyond tech applications. When we look at mainstream real-world applications, for people designing LLMs that are being put into the open market, whether it's a freeware thing or whether it's a subscription service, we're seeing tons of ads on Instagram, on TikTok, on all the mainstream social media platforms that non-tech people are on.
00:07:59
Speaker
Right. And they're subscribing to these things. And so this is the emphasis, the question where I want to focus. How do you then red team for a product that's going to be going to the open market? That's not some B2B security or tech SaaS thing, but it's actually a tool or a platform that's designed to, you know, help you automate how you renew your driver's license or something like that, right? How do we make the translation from tech-industry-specific into a mainstream approach to red teaming, or purple teaming, or simply just being aware of the potential things that could go wrong when starting to utilize a new LLM in your life?
00:08:47
Speaker
Sure. This is a really important question. And it is a good opportunity for me to pose this question, which is really the fundamental red teaming question: are we using the right tool for the use case?
00:09:10
Speaker
And often people sort of skip beyond that. But as a systems-thinking red teamer, we go into organizations to really understand the underlying assumptions around LLM implementation, and we always start with the question: is an LLM necessary for this application?
00:09:38
Speaker
And sometimes you need an external team, an external red team, to really pose those kinds of questions, which are hard when you've been working on the product for six months and you're on a tight sprint schedule.
00:09:56
Speaker
And for asking those fundamental questions and really testing those assumptions, it's really important to have an external perspective, to make sure that the application is actually relevant.
00:10:14
Speaker
And I think, to your broader question, looking at whether or not a certain assumption set has really been rigorously tested, thought through,
00:10:34
Speaker
it can really help to have that structured approach, to make sure that you are exploding those assumptions and really trying to get to the truth of the matter.
00:10:48
Speaker
And it could be the case that, in a specific implementation, whatever that architecture looks like, the underlying assumptions have maybe unnecessarily increased your attack surface, have introduced more risk than the business really needs for a given application. And it's our job to go in not only to test and evaluate the implementation of the LLM itself, whether it's at the prompt or the infrastructure layer,
00:11:30
Speaker
but also to question the business assumptions and see, is this thing scoped in a way that is optimal for the business? And that can really influence design decisions going forward.
00:11:44
Speaker
But then where are the options available? So let's say I was someone who was fresh off the street, whatever. I never really worked in tech, but I want to use this new AI tool I'm seeing ads about.
00:11:59
Speaker
How then would I go about looking into this information? Like, what if somehow I'm eureka'd with the thought of, what could go wrong with this? And I'm not in tech.
00:12:10
Speaker
How do I figure that out? Yeah, so it helps to put on the adversary's hat, so to speak. In red teaming, we talk a lot about the adversarial perspective and adversarial thinking. And I think, if you can take a pause when you're looking at deploying a particular solution that has this new set of risks, it can really help to sort of think through, well, not only what could go wrong, but what could an intelligent adversary do to this workflow?
00:12:52
Speaker
And it doesn't have to be technical. You know, I come from a very technical cyber background, but these are valid questions that anyone could ask, anyone who's interested in using these types of tools. Hey, does it make sense in this case to use a hammer for the job versus a screwdriver? And
00:13:21
Speaker
what could be leveraged in those instances by an intelligent adversary? As we say, the adversary always has a say. And so thinking through that process: what is the possible design space from the adversary's side? What does that look like? And I think that can help people think through privacy issues, security issues, business issues in a way that brings that fresh thinking, that devil's advocacy, that's really important when you're looking at taking on a new project or a new tool set.
00:14:02
Speaker
Yeah, I really appreciate "the adversary has a say." And I also take to heart that that mindset is something I remind people of when they think of hacking as a very technical enterprise.
00:14:18
Speaker
And I'm like, well, have you ever been at work and somebody put in some control and said, you can't copy-paste this, and you found a way to, like, email it to yourself to get your work done? And they're like, yeah. I was like, well, you're a hacker. Humans are hackers.
00:14:33
Speaker
And it's interesting that that mindset is very easy to employ when I am trying to get to something that is in my way. I think it's a little bit harder for people to harness that mindset preemptively, like in George's case, tool use. Like, I'm going to use this.
00:14:50
Speaker
How could this be abused, sort of thing? I also take your point, and we've been trying to be very clear on this show in delineating LLMs as one type of machine learning and so-called AI system, versus other AI systems and machine learning models that are built into just about everything we do, from the Netflix recommendation algorithm to the Facebook news feed to other things that use machine learning.
00:15:18
Speaker
But AI is the jargon du jour. And so I guess I want to ask you, given your background, your experience, your expertise in the company you're running and in trying to test these systems:
00:15:33
Speaker
What is your advice to businesses? So I guess George was asking about consumers, but what is your advice to business leaders who feel this pressure of, oh, we just got to put AI into the thing, or, what are we doing about AI? What are some things that maybe you would have them consider or walk through before they just put a whole pressure on devs to bolt models onto every part of their process?
00:15:57
Speaker
Yeah, it's a very real issue that folks are navigating, obviously. I think, first and foremost, one of the biggest risks to business, and even some more professional-grade consumers, is in not using AI. I think that's one of the biggest risks: that you, you know,
00:16:26
Speaker
you may miss the boat on getting up to speed on the capabilities around these tools, how to use the tools effectively, those kinds of things. So I think it's a really important emerging area of technology that all businesses should be at least understanding, and maybe not implementing, you know,
00:16:48
Speaker
haphazardly, obviously, but certainly exploring the space and really understanding it. So I think that's really the biggest risk: not using it.
00:16:59
Speaker
Now, that presents a whole other set of risks, of course. And I think there are some fundamental questions, like the one I mentioned previously: is this the right use case for this kind of tooling? That's a really important one.
00:17:20
Speaker
I think there's been a bit of a rush to implement much more broadly than is probably appropriate or even effective.
00:17:33
Speaker
You know, this class of tooling is actually pretty specific in its capability set,
00:17:45
Speaker
and in where you would want it applied in a production environment. And so really understanding, you know, what exactly is the capability, how can I manage the scope so that it's actually boosting or uplifting our effectiveness in the business, and really tracing that and mapping it to advances in the field.
00:18:12
Speaker
So it's a really fast-moving space, obviously. There's a lot of innovation happening, and it can be tempting to just jump in and start to implement these things across the board. I've seen that in several cases, and that can go really, really sideways, especially in terms of cyber risk and other things. But I think it's really important to understand your use case, your workflow, really well, and then see where this type of tooling can really bolster and impact the work positively without introducing unnecessary risk.

Integrating AI with Business Workflows

00:18:55
Speaker
And I know that's kind of speaking in generalities. Yeah, I mean, I appreciate your emphasis there on understanding your workflows, because I would challenge that a number of businesses haven't done that deep internal research. They are looking more at, how can I use AI to automate a thing to reduce cost, rather than, oh, did you know that when a customer asks support, it takes about five emails before the thing gets to the person who might actually solve the problem? Versus, let me automate customer support with a chatbot
00:19:30
Speaker
and still not solve the five-email problem, right? Exactly. Yeah, I appreciate that. Yeah, it's like other types of software systems, where the workflow is really important to understand. And I think it's helpful to put on your product hat when you're looking at bringing in a tool set or a system like this,
00:20:00
Speaker
and think, you know, what is the user experience going to look like, and how is that going to really impact our bottom line, and how does that impact our risk? And then navigate that calculus in a way that's really thoughtful.
00:20:19
Speaker
Because, as we're saying, these systems are really powerful, and they can cause a lot of risk and damage without being thoughtful and careful about it.
00:20:36
Speaker
It's a lot like... you know, I'm kind of a car guy, and it's like putting a really massive engine into, well, what type of vehicle exactly? And are the brakes going to handle this kind of power? Those kinds of questions, I think, are really important to ask.
00:21:08
Speaker
Hey listeners, we hope you're enjoying the start of season four with our new angle of attack looking outside just cyber to technology's broader human impacts. If there's a burning topic you think we should address, let us know.
00:21:21
Speaker
Is the AI hype really a bubble about to burst? What's with romance scams? Or maybe you're thinking about the impact on your kids or have questions about what the future job market looks like for them.
00:21:33
Speaker
Let us know what you'd like us to cover. Email us at contact at bareknucklespod.com. And now back to the interview.
00:21:44
Speaker
Yeah, it's funny, because George talked about the AI implementation problem at the enterprise level. It's kind of that a lot of executives will buy the thing with the AI and just expect it to magically make their business efficient, but they've never done the homework of understanding that you have to manually map out a process before you can automate it.
00:22:07
Speaker
And I know this because you deal with a lot of AI companies that are looking for use cases to apply to. And a lot of organizations want the AI, but they don't actually understand the functionality of how their business works.
00:22:22
Speaker
So I think that's why we're seeing a lot of weird friction, where the hype cycle is way up here, but real-life implementations are down at this level. And you're wondering why that's the case: you can spend the money to buy the thing, and then you've bought it, and they're like, okay, cool, how do we make this
00:22:41
Speaker
work? And I think that kind of takes me to where I wanted to ask you: where should we begin education on this stuff? Right? Because when I think back to my elementary school days and the excuse for technology education that I got back then, it was, here are your Mac systems, and everything's on a network and it's integrated. And I think about the first time I ever did a DDoS attack on someone; it wasn't even DDoS, I nuked their IP address by just pinging it a ton of times.
00:23:14
Speaker
And that was my entry into figuring out how to do kind of red-team-y stuff. I don't know if the education system now is equipping our children who are in schools, at the elementary and secondary school level, before even going into post-secondary or university.
00:23:34
Speaker
I don't think they're giving them the right foundation to be able to thrive in the AI-enabled or post-AI digital world. Where do you see the biggest issues in that kind of thing right now? Like, where is the education system failing our kids? And how can we bring about the most immediate change, or where can we begin to change things at home, so that the kids can then start giving themselves a better chance of

Education's Role in a Tech-Driven World

00:24:03
Speaker
surviving? Because I see a major, we'll say, segmentation in the future workforce, where the kids that grew up tech-forward, educated in this stuff, understanding it, knowing how to live and exist in it,
00:24:18
Speaker
understanding how to utilize it in their workforce and in their efforts, they're going to be the ones getting all the high-end jobs, getting hired right away. And the kids without that knowledge, they might be smart and talented as hell, but they're just not going to get the opportunities. And I see this becoming a major social mobility problem.
00:24:36
Speaker
What do you think about it? Yeah, I couldn't agree more. And this hits close to home. I have two little ones, and we're navigating this ourselves. I think the most important skill set that you can develop, especially at the early ages... and I'll go through the progression here, but at the early stages, I think, is a really sound grounding in philosophy.
00:25:05
Speaker
And the humanities. It provides a huge advantage, because of the way that you interface with these systems, that you engineer context with these systems. These are intelligent systems, or they're progressing towards that, depending on how we define it. When you're engaging with a system like that, it really helps to be able to formulate good questions, to be able to understand historical context, context around ways of thinking, modes of thinking.
00:25:50
Speaker
Those kinds of skills, I've found in staffing our AI red teaming practice, and in our builds as well, go much further even than your typical sort of tech skills or cyber skills.
00:26:12
Speaker
A way of clean, clear thinking can really be hugely advantageous. You know, I'm reluctant to let our kids have access to anything that's going to have long-term privacy implications and things like that.
00:26:34
Speaker
So I'm kind of an intermediary when they want to prompt. I actually work with them to develop the right questions, to be able to develop a prompt
00:26:50
Speaker
that would not only work with an artificial intelligence system, but also would work with human intelligence systems, right? And helping them to think through and bolster that, I think, is a huge advantage.
00:27:07
Speaker
In terms of getting hands-on, I think in the later years of the educational journey, it's really important. We're seeing a huge transformation in the types of education-specific tooling around this,
00:27:22
Speaker
where students can really engage with these systems and learn the nuances, the ins and outs. But I would really caution folks around the privacy implications of their use. And I would look to heavily vetted, local systems, those kinds of things, where you can really maintain some modicum of control around, you know, your innermost thoughts along the educational journey, right? It's like being educated in public in this new way.
00:28:00
Speaker
So yeah, I could go on for hours about this. I'd love to follow up. Music to my ears, as the social science proponent and humanities and philosophical,
00:28:13
Speaker
I guess, hobbyist. But yes, I agree, because the way you think
00:28:23
Speaker
helps AI systems become a force multiplier for you rather than a crutch, I think is what it comes down to. And I heard a really interesting segment on the most recent episode of Hard Fork, where they had previously asked students to call in: you know, tell us how you're using these systems.
00:28:39
Speaker
Of course, these quote-unquote systems tend mostly to be LLMs. And there are some who are just kind of going to rely on it to do their homework.
00:28:50
Speaker
I would challenge teachers: we're going to have to develop different models of testing understanding. You know, the whole "go home and write something" is essentially done. But one of the students said that she takes notes in Notability, which is a software app on an iPad, but she's using a stylus. So she's taking handwritten notes that are backing up to Google Drive. And then she used Gemini to basically walk through a vibe coding session of, how can I take these notes and then tie them into Gemini to create quizzes for herself, as to whether she understood the topics?
00:29:27
Speaker
And I was like, well, one, this student came to that on their own. But they came to it because they had a way of analyzing their own thinking, right? Like, how am I going to test myself? They had the presence of mind of, this is what I want out of the education.
00:29:46
Speaker
I'm still taking the notes; how can I use these systems? And in this case, it was literally, code the integration that she needed to make it work. But the ability to imagine that, that to me struck me as
00:29:59
Speaker
a perfect example, to George's point: we need to sort of let them understand, these are the tools today, but we also may need to allow a fair bit of experimentation, because they're likely to come up with things that you and I as geezers can't even think of. I appreciate the balance of: let me introduce you to some things, here's some background knowledge, teach you critical thinking, and then kind of let you experiment. But let me
00:30:31
Speaker
maybe put my hands around that a little bit so it doesn't go off the rails. I'm not a big fan of querying LLMs for knowledge, like "tell me about the Renaissance," because we haven't solved the hallucination problem yet, right? Right.
00:30:46
Speaker
Yeah, I think that's a really helpful way to think about it. And that's how I tend to use it, and how a lot of my team tends to use these systems: as a coach or consultant for different types of processes and those kinds of things.
00:31:05
Speaker
And I think you're hitting on something that's really foundational to the whole educational pursuit, which is really the why: staying focused on learning, as opposed to producing artifacts for homework or those kinds of things. And I think that these systems, well managed through that experimentation process, where you get a feel for how to use them, can really be a powerful intellectual and educational partner.
00:31:43
Speaker
But there's a lot of risk there if a learner doesn't have learning as the goal, which is, I think, far too common in our state of education today.
00:31:58
Speaker
And if it can really shift more towards the learning outcomes, as opposed to just the grades or the artifacts and things like that, then it really lays the groundwork for them to be able to explore in a really productive way, where they can start to uplift their journey and their outcomes. And we're seeing really positive results in this area where it's approached that way. And it could be the case that maybe you have to put some guardrails in place in order to be able to achieve that consistently.
00:32:41
Speaker
But yeah, I mean, this gets to the heart of intelligence and learning and truth and education. And these are all philosophical concerns that I think we need to do a lot more work in researching.
00:32:58
Speaker
Yeah. So I've got to ask then, Phil,

AI's Societal Impact and Future Potential

00:33:00
Speaker
based on your work and based on your research for the book and everything: on a mainstream level, and considering that the current administration's policies have removed a lot of the, we'll say, developmental guardrails, even though a lot of legislatures in different jurisdictions have imposed very strict compliance mandates on the use of AI, where do you see the future of commercial AI going?
00:33:31
Speaker
And when do you think this dream of a truly... what's the word I'm looking for? It's not AGI, but the... Superintelligence.
00:33:46
Speaker
Yeah, the superintelligence. Is this going to happen within the next five to 10 years, realistically? Or is this something that's probably more a decade-plus down the line?
00:33:58
Speaker
Because I think about how we start doing our calculus in terms of, you know, planning our family lives, planning our personal lives, figuring out where we're going to be in five or 10 years. And I'm not talking about for the sake of a business. I'm talking about you yourself, as a human citizen. What am I doing with my life in five or 10 years?
00:34:21
Speaker
Where do you see AI influencing that decision? Yeah, that's a big question. For my personal take on the trajectory, let me start from the technical and maybe walk into the social impact.
00:34:40
Speaker
You know, I think the trajectory is steep. These systems are improving at a rate that we've not really seen in technology previously.
00:34:53
Speaker
And these are really meaningful upgrades in terms of capability set, in terms of their ability to solve all variety of problem sets, and their generalizability across different domains, those kinds of things. So I think we're on a fast track to really powerful systems. And, you know, the definitions are really important in this context, and I think we have to have a much longer conversation around that.
00:35:30
Speaker
But I think at a high level, we'll see the capabilities of vertical LLMs vastly exceed professional human capabilities in a lot of areas.
00:35:52
Speaker
And that's going to have, I think, a significant impact on society. I'm not one of the people who thinks there's going to be mass displacement overnight or anything along those lines. I think it's going to be much more gradual than a lot of people are suggesting.
00:36:18
Speaker
I think we'll see very specific, vertically focused LLMs improve in capability significantly, and we'll see competing LLMs in the market. And touching on the earlier part of your question, Jordan, I think
00:36:43
Speaker
the market as it's developing is a really good mechanism to help people implement some discipline around the risk space here.
00:37:08
Speaker
As for the regulatory issues, regulation tends to move pretty slowly, and we're finding that because of how these types of systems are flourishing in the marketplace, firms are already starting to implement a lot of controls voluntarily, because it's good for their customers. And when something is good for the product and the customer, they tend to do better as a business.
00:37:43
Speaker
And I think we're seeing more and more of that. And I'm optimistic that we'll be able to navigate this really successfully. I do think this technology class will just proliferate. We'll start to see embodied AI and things of that nature. And it's going to be, I think, on the surface level, a very different world in that way. But I think we as humanity will have a lot of influence in that direction.
00:38:18
Speaker
And I'm really optimistic about the general technology's ability to uplift and improve efficiencies and really optimize our work and lives.
00:38:36
Speaker
Used well, right. And this kind of goes back to the philosophy question and the foundation: if people have a good way of thinking about problems and how to attack them, and that sort of critical mindset, you can leverage these tools at whatever grade we encounter them
00:39:09
Speaker
and navigate the future effectively that way. Nice. All right. Well, Phil, thank you so much for your time and attention. That's all the time we have, but we really appreciate you sharing your insights and the benefit of your experience with us.
00:39:26
Speaker
Yeah, thank you guys. And thank you for what you do. I'm a big fan, and I'll look forward to continuing to listen to your pod. Great. Thanks. Yeah. Hope to run into you soon.
00:39:38
Speaker
Yeah, likewise. All right, we'll see you.
00:39:44
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:39:57
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week, but until then, stay real.