Introduction: Mind and Market Mastery
00:00:04
Speaker
The voice of growth, mastering the mind and market.
AI Security Breach Anticipation
00:00:10
Speaker
We're still kind of waiting on that one big breach.
Understanding Language Models and Physical World Interaction
00:00:14
Speaker
Can these language models actually understand the physical world?
Risks of AI Over-Reliance and Necessary Safety Measures
00:00:19
Speaker
So reliance is one of the big risks.
00:00:24
Speaker
One of them, certainly, turn it off.
Convergence of AI, AGI, and Quantum Technologies
00:00:27
Speaker
A combination of AI, AGI, and quantum all colliding. And when it does, it's going to break all of our codes.
AI Security Urgency: Canary in the Coal Mine Metaphor
00:00:36
Speaker
You're essentially the canary in the coal mine for Skynet.
AI Adoption Pressure and Responsible Use
00:00:47
Speaker
Josh, tell me why AI security is so important now more than ever. Great question. So certainly we're seeing AI everywhere. Everybody wants to adopt it.
00:00:58
Speaker
CEOs are telling their organizations that they need to start coming up with solutions that include AI. So the pressure is there. The momentum of adoption is there.
00:01:09
Speaker
But you can't go extremely fast without brakes.
Understanding AI Vulnerabilities and Compliance
00:01:13
Speaker
So that's where AI security comes in. With the proliferation of AI everywhere, we also need to be thinking about the consequences of AI. What are the vulnerabilities that exist within AI? What are some of the compliance pieces that are coming down?
00:01:31
Speaker
And what can you do to help responsibly use AI and responsibly bring AI into your organization?
Responsible AI Use in Organizations
00:01:38
Speaker
Yeah, I think that word right there is very important: responsibility.
00:01:43
Speaker
Yep. We've had a couple of other guests, and we've talked with other AI experts, about how important that is, not only in the organization but as individuals.
00:01:56
Speaker
And I don't think people have any idea of what can happen, or how.
Current AI Security Challenges and Vulnerabilities
00:02:03
Speaker
So give us an idea, Josh, of... What's the worst that can happen?
00:02:08
Speaker
The worst. Okay, so we're still kind of waiting on that one big breach that wakes everybody up. While we're waiting on that, there have been a lot of things that have happened. Some examples: I think it was Air Canada that had a situation where a customer used their chatbot and was able to get a refund for a flight that was not actually authorized by the airline, but was authorized by the chatbot, and they eventually had to pay it out.
Computer Vision System Vulnerabilities
00:02:40
Speaker
We had a Chevy dealership that decided to connect ChatGPT directly to the outside world and let customers engage with them, as kind of a sales associate.
00:02:54
Speaker
People quickly hacked it, turned it into a Tesla dealership, turned it into a Toyota dealership. Somebody actually made it write an agreement to sell them a car for a dollar that was, quote-unquote, legally binding.
00:03:09
Speaker
So there are these kinds of things. These are some generative AI examples, but AI comes in all sorts of flavors. Computer vision is one flavor.
00:03:20
Speaker
So I've done work when I was with MITRE and the
MITRE Atlas: Mapping AI Vulnerabilities
00:03:24
Speaker
DoD, where we were fooling computer vision systems. Systems that are designed to detect things on the ground, like tanks, or things in the sky, like planes.
00:03:36
Speaker
But you can fool them into hallucinating a bunch of vehicles in the scene, or actually evade detection.
00:03:47
Speaker
There are these patterns that you can actually place in the scene that allow you to do that. That's just one method, but there are others
Exploiting AI System Vulnerabilities
00:03:54
Speaker
out there. So the worst is sort of yet to come, but we know about these vulnerabilities from things like MITRE ATLAS, which we stood up when I was at MITRE. It lays out a whole landscape of tactics, techniques, and procedures that adversaries are using, or could use, to fool AI.
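The evasion attacks described here, where a small, carefully chosen change to the scene flips a detector's output, can be illustrated on a toy linear model. The weights, inputs, and step size below are arbitrary values invented for the sketch, not any real detector.

```python
# Toy evasion attack on a linear "detector" (all numbers are illustrative).
w = [1.0, -2.0, 0.5]   # detector weights
b = -0.1               # detector bias

def score(x):
    # Linear detection score: positive means "object detected".
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def detect(x):
    return score(x) > 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x = [0.6, 0.1, 0.2]    # a "scene" the toy detector flags

# FGSM-style step: nudge the input against the score's gradient.
# For a linear model, the gradient with respect to x is just w.
epsilon = 0.3
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]
```

Here the original scene is detected while the perturbed one is not, even though no coordinate moved by more than 0.3; physical adversarial patches exploit the same sensitivity at much higher dimension.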
Guest Josh: Background and Role in AI Security
00:04:14
Speaker
As well as OWASP and some other places, where we can see what the possibilities are for actually taking advantage of these AI systems. What are their vulnerabilities? What are their weaknesses?
00:04:28
Speaker
Yeah, so you and I met about 24 years ago. Give or take. That's right. We worked at a company, National Instruments. We worked alongside each other from a test and measurement perspective.
00:04:42
Speaker
And I ended up leaving, I believe, in the 2003 timeframe.
Josh's Entrepreneurial Journey and Fire Mountain Labs
00:04:48
Speaker
Give us a little bit of the story. Part of this podcast, Josh, as our listeners and viewers understand, is understanding the story behind these entrepreneurs, these folks that are making some big change.
00:05:03
Speaker
And you and I kind of lost touch briefly, and then we reconnected recently. So give us an idea, Josh. Actually, let's work backwards.
AI Security Focus at Fire Mountain Labs
00:05:14
Speaker
Okay, we're going to challenge you to work backwards. I know that you're the CTO of Fire Mountain Labs. Give us an idea of what that role is, and then let's work backwards. Let's try that out.
00:05:26
Speaker
Yeah, this is good. I haven't done this one before. So, yeah: CTO and co-founder of Fire Mountain Labs. We started it up in April, really focused around AI security and AI maturity,
AI Security Landscape and Research Innovations
00:05:42
Speaker
AI readiness, consulting and services. In our past roles we kept running into people not quite understanding AI, not quite understanding how they're going to bring it into their organization, how they're going to secure it, how they're going to use it responsibly, how to train their employees. Question marks across the board.
00:06:01
Speaker
And so we created this company to do exactly that. In my role, well, I come from a research background; we'll get to that, since we're going backwards.
00:06:12
Speaker
But that's kind of what I'm looking at: what's the landscape of AI security? Where is all of this headed? What's next? Obviously we have agents, agentic AI; all of that's coming.
00:06:25
Speaker
But what's beyond that? How do we continue to innovate within this space? And then looking at applying for research grants, and to research institutions, to continue some of the work that we've been doing in the past.
Beyond Agentic AI: Future Insights
00:06:43
Speaker
That's very exciting. So let me ask you a question before you notch it back to the next career step. Yeah. When you say beyond the agentic sort of... Yeah.
00:06:57
Speaker
AI, I can't imagine what could be there. I mean, the whole idea, and correct me if I'm wrong, I'm a very novice student of this domain, so correct me if I'm wrong: an agent, or agentic AI, is when you have these quote-unquote agents, or avatars, that can act on their own. They can make their own decisions within a certain set of guardrails, so you're not having to work step by step to give them instructions and all that. Is that correct?
00:07:29
Speaker
That's exactly right. Really, the simplest definition I usually give is: you understand what an LLM is. Now give that LLM access to tools. Once that LLM has that autonomy, that level of access to tools, it can do things for you. Very, very positive.
00:07:47
Speaker
I use them. However, as you mentioned, you do want to put constraints around them. What can they not have access to? What do you need to be involved with as far as decision-making, and things like that? But yes.
00:07:59
Speaker
When you say LLM, you mean large language model? Correct. Yep.
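The definition above, an LLM given access to tools inside guardrails, can be sketched as a tiny control loop. This is an illustration only: the "model" below is a rule-based stand-in, and the tool names and allow-list are invented for the example.

```python
# Minimal agent sketch: "model" decides, a guardrail checks, the tool runs.
# fake_llm is a stand-in for a real language model.

def fake_llm(prompt):
    # Decide whether a tool call is needed (a real LLM would do this).
    if "weather" in prompt.lower():
        return {"action": "call_tool", "tool": "get_weather", "arg": "San Diego"}
    return {"action": "answer", "text": "No tool needed for: " + prompt}

def get_weather(city):
    # Canned tool result for the sketch.
    return f"72F and sunny in {city}"

TOOLS = {"get_weather": get_weather}
ALLOWED_TOOLS = {"get_weather"}  # the guardrail: an explicit allow-list

def run_agent(prompt):
    decision = fake_llm(prompt)
    if decision["action"] == "call_tool":
        if decision["tool"] not in ALLOWED_TOOLS:
            raise PermissionError("tool not permitted by policy")
        return "Tool result: " + TOOLS[decision["tool"]](decision["arg"])
    return decision["text"]
```

The allow-list plays the role of the "certain set of guardrails" from the conversation: the model can only reach tools the operator has explicitly approved.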
Managing Large Language Models' Autonomy
00:08:04
Speaker
So what's beyond that? I mean, I can't imagine. What's there? It's like the Big Bang over here, you know? What happened before? Yeah, yeah.
00:08:12
Speaker
There are several researchers that I've been following for many years. A couple of them come from computer vision backgrounds, which is my background, so I've known them for a long time. Fei-Fei Li is one; obviously everybody knows that name.
00:08:24
Speaker
She just released an article with some folks that really lays out why large language models aren't going to be the path to AGI, for example.
00:08:35
Speaker
And that's artificial general intelligence. That's where a lot of the money is being invested right now in the AI space. So you see huge moves at Meta, huge moves at OpenAI.
Speculation on AI Technology Evolution
00:08:48
Speaker
All these organizations are trying to attract talent to build out these foundational models. However, some names like that, Fei-Fei Li, Yann LeCun, and others, have been saying for a long time that models purely based on language, with no other data, really can't get us there. That's not how we learn as humans; that's not how learning takes place in the natural world. So there are other architectures being designed right now that will definitely replace LLMs on the march for AGI. However, that's kind of a different conversation. AI is here to stay in some form or another, so we will see
00:09:28
Speaker
continued development in the LLM
Development of Multimodal Models and Edge Devices
00:09:30
Speaker
space. We will see continued development in the multimodal model space, where audio, video, imagery, all that kind of stuff, is pulled into these models.
00:09:40
Speaker
And then we'll also see a march towards smaller versions of all of these models: edge device capabilities, where the models can actually fit on your phone or even smaller devices. Yeah, it's remarkable.
00:09:54
Speaker
I have kind of a funny story regarding ChatGPT. This was about three years ago, when it was just beginning to proliferate, right?
AI's Role in Education and Responsible Use
00:10:04
Speaker
Very early on, maybe two and a half years ago.
00:10:07
Speaker
And my son had an English assignment that was due. Yeah. And I happened to walk into the living room while he was doing his homework, and he very quickly put his phone away.
00:10:18
Speaker
And immediately I knew what was happening. I went up to him and confronted him about what he was doing, and he eventually admitted he was using ChatGPT. So on one side, I was very disappointed, because this is literally an English class and he's having to learn foundational components of English.
00:10:39
Speaker
But on the other side, I was proud, because he was using what is really just a tool.
Strategies for Responsible AI Integration
00:10:45
Speaker
Do you see a lot of issues with people relying too much on these large language model tools?
00:10:55
Speaker
Yeah, absolutely. So reliance is one of the big risks. When we're working with clients and we're talking about what the risks are within your organization, and what the risks are with AI generally,
00:11:08
Speaker
over-reliance is definitely up there. That's one that we have to discuss. It's often overlooked, but if you've experienced this, where you're using AI for something and it happens to go down for half an hour, and you can't continue because what you were doing is locked into this chat you've created, then you know exactly what this is. You can't go on; you can't even see the things you've done in that chat in the past half hour or whatever.
00:11:37
Speaker
So over-reliance is a big issue. However, there are mitigations for it.
Exploring AI's Darker Side: Responsibility Beyond Security
00:11:43
Speaker
The mitigations are things like: how do you use it responsibly? How do you use it without over-reliance?
00:11:50
Speaker
How do you use it in specific cases? The English assignment is a great one. I actually have a good friend here in San Diego who's a professor of writing at SDSU.
00:12:02
Speaker
And two years ago she was very resistant to using AI, and pretty much the whole university was very wary of it, using these essay checkers and things like that to see if AI was somewhere in there.
00:12:18
Speaker
But now they're really encouraging the use of AI in very specific situations. So, for example: students are given a topic, and they don't know much about the topic. Cool. Go use it to research.
00:12:30
Speaker
Don't necessarily use it to write your paper, but use it to do things that will help you write your paper. So there are forward-looking ways of using it responsibly.
00:12:41
Speaker
Yeah, that's an interesting use case. That's something that I do. We created, and it's probably going to be out after this podcast airs, a podcast called Thriving Through AI Hell, basically. And it's the darker side, if you will, not necessarily the specific security issues you were just talking about.
00:13:02
Speaker
But one of the things that we did, and
Josh's Role at Cranium and MITRE's Influence
00:13:06
Speaker
I did in particular, is I was putting stuff down on paper, right? In Word, just getting my ideas out. I wasn't worried about the grammar.
00:13:13
Speaker
I wasn't necessarily worried about the punctuation either. I mean, I still kind of was, but I laid everything out because I wanted to get it from my head onto the paper. Once I did that, I cleaned it up using ChatGPT and sent it to one of my team members.
00:13:29
Speaker
She looked at it and she added her own pieces, but it's a different voice. I tend to talk in a very syncopated kind of way, and her voice is different: very direct, not so melodic. I'm not saying mine is necessarily melodic, but hers is different than mine.
00:13:46
Speaker
But we were able to smooth it out for our audience, for this document we're preparing. So that's great. So let's take it back a notch. Before Fire Mountain Labs, where were you...
00:14:01
Speaker
there? Yeah. So before that, I was the chief of AI security at Cranium, which was kind of a new role. I don't think anybody had had that specific role before, but it was to call out the importance of AI security, which is certainly coming for industry and government and academia, all across.
00:14:24
Speaker
And in that role, really, the goal was twofold. One, lead internally. So lead R&D, lead the AI engineering staff and the AI security staff.
00:14:39
Speaker
Lead them in a direction of: what are we going to build out product-wise? What are we going to approach from a market perspective? You know, AI security is massive.
00:14:50
Speaker
Just like cybersecurity, there are just too many things for one organization to do at once. So what do you focus on? Data security? Vulnerability management? Understanding the risks of AI? Threat intelligence, how do you understand where threats are coming from? That kind of thing.
00:15:09
Speaker
Monitoring: actually looking at the ecosystem live, trying to detect things that are happening. So, really trying to set the stage. What's our roadmap? What are we actually going to build here? That was my role internally.
00:15:25
Speaker
And then externally, continuing that thought leadership. So I was on a panel for trying to amend copyright law, for example, so that it would allow red teaming of foundational models, which is very important
00:15:40
Speaker
if you're trying to understand the security of these models, but you're trying to get past the fine print that says you cannot red team my model. So, trying to get some headway there.
00:15:51
Speaker
Trips to Congress, for example, to meet with committees. Trying to amend bill language, for example, that would allow more funding for AI security and would push the language in the right direction.
00:16:05
Speaker
Educate the folks that are there on what AI security is. Continue publishing articles, publishing papers, things like that. I run a conference at SPIE that I created two years ago, around assurance and security of AI-enabled systems.
00:16:22
Speaker
So, continuing to do those kinds of things. Those were the main parts of that. And I guess I could go backwards to how I ended up there: I was at MITRE for four years,
00:16:35
Speaker
leading a department around AI security and perception. And that's when I met some of the folks at Black Hat. So, summer of 2023, I met them.
00:16:48
Speaker
I was fascinated by their story, their goals. And that's where I made that jumping-off point. But MITRE is really what set the cybersecurity and AI security stage for me personally.
00:17:04
Speaker
I was doing some of this work before MITRE, but MITRE is very well known for cybersecurity. Everybody kind
Addressing Real AI Fears vs. Exaggerated Scenarios
00:17:11
Speaker
of appreciates that. And internally, yeah, they really are some of the best and brightest folks that I've ever worked with in this space.
00:17:18
Speaker
So you're essentially the canary in the coal mine for Skynet?
00:17:24
Speaker
Yeah, it's funny. That does come up a lot. Skynet comes up, obviously the Terminator scenes. HAL obviously comes up. We actually had a marketing video back at Cranium that was funny.
00:17:39
Speaker
It went through some of those scenarios: what should you be afraid of? And then we used some sort of a chatbot, like the Air Canada example. Like, this is what it actually looks like. This is what
00:17:51
Speaker
the AI fears you think are, and really it's actually a chatbot that's going to release all your customer data to the world. That's the real fear. As you were saying these things about your experience with Cranium and MITRE, I had a random analogy in my mind, and I tend to have these sometimes.
00:18:12
Speaker
So in a manner of speaking, you are a bit like an anesthesiologist in the OR. Let me give you an example. Okay. I didn't know this, but the person in the OR that rules, that can pull the plug, can stop the surgery, can do whatever, is the anesthesiologist.
Comparing AI Security Roles to Anesthesiologists
00:18:34
Speaker
They're sitting at the head, monitoring the body. You can have a brain surgeon, you can have a heart surgeon, the world's most famous or most prolific heart surgeon:
00:18:47
Speaker
if that anesthesiologist sees things that he or she doesn't like, they can pull the surgery. Okay? And it's happened. I have a buddy who's an anesthesiologist, and he said sometimes he's pulled it. Yeah.
00:18:58
Speaker
And the surgeon has been pissed. Because guess what? If the anesthesiologist sees you're decompensating, then they're going to pull it, and that's it. So in a manner of speaking, you're watching the body, what's going in, what's going out, and you're a bit of a sentry for that. Does that make sense to you?
00:19:20
Speaker
It does make sense. I see your point. So another person, actually somebody at Cranium, mentioned once: okay, so you're telling us that we have cancer, that there are security problems with this AI that we're trying to enable and invest tons of money in.
00:19:42
Speaker
But you're not telling us how to get rid of it. You're just telling us that we have the cancer. And that's kind of where we were, I think, in the early days of this. We know there are problems. We know there are vulnerabilities. We've exposed them.
00:19:55
Speaker
We can attack your systems and show you that these things are real. But what are the real mitigations? That's the point. So one of them, certainly: turn it off. That's a classic cybersecurity solution, certainly in an incident response kind of plan.
00:20:12
Speaker
Where something does go really wrong, you want to be able to pull the lever and turn the system off. Sometimes people don't plan ahead. They don't know how to do that; they don't know where they would actually pull those levers.
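The "pull the lever" advice, planning ahead for how you would switch the system off, can be pictured as a kill switch that every model call checks first. This is a sketch of the idea only; the names and structure are invented for illustration.

```python
import threading

# Minimal kill-switch sketch: one shared flag, checked before every AI call.
class KillSwitch:
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        # Incident response: pull the lever.
        self._tripped.set()

    def active(self):
        return not self._tripped.is_set()

switch = KillSwitch()

def guarded_inference(prompt):
    # Every call checks the switch, so a shutdown takes effect immediately.
    if not switch.active():
        raise RuntimeError("AI system disabled by kill switch")
    return "model output for: " + prompt  # stand-in for a real model call
```

The point of the sketch is architectural: the check has to exist in every code path before the incident, or there is no lever to pull.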
Business Pitfalls and Forward-Thinking in AI Security
00:20:25
Speaker
So that's very important in the architecture phase of cybersecurity around AI. But to your point, yeah, it is interesting seeing everyone starting to experiment. This has been going on for a couple of years now.
00:20:42
Speaker
And it's gaining even more steam now. People might have a sandbox where they're doing things and think they're doing them safely, but then they bring in a data set that actually has some customer data, or proxy data that they think isn't close enough to their customer data but actually is, or something like that. And then problems can occur. Data leakage can happen.
00:21:06
Speaker
If they're training models on that data, that data is now in the model. So we recommend things like retrieval-augmented generation. There are ways around the problems that we're seeing.
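The retrieval-augmented generation (RAG) pattern mentioned here keeps sensitive data out of the model's weights: documents live in a store, and relevant snippets are pulled into the prompt at query time. The word-overlap retriever and the documents below are toy placeholders, not a real RAG stack.

```python
# Minimal RAG sketch: retrieve a relevant snippet, then build the prompt.
docs = {
    "refund-policy": "Refunds are issued within 30 days of purchase.",
    "shipping": "Standard shipping takes 5-7 business days.",
}

def retrieve(query, store):
    # Toy retriever: pick the document sharing the most words with the query.
    q_words = set(query.lower().split())
    return max(store.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def build_prompt(query, store):
    # The model never holds the data; it only sees the retrieved context.
    context = retrieve(query, store)
    return f"Answer using only this context:\n{context}\nQuestion: {query}"
```

Because the data stays in the store, removing or correcting a document takes effect on the next query, with no retraining.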
00:21:18
Speaker
But that first step is, to your point: there are things you need to be thinking about. There are problems that you need to be aware of. That's very well said. I think when you look at the potential pitfalls that can occur in business, wow, a lot of people don't give enough credence to these sorts of things.
Emergence of Chief AI Officers in Organizations
00:21:42
Speaker
You know, rewind the clock 10 years, and cybersecurity was just: make sure you have a good virus scanner on your computer. Yeah, that's right. And now the level of sophistication required to keep your own organization safe against cyber thugs is just enormous.
00:22:04
Speaker
And I have friends that run cybersecurity companies, and we've talked briefly, and it's just the fact that those didn't exist 15 years ago. That's right. Now there are entire teams that come in to make sure that you're legit.
00:22:17
Speaker
Do you foresee a time when every organization is going to have a chief AI officer of some sort? Yes, certainly a chief AI officer. I think that is coming. You're already seeing some organizations adopt that.
00:22:31
Speaker
It's interesting: where is this going to fit in? That's a little unsure. People are still trying to figure out who owns AI right now without that role. So somebody is going to have to wear that hat. Who is your AI expert? Who is your AI advocate within the organization?
00:22:49
Speaker
Who's going to be thinking about not only the positives, the things we want to do with AI, the things we want to experiment with and innovate on, but also the security
CISOs' Role and Knowledge Gaps in AI Security
00:23:02
Speaker
side. And the security side right now is kind of ending up in the CISO's lap, for example.
00:23:07
Speaker
So the CISO organization: they're commonly, obviously, in charge of the security side of the house, but they might not be educated
Establishing AI Policies and Councils for Safe Adoption
00:23:18
Speaker
on AI. They don't know these AI-specific vulnerabilities.
00:23:21
Speaker
Maybe they think that the tools they currently have are going to handle these AI security vulnerabilities. Right now, they very much are not. So what does that future look like? For sure, a chief AI officer; that's definitely one that's coming. We may see some more roles like that round out the C-suite.
00:23:42
Speaker
Okay. Let's go back one more level in your career path, and then I want to dive more into what you're currently doing. So where were you before MITRE? Yeah, so before MITRE, I'll give two, because they're related. I was at SPAWAR, which is now called NIWC Pacific. It's a Navy lab here in San Diego.
00:24:02
Speaker
I was doing a lot of research around AI, trying to make everybody aware. Back then it was called machine learning; nobody was attaching AI to what we were doing.
00:24:12
Speaker
Now that's kind of taken over. But we were really trying to convince everyone that machine learning was an area that needed attention and investment, particularly for DoD applications. Deep learning was exploding back then. I had gotten a scholarship from UT Austin; that's how I ended up there.
00:24:33
Speaker
And I continued some of that research on real-world applications of machine learning and computer vision. Things like maritime ship detection, an extremely hard problem.
00:24:45
Speaker
You know, the background in that problem is water, and that water is constantly changing.
Industry Perspectives on AI Risks and Adoption Attitudes
00:24:49
Speaker
There's nothing you can really do to subtract that from a scene. And the objects are typically very small in video and in satellite imagery.
00:24:56
Speaker
So detecting these things is very difficult, and you don't have a lot of training data. That was kind of the research. I led several research projects back then around image quality, video quality, detection of maritime assets, tracking, augmented reality, all sorts of things. Lots of different aspects of the computer vision problems that existed in academia, but were more difficult to solve in a Navy and maritime setting.
00:25:28
Speaker
What year was this? This was 2012 to about 2019. Okay, so you're like one of the OG AI guys, it sounds like. Yeah, back before it was cool, or before it was even called AI.
00:25:44
Speaker
And then, just to follow up, I was briefly at Shield for a year, Shield AI. They're still around, they're doing great. But back then we were building a quadrotor very much based on some of the work we were doing at NIWC Pacific, with some other folks who also moved to Shield from there.
00:26:03
Speaker
So that was a really interesting time, applying computer vision to an actual quadrotor vehicle, loading these models directly onto the vehicles themselves.
00:26:14
Speaker
And the whole point of that was for them to autonomously navigate through buildings and things like that, map them out, find people and other objects in the scenes, and then bring that back to the operators.
00:26:26
Speaker
Interesting. That's really interesting. And before that, you were at NI, roughly? Or you went to school, you got your PhD. Exactly. So PhD at UT, but before that, definitely, yep, National Instruments.
00:26:38
Speaker
A lot of our audience that listens to the podcast is really across the board. We've got business leaders, business owners, entrepreneurs, solopreneurs.
00:26:49
Speaker
You've got rank-and-file folks that are just doing their job. You've got retirees. It's a whole smattering. What would you say as a message, whether a warning or whatever it might be? What message would you give our audience regarding AI security?
00:27:09
Speaker
So I would say do your due diligence. AI is here. It's here to stay in some form or another. What we recommend as first steps, when you're not sure what to do, is the following. Set up an AI policy. Do you have an AI use policy in your organization? What models are you allowing people to use?
00:27:31
Speaker
What things do you strictly forbid? How are you going
Skepticism and Ethical Considerations in AI Development
00:27:35
Speaker
to enforce that? How are you going to train your organization on that policy? But first and foremost, have the policy. Step two, and these can be swapped with each other: build an AI council or an AI committee, a steering committee of some sort, bringing in leaders from across the organization: legal, HR, data science, people from your CTO organization, people from the security organization. Bring them together, talk about how you want to innovate with AI, talk about who is going to be in charge, who's going to take the ball when it comes to some of these topics we've been discussing.
00:28:16
Speaker
And then start innovating. What are you going to allow in your ecosystem? You know, maybe the CEO said every department has to have at least two AI initiatives over the next six months.
00:28:32
Speaker
So how do you organize that? How do you track that? How do you measure ROI, and things like risk, for example? Start building out your company's approach to those things.
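The policy step described above can be made concrete with even a very small enforcement check: an allow-list of approved models plus a list of forbidden terms, consulted before any request leaves the organization. The model names and terms below are placeholders invented for the sketch.

```python
# Toy AI use-policy check: approved models only, no forbidden data in prompts.
POLICY = {
    "allowed_models": {"approved-internal-model"},
    "forbidden_terms": {"customer_ssn", "payment_card"},
}

def request_permitted(model, prompt, policy=POLICY):
    # Enforce the policy before the prompt is sent anywhere.
    if model not in policy["allowed_models"]:
        return False
    return not any(term in prompt.lower() for term in policy["forbidden_terms"])
```

A real deployment would enforce this at a gateway rather than in client code, but the shape is the same: the policy is written down, and something actually checks it.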
00:28:46
Speaker
Those are very well said, Josh. How would you react to... You know who Geoffrey Hinton is, of course. Yes.
00:28:59
Speaker
How do you react to the fact that he said there is a 20% chance that AI will basically eliminate the human race in the next 30 years?
00:29:13
Speaker
Yeah, I think it's similar to what we've seen in the past with other huge leaps in technology. We saw this with nuclear energy, nuclear weapons, obviously.
00:29:25
Speaker
We've seen this before with other types of technology. So I think it's good. It's a healthy conversation
Potential Realization of AGI and Future Implications
00:29:33
Speaker
to have. It's good to have skeptics. I think right now we don't have the kind of AI capabilities that would cause us that kind of harm.
00:29:42
Speaker
But we need to be thinking about it, for sure. I mean, Isaac Asimov, forever ago, had his laws of robotics, right? So we need to be forward-thinking. We need to be thinking about how we train these systems, how we use them responsibly, and about ethics. We haven't really talked much about ethics
00:30:01
Speaker
during this podcast, but that's a huge topic. So what's the divide right now, the technology divide between the wealthy nations and the poor nations? It's massive. Is AI going to help with that, or is it going to completely exacerbate that divide?
00:30:19
Speaker
Right now, probably the latter. So we need to be thinking about these things. So I'm not really looking at the doomsdayers as,
00:30:29
Speaker
you know, "oh, they're crazy, there's nothing like that happening." It's more like they are warning about the real AGI that's coming.
00:30:40
Speaker
So that real AGI will happen.
Stoicism in AI Evolution: Thoughtful Responses
00:30:43
Speaker
It won't be with the current technologies that we have, but we will see some major leaps in the next decade or so that will bring us closer to that.
00:30:52
Speaker
And so what do we do when it does come? I think he's making important points about being prepared for that. There are other folks out there; Ray Kurzweil is another one that's been talking about this for a long time with his book on the singularity.
00:31:06
Speaker
So I think it's a healthy conversation to have, and we should be having it. But some people tend to latch onto those comments as though that's the reality right now, and it's not quite there.
00:31:19
Speaker
Yeah, we see that as well. You know, one of the foundational pieces of this podcast has to do with stoicism: the idea that the world is going to be the world. You can't control it.
00:31:31
Speaker
You can only control what is within your own world. And as business leaders, we sometimes have a tendency to be reactionary instead of responsive.
00:31:44
Speaker
And I've had several conversations with my peers. I run a company, IR Labs, and I have friends with whom I have casual conversations about AI.
Varied Business Responses to AI Adoption
00:31:54
Speaker
And I've seen everything from one extreme to the other.
00:31:59
Speaker
I've seen a buddy of mine, and I won't really talk about the industry because you'll probably know who he is, but he's like, "I will never use AI. I'm going to dig my heels in. I'm never going to hire or bring on an AI person.
00:32:15
Speaker
I'm never going to have these agents. We're just going to do everything old school." That's one side. Then I've got other friends on the other side of the coin that are adopting it as fast, as quickly, and as widely as possible.
00:32:30
Speaker
And I've got a bunch in between. What do you have to say about that theater I just put in front of you? No, it's so interesting, because I have exactly the same thing. The folks that don't want to adopt it, I get it. There are people that still don't have smartphones. There are people that still don't have any social media accounts.
00:32:49
Speaker
So we're always going to have that. We're always going to have people that are not even resistant, necessarily, but more like agnostic: "I just don't care about it. That's not something I want to bring into my world."
00:33:00
Speaker
And depending on their business, they may be able to do that and be successful and be just fine. Obviously, the people we're talking with are very technology-driven.
00:33:16
Speaker
So AI is going to touch their space at some point, even if it comes through a third party. I think that's what a lot of people don't realize: if you're working with third parties at all, chances are they either have AI in their systems or they're working on it now, and it's going to sneak into your ecosystem,
00:33:33
Speaker
which is kind of something we haven't talked about, but shadow AI is something to be aware of. So I would say, to that side of the house, good on you if you can do that and that's what you want to do.
00:33:48
Speaker
I have no problems with that, obviously. I'm trying to help people that are trying to adopt AI to be aware of their vulnerabilities and to adopt it safely and securely.
AI Psychosis: Warning Against Over-Dependence
00:33:57
Speaker
On the other side of the house, you have, even further than what you said, the AI psychosis: people that have used it so much that they now think they've solved really important world problems with just them and their chatbot.
00:34:17
Speaker
And you get this soundboarding effect, where you're the greatest thing in the world and you've solved this huge problem. "Oh, I know, this is amazing, let's go to the next problem."
00:34:29
Speaker
And so they're just in this little cave with them and their AI, and they've kind of separated themselves from the world. You're seeing a lot of this; there's been some research about it.
00:34:40
Speaker
So I think there are power users that are very effective at using AI, that know how to integrate it into their workflows and how to take advantage of all these technologies. And then there are people that have taken it a little further, into some scary territories that we need to be aware of.
One-Person Billion-Dollar Companies and AI
00:34:57
Speaker
The following comment that I'm going to make, tell me what you think about it and tell me if you think it's going to happen. And if so, when? Okay.
00:35:09
Speaker
Do you believe that a billion-dollar company will come to fruition with one employee in the future? Yes.
00:35:21
Speaker
How far away are we from that? So we've already seen one sell for over a million with one employee. That was just a couple of months ago, I believe. So the way things are going right now with funding and the excitement around AI, VCs are still excited to fund AI companies.
00:35:41
Speaker
We may see a little bit of a slowdown on that; I've got a paper where I talk about AI winters and how those kind of work. But I would say with the current trajectory, maybe two to three years, something like that: a billion-dollar, one-person company.
00:35:59
Speaker
That's ridiculous. Whether or not it's actually worth a billion dollars, that's a different thing, but it will get valued at a billion dollars. I'm sure. Yeah. With a whole cacophony of agents doing the work, right, with some magic tech stack of
Josh's Advice to His Younger Self and NVIDIA's Rise
00:36:17
Speaker
some sort. Yeah. It's going to be interesting.
00:36:19
Speaker
Yeah. Here's another question. Imagine you and I are back in Austin, Texas in 2001, 2002. Hanging out at the Salt Lick, drinking a beer out of an ice chest because, of course, it's a dry county.
00:36:33
Speaker
And you get a phone call and you walk away from the crowd. And on that phone is Josh Harguess, 2025.
00:36:45
Speaker
Right now. What message would you give that 24-year-old kid there at the Salt Lick?
00:36:59
Speaker
Besides "put all your money in NVIDIA," because that's an obvious one. I actually was at a supercomputing conference in Austin, in 2005 I believe, and NVIDIA showed the very early days of what they were building hardware-wise, and also CUDA, which they had just
00:37:18
Speaker
released; I think they maybe had an alpha or something like that. And I was like, wow, that's really interesting. I was working on the supercomputing side at TACC on some of their machines, and this was a completely different approach.
00:37:31
Speaker
So definitely that. Also, and I've said this in some other places, I'm from a computer vision background. I knew as soon as deep learning hit that neural networks were going to take the main stage and really
00:37:48
Speaker
drive a lot of the innovation going
Shift from Computer Vision to Language Models
00:37:50
Speaker
forward. That was very clear. Computer vision led the stage for a while, so I was very happy about that. The data-hungry side of the house was definitely imagery and video; that's massive amounts of data. And so we were able to make lots of progress there.
00:38:08
Speaker
What I didn't see coming is that language would be the place where we would see the huge leaps that we saw in 2022 and 2023.
00:38:18
Speaker
So I think I was blindsided by that. Some other folks in my field were definitely blindsided too. Not that we didn't see the leaps coming; that's just not where I expected
Debate on Language Models' Physical World Understanding
00:38:29
Speaker
it to come from. I thought it was going to be maybe a combination of text plus imagery, or something like that.
00:38:35
Speaker
And it really was just pure language models. I think that was pretty surprising. So I would definitely tell young Josh: pay attention to NLP, natural language processing; pay attention to some of these things.
00:38:46
Speaker
Interestingly, at UT, when I went back to grad school, I wanted a neural nets class, and they didn't offer it, because neural nets had died.
00:38:58
Speaker
So neural nets were no longer in favor. We couldn't figure out how to utilize them. It really came down to things like computing power, and not getting enough data into them to train them appropriately.
00:39:10
Speaker
There were all these kinds of reasons why people got away from neural nets. And so I said, okay, I guess I'm going to have to learn about those later. And sure enough, everybody did.
00:39:22
Speaker
Yeah, that's remarkable. I didn't think language either. I mean, as these things started to come up, language seemed so difficult and nuanced. I think that in machine learning you're dealing with pixels, you're dealing with intensities. I'm obviously not in that domain, but I know enough
00:39:40
Speaker
to understand that it should be simple compared to language. Language is crazy, because I could say something with two different intonations, or you flip a word and it completely makes the meaning opposite.
00:39:55
Speaker
Right. It's crazy. Right. And another piece, because I've been involved with some interesting research in the past. There are these MURIs, Multi-University Research Initiatives, and one was looking at scene understanding and common-sense understanding from scenes. And one argument that's going on right now, and this is also in the recent Fei-Fei Li article, is: can these language models actually understand the physical world? Can they understand the world that we really live in?
00:40:25
Speaker
And the research so far says definitely not. And so we'll need things like computer vision. We'll need embodied AI, where the AI is actually in the world, interacting with the world, in order to get there.
00:40:40
Speaker
So there's still a lot to be done. Obviously, we're at the early stages of this. But yeah, agreed. I'm still surprised how far we've been able to get with pretty much language alone in the past couple of years.
Future Technological Advancements and Impacts
00:40:55
Speaker
Yeah, that's for sure. I'm going to take that question in a slightly different direction. I'm going to hand you this magic phone, and you're going to call Josh Harguess when he's 75 years old.
00:41:08
Speaker
And he's got a long beard. What would you want to know from a 75-year-old Josh? Yeah, that's a good question. Okay.
00:41:19
Speaker
I would want to know, I mean, the same kind of thing: what's the next NVIDIA? Where should I put my money? I think that's an easy one. But I would also want to know,
00:41:30
Speaker
what's coming? What are the really interesting, nuanced things that are coming that I'm not thinking about right now? I missed the fact that it was going to be language that would make a huge leap, whereas I was very focused on the computer vision aspects of machine learning and AI. So what are the things that I'm missing? What's coming on the horizon that I'm not quite seeing?
00:41:58
Speaker
I think I would definitely ask, how has quantum affected your world? Because that's one that we haven't talked about yet, but it is coming.
00:42:10
Speaker
It's definitely in the background, because we don't have anything like we have with AI, where we can point to it and say, there's the use case; this is how it's going to affect your world; here's how you would use it in business.
00:42:23
Speaker
Quantum is in the background. It's going to sneak up at some point. And when it does, it's going to break all of our codes. It's going to break cryptography. It's going to break security. There are obviously going to be positive aspects of it as well: computation is going to go through the roof.
00:42:39
Speaker
Cost-wise, things are going to be reduced. There are all these promises, energy as well. So I would definitely want to know, where is that tipping point for quantum, and how has it affected your life?
00:42:53
Speaker
And then there's the combination of AI, AGI, and quantum kind of all colliding. So, what is your world like now, at 75? I'm not going to say exactly how old I am, but yeah, that's a few decades away.
00:43:08
Speaker
So is the world drastically different now based on these technologies, or is it pretty much the same, and these things just became integrated into your life, and we're all doing well?
Balancing Technology and Nature in Future Life
00:43:20
Speaker
Yeah. That's a question I ask most of my guests. And every time I ask it, I also ask it in my own head, about my own life. And obviously, we collect more information as we go along in life.
00:43:36
Speaker
And one of the beauties of having this podcast and interviewing folks is that my own world expands even bigger. So I asked myself that question recently, and I keep coming back to almost a reversal situation, in that I'd just love to go live in a mountain lake town and farm and drink wine and smoke cigars and eat prosciutto, and not worry about a bigger world happening.
Connecting with Josh and Fire Mountain Labs
00:44:10
Speaker
Yes. So, Josh, as we wind up today's podcast, tell our audience how they can find you and how they can get in touch with you.
00:44:22
Speaker
Yeah, absolutely. So LinkedIn; I've got a profile there. Harguess is my last name, H-A-R-G-U-E-S-S. That's the easiest way to get in contact with me. We post pretty regularly there from our Fire Mountain Labs account as well, about thought leadership and things that are coming.
00:44:38
Speaker
We've given some recent talks at BSides Las Vegas. We've got a couple of talks coming up: one at CAMLIS, which is a research conference around adversarial machine learning, and also a NATO symposium around AI security and assurance.
00:44:54
Speaker
So we've got some things cooking that will be out there. And I have a conference, an SPIE conference called Assuring and Securing AI-enabled Systems. The call for abstracts is open right now. So if you're interested in submitting or learning more, find our call for papers.
00:45:14
Speaker
But yeah, LinkedIn is pretty much the easiest way. That's great. I'm really glad we reconnected. Forty-five minutes just flew by, and I think we could easily talk for another three or four hours.
00:45:26
Speaker
Agreed. It'd be nice to have a little Shiner Bock, but that'll be for next time. Perfect. Well, thanks again, Josh, for your time, and we'll look forward to learning more about Fire Mountain Labs and yourself on LinkedIn.
00:45:39
Speaker
And with that, we'll talk to you later. Awesome. Thank you so much, Manny. Cheers. Cheers.