Introduction to the Podcast
00:00:12
Speaker
Hello Basement Programmers and welcome. This is the Basement Programmer Podcast. I'm your host, Todd Moore. The opinions expressed in the Basement Programmer Podcast are those of myself and any guests that I may have, and are not necessarily those of our employers or organizations we may be associated with.
00:00:29
Speaker
Feedback on the Basement Programmer podcast, including suggestions on things you'd like to hear about, can be emailed to me at todd at basementprogrammer.com. And I'm always on the lookout for people who would like to come on the podcast and talk about anything technology related. So drop me a line. And now for this episode.
Meet Kenneth Hendricks
00:00:49
Speaker
Hey, welcome to this month's episode of the podcast. In this month's episode, I'm talking to Kenneth Hendricks, Solutions Architect for Kalent. Kenneth also happens to be one of the folks that I go to for AI and data related questions. So he's an expert in my book. Welcome, Kenneth. It's great to be here. Thank you very much.
00:01:14
Speaker
So let's start with some introductions. Who are you? What do you do? What's your favorite color? No, don't answer that because it's probably a security question someplace. Yeah. That's interesting. A separate side note is that anytime I have those security questions, I always make up some crazy random answer and have to store that in my password manager because there are too many people that know some of those details of my life. Like, what's the first city that you were born in or whatever it is? I do the same thing.
00:01:44
Speaker
Yeah, so it's great to be here. My name is Kenneth Hendricks. I'm a senior customer solutions architect at Kalent. And what that means is that I'm focused on the pre-sale side. I'm probably not skilled enough to get my hands in and build anything, but I understand how to translate business problems into technical solutions and then equip our teams to be able to go and sell those things.
00:02:05
Speaker
I've been in and around technology for over two decades. I've done everything from bare metal to browser. I've spent some time in executive leadership. And today I'm very happy in the role that I'm in, being able to talk to multiple businesses about their problems and see the cool things that technology is able to do to solve these problems. Cool. So one of the things I've learned over the last year is in order to be involved with AI and data, you have to have an amazing beard. True or not?
00:02:33
Speaker
Well, I think that's left up to the beholder, I believe. But in my opinion, it is definitely a plus. All right. So let
The Potential of Generative AI
00:02:45
Speaker
me ask you this. What is it that excites you about AI?
00:02:52
Speaker
The potential that we have untapped, I think that AI is solving some really cool problems today. I think it's really elevated the customer experience and also freed us as humans up to focus more on value propositions and less on the nuts and bolts of things.
00:03:08
Speaker
But I think what excites me most is just the untapped potential. You know, I think that we're on the verge of probably a revolution and not an evolution with this generative AI thing. I think we've already seen tremendous value, but I think the power of traditional analytic AI plus this new generative AI is just going to blow us away in the immediate future. Cool.
00:03:28
Speaker
One thing that I always come back to, as a child who grew up in the 70s and 80s, is that the science fiction of that era is everyday real life today. Is there anything that hasn't come true that you wish had?
00:03:44
Speaker
A few things. I think it would be amazing if personal flight was a little more accessible if we didn't have to go through the pains of an airport, but I also understand that logistically that would be a nightmare just based off my own experiences in traffic in an automobile.
00:04:03
Speaker
Cool. So the topic of today's conversation really is generative AI hype versus reality. So let's start with a baseline. What is generative AI really? And how does it compare to boring old regular AI like C-3PO?
00:04:21
Speaker
Yeah, so it's a question that if you ask me again in a week, I'll probably give you a different answer because it seems to be an evolving topic. But I would really distill it down to say that generative AI is there to take some sort of a command and turn it into some content. And that content could be in the form of text. It could be in the form of images. It could be in the form of audio, et cetera, where your traditional AI, or what we call analytical AI,
00:04:47
Speaker
are the things that are using existing data and then making predictions about that data. And so from a use case perspective, generative AI is able to take a lengthy document and summarize it. It's creating a summary that didn't previously exist to where your analytical AI would be more like sales forecasting and taking maybe the last five years of your sales data and being able to predict spikes or trends in your data to show you what the future may hold.
00:05:14
Speaker
Cool. So, in theory, generative AI is just creating new things out of stuff it's seen before, correct?
00:05:25
Speaker
Yep. And interestingly, I think a lot of people in the creative industry are feeling very threatened by this, because rather than simply predicting what's going to happen based off of data, it's able to create content, where before I think we considered that the domain of highly skilled people who had the ability to write or to create art.
00:05:49
Speaker
So why do you think it is that generative AI has captured the attention of pretty much everybody on the planet in the last year?
AI Adoption and Practical Applications
00:05:58
Speaker
There are probably two key things from my perspective that have really increased the awareness. The first is that generative AI has been around for a while. It's not as brand new as most people think it is. It's just that it's only recently become involved in basically every conversation that everybody is having. And I think the reason is ChatGPT, because ChatGPT took the third iteration of their
00:06:24
Speaker
LLM and they made it accessible to the general public via the ChatGPT chatbot on the web. And so now you have these business leaders that probably weren't traditionally in the weeds of technology, and they really didn't have any hands-on experience with how the company was leveraging analytical AI to solve problems. They just knew that those problems existed and that some magic thing was solving them. ChatGPT gave these non-technologists the ability to go see something tangible
00:06:50
Speaker
where without any sort of effort whatsoever, they could go in and say, here's a paragraph, summarize it, or here's an email, pull out for me the flight number and the flight time for this particular flight. And they're able to actually get value with very little effort out of this thing. And I think the light bulb just went on for everybody, and they understood that if they could get some value this quickly and easily, they were probably just barely scratching the surface. And I think that's what really increased the awareness of this.
00:07:19
Speaker
Now, at the start of that, you mentioned LLM. Can you just give a brief, what is an LLM? Sure. Yeah, an LLM stands for Large Language Model. And in general, what the value is behind these is that these companies like OpenAI have taken these LLMs and they have trained them on massive corpuses of data. They've ingested information from the internet. They've read books. They've done all the things that they can to put information into it.
00:07:48
Speaker
And what they're doing is teaching this LLM how words are associated with each other so that it's able to understand a prompt and then formulate a good response back. It costs millions of dollars and takes an awful lot of GPU compute time to train these. And so the value proposition is that they've already gone through that exercise of spending all that money and all that time to create them, and then you can drop in and start to leverage them. You can use OpenAI's APIs to integrate into your applications and your workflows.
00:08:16
Speaker
Amazon has similar capability with Amazon Bedrock. They've got several different LLMs available that you can use, and you can drop into different ones depending on what your particular use case is. And you're not having to spend any time or any money training these things. They're already ready to have conversations from the start. Cool. Now, I know in some of the conversations I've had, it seems like a lot of customers say they want to do generative AI,
00:08:46
Speaker
but they aren't really sure what doing generative AI really is or what it means. What are your thoughts there?
00:08:54
Speaker
I think a very common thing that we see is people thinking that generative AI is ChatGPT. And so they associate chatbots or intelligent digital assistants with generative AI. That's typically the use case they think exists: the ability to build some sort of a chatbot, either internal or external facing, that takes some sort of information about the company and has intelligent conversations about it. It's a very common use case.
00:09:22
Speaker
I think a misconception is that some people tend to think that generative AI is replacing analytical AI, when in reality it's just an evolution from it. Analytical AI has been around for a long time and it's here to stay. I mentioned earlier an example of forecasting sales. Those are things that generative AI is not designed to do.
00:09:39
Speaker
It's designed to understand words and create new words, to understand images and create new images, and things like this. It's not understanding data and making predictions. And so there have been a number of cases where customers come to us with business cases that, when you actually double-click into them and take a look, they're asking you to solve a problem with
00:09:58
Speaker
analytical AI, not generative AI. And the exciting thing about that is that analytical AI has been around for so long that it's tried and proven. And so these are easy problems to solve. They're not quite as bleeding edge as generative AI, and there's not quite the amount of uncertainty around the quality of outputs that you would get from gen AI type models.
00:10:24
Speaker
And that kind of leads into my next question: generative AI seems like it's almost being seen as a Swiss Army knife, a one-size-fits-all solution, the AI that rules them all. What are some of those situations where it's not? You mentioned some, like sales forecasting, but where else is it really not the right tool to solve the problem?
AI in Banking and Security
00:10:53
Speaker
Yeah, for sure. And I think it can be part of a Swiss Army knife, but it is not a Swiss Army knife. There are tools like Amazon Bedrock Agents and LangChain that allow you to stitch together several of these models so that, from a customer experience perspective, these things are all in the background and it's one process.
00:11:17
Speaker
You know, if you think about an example where a user logs into their bank account, for example, and they want to know things like, hey, how do I apply for a new checking account? They can ask a chatbot this question. Well, this chatbot uses an LLM to be able to understand the question.
00:11:34
Speaker
and then it uses some vector store or some other LLM to be able to answer the questions. So the bank probably has some documentation index that talks about the process of opening a checking account, and it's able to go retrieve that information and not just spit back a link and say, here, click this, good luck, but actually summarize the steps for the customer in a more conversational manner. And then when the customer says things like, what's the trend of my savings account balance over the last three years,
00:12:00
Speaker
That's where agents or LangChain can come in and say, hey, I understand this request, and it has nothing to do with a chatbot or generative AI, but it does have to do with a SQL query. Let's go query the accounts table. Let's get their balances. Let's aggregate it by month. Let's figure out what the trend looks like. And then let's spit the answer back out to the user. So there's an example of where you're using both generative AI and analytical AI models together in one seamless experience for the end user.
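The routing pattern Kenneth describes here can be sketched in a few lines of Python. This is a toy illustration only, assuming nothing about the real Bedrock Agents or LangChain APIs: the keyword-based `classify` function stands in for an LLM-driven intent classifier, the canned document answer stands in for a retrieval step, and an in-memory SQLite table stands in for the bank's accounts database.

```python
import sqlite3

def classify(question: str) -> str:
    # Toy intent classifier: a real system would use an LLM or intent
    # model to decide between the generative and analytical paths.
    analytical_keywords = ("trend", "balance", "last", "history")
    if any(word in question.lower() for word in analytical_keywords):
        return "analytical"
    return "generative"

def answer_from_docs(question: str) -> str:
    # Stand-in for the generative path: retrieve indexed documents
    # and summarize them conversationally for the user.
    return "To open a checking account, visit a branch or apply online."

def answer_from_sql(conn: sqlite3.Connection) -> str:
    # Stand-in for the analytical path: aggregate balances by month
    # so a trend can be reported back to the user.
    rows = conn.execute(
        "SELECT month, AVG(balance) FROM balances GROUP BY month ORDER BY month"
    ).fetchall()
    return "; ".join(f"{month}: {avg:.2f}" for month, avg in rows)

def handle(question: str, conn: sqlite3.Connection) -> str:
    # One entry point; the context switching is invisible to the user.
    if classify(question) == "analytical":
        return answer_from_sql(conn)
    return answer_from_docs(question)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE balances (month TEXT, balance REAL)")
    conn.executemany(
        "INSERT INTO balances VALUES (?, ?)",
        [("2023-01", 1000.0), ("2023-01", 1200.0), ("2023-02", 1500.0)],
    )
    print(handle("How do I apply for a new checking account?", conn))
    print(handle("What's the trend of my savings balance?", conn))
```

The point of the sketch is the single `handle` entry point: the caller never sees whether a generative or analytical tool answered the question.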
00:12:26
Speaker
The customer has no idea that there's this context switching going on in the background using analytical AI versus generative AI, but to them, it's a very enhanced experience that's very personalized and very personable. Cool. What are some of those other areas where customers say, oh, this is generative AI, but it really isn't?
00:12:52
Speaker
No, you need to go someplace else. Any of the other use cases you can think about that really should stay in that traditional AI sort of area.
00:13:03
Speaker
Yeah, there was a customer we talked to that uses security telemetry data to predict incidents and things like this. And their platform was collecting telemetry on incidents that were labeled P1 through P5. P5 is the lowest severity. It's something that may be suspect, but probably not a problem. More than likely, it's benign, but they weren't entirely sure.
00:13:27
Speaker
And so we talked to them and they were wanting to use generative AI to essentially build a security analyst, a digital security analyst to go through and analyze these security events and to determine which ones were in fact problems and which ones were not problems. And so when we dug into that use case, what we discovered is that 95% of the use case could actually be achieved through analytical AI and only 5% of it was necessary for generative AI.
00:13:52
Speaker
And so just to go into that solution a little bit more, what it looks like is as these P5 security incidents are streaming across, you would use an analytical AI algorithm to track the frequency of those that are coming across and using those plus historical data to detect if and when there may actually be a problem.
00:14:12
Speaker
Once a problem is discovered, you could use technologies like LangChain that use LLMs to go through and fan out to different systems to gather more information. And so in this case, we would run SQL queries against several downstream source systems to gather more of that telemetry information if it wasn't already collected.
00:14:30
Speaker
And once we had a holistic picture of the incident and all of the logs from the related systems, then we would use generative AI to go read through that information and summarize it and create a ticket in their help desk to then turn it over to a security analyst to go look at. Once they had some confidence in this thing's ability to accurately detect these and accurately summarize them, the next step then would be to go automate some of the actions that would take place as a result of it. And so now they don't just have to
00:14:56
Speaker
ignore these P5 security incidents and just let them roll off. They have the ability now to identify the half a percent of those that are actually suspect and probably aren't benign and to start digging into those. And when their analyst gets online, they don't have to go through and do all the research. They're able to read a summary, look at all the attached logs, and they have all the information they need to make a human decision. Those human decisions then get fed back into a model and can ultimately train it to determine which ones are in fact a problem and which ones are not.
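The analytical half of this solution, tracking the frequency of incoming P5 incidents against historical rates, can be sketched as a simple sliding-window detector. This is a hypothetical stand-in for whatever model the team actually deployed; the window length, threshold multiplier, and `SpikeDetector` name are all illustrative choices, not part of the described system.

```python
from collections import deque

class SpikeDetector:
    """Flag when incident frequency in a sliding window exceeds a
    multiple of the historical baseline rate. A toy stand-in for the
    analytical model that watches P5 incidents stream across."""

    def __init__(self, window: float = 60.0, threshold: float = 3.0):
        self.window = window        # window length in seconds
        self.threshold = threshold  # spike = threshold x baseline count
        self.timestamps = deque()   # incident times inside the window
        self.total = 0              # incidents seen overall
        self.first_ts = None

    def observe(self, ts: float) -> bool:
        """Record one incident at time ts; return True if it looks like a spike."""
        self.total += 1
        if self.first_ts is None:
            self.first_ts = ts
        self.timestamps.append(ts)
        # Evict incidents that have aged out of the window.
        while self.timestamps and self.timestamps[0] <= ts - self.window:
            self.timestamps.popleft()
        # Expected incidents per window, from the long-run historical rate.
        elapsed = max(ts - self.first_ts, self.window)
        baseline = self.total / elapsed * self.window
        return len(self.timestamps) > self.threshold * baseline
```

When a burst pushes the in-window count well past the long-run rate, `observe` returns True, and that is the point where the LangChain-style fan-out and generative summarization described above would kick in.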
00:15:25
Speaker
That sounds really cool. Definitely a huge time saver. I can't imagine diving through all those logs for millions of failed login attempts or whatever it may be.
00:15:39
Speaker
Yeah, for sure. And honestly, that's what charges me up the most in my role is being able to talk about use cases like that. Because if you just zoom out a bit and just think about the business implications there. So at the current state, they couldn't even keep up with these, which means that now, because of some very simple technology to deploy,
00:15:57
Speaker
they've got the ability to keep up with these alerts that previously weren't being looked at. This means that if they want to grow their business, now they've got a value proposition for their customers: we look at all incidents, even P5 incidents; flood us and we'll still determine what's good and what's bad. And they can scale their company as they grow more customers. They're not worried about having to scale their analysts with people that are spending their time, just to your point, looking through all of these logs at what is more than likely not even relevant.
00:16:23
Speaker
And so it gives higher job satisfaction for the employees that they have, makes it easier for them to scale, and it increases their value proposition to their customers. Cool.
Generative AI in Energy Sector
00:16:36
Speaker
So let's talk about some of the spots we're seeing that Gen AI is really the thing that fills these really great gaps. Any cool examples there that we can dive into?
00:16:50
Speaker
Yeah, I think digital assistants are really cool. We see those typically in two classifications. You've got internally facing and externally facing. Internally facing seems to be much more common than externally facing. But whatever they are, the idea is that you would take the power of the existing LLM to have conversations and then you would augment its knowledge with your own domain information.
00:17:15
Speaker
And oftentimes that looks like an internal knowledge base, for example, so that instead of scanning through your wiki and searching for keywords and finding links and then scanning through that document to find the sentence or the paragraph that may answer your question and then reading the paragraph and then creating the answer on your own,
00:17:35
Speaker
These digital assistants have the ability to index that information. When you ask a question, it's able to go back, review all the documents that it has indexed and synthesize an answer for the user. As an example, a company that we had talked to was in the private energy business. They're generating private energy for large organizations that have warehouses or whatever it is that would be out in the middle of nowhere and maybe not on the typical grid inside of a big city.
00:18:03
Speaker
They have these massive power plants that are generating electricity, and they have technicians on site that are managing these power plants. And anytime an engine or something has a failure, there's a warning code, there's an error message. And a lot of these are mechanical systems; they're not like IoT-connected devices, and they're not super intelligent from a user perspective. And so you get an alert or you get an error, and you have to go through a manual and find what that alert or that error means. And then you have to go through some troubleshooting process.
00:18:32
Speaker
It's a lot of finding things in a large PDF, right? And so what they're doing now is they've ingested all these PDFs across all the different power plants. And when a particular error code comes up, the technician is able to say, hey, I'm working on this type of an engine at this particular plant, I'm getting an error code with this message across the screen, please tell me some things that I ought to try in order to restart this motor or
00:18:54
Speaker
be able to troubleshoot and clear this particular message. They're able to get a really fast response from this without having to go scan through tons and tons of PDFs to find the right thing. It's making it faster for people to ramp up on supporting these things without always needing a specialist that knows everything about all of the machines. That sounds really cool. It's like avoiding RTFM at massive scale.
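The retrieval step behind an assistant like this can be sketched very crudely: index the manuals, score each one against the technician's question, and return both the best passage and its source document. This is a toy sketch assuming naive word-overlap scoring; a production assistant would use embeddings and a vector store, and the manual names and error codes here are made up for illustration.

```python
import re
from collections import Counter

# Toy index mapping document name to its text; a stand-in for a real
# vector store built over the ingested plant-manual PDFs.
MANUALS = {
    "engine-a-manual.pdf": "Error E42 indicates low oil pressure. "
                           "Check the oil level and restart the motor.",
    "engine-b-manual.pdf": "Error E7 indicates a coolant fault. "
                           "Inspect the coolant loop before restart.",
}

def tokenize(text: str) -> Counter:
    # Lowercase word counts; Counter intersection gives overlap scores.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> tuple:
    """Return (best matching passage, source document) by word overlap."""
    q = tokenize(question)
    name, text = max(
        MANUALS.items(),
        key=lambda kv: sum((q & tokenize(kv[1])).values()),
    )
    return text, name
```

In the full pipeline the retrieved passage would then be handed to an LLM to be summarized conversationally, with the source document name returned alongside the answer so the technician can vet it.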
00:19:20
Speaker
That's exactly what it's doing, right? And there again, it's automating the task of retrieving and summarizing the information so that the human can spend more time doing valuable things such as acting upon that information or validating that the information is the right information to be using. Sure.
Risks and Ethical Concerns of AI
00:19:41
Speaker
But some of that has risks involved too, doesn't it, with like what happens when things go wrong?
00:19:50
Speaker
Yeah, absolutely. And that's exactly why I mentioned earlier that the internal use cases are more common than the external use cases. And that's because generally the risk is lower in allowing your internal employees to interact with a digital assistant or chatbot than it is your external people.
00:20:07
Speaker
So either way, whether you're doing this for internal or external use cases, it's very important to monitor quality, and there are a number of ways that you can do that. It's important to understand that these LLMs appear to be magic, and they might actually be just that, but they're not perfect. And because they're not perfect, you have to be very careful. They could give the wrong advice to your customers. They could be just flat out incorrect in their response or their answer.
00:20:33
Speaker
And so it's important that you go through typical testing exercises when building these to ensure that they understand your corpus of information so that they're able to make good responses. But additionally, it's important to monitor those responses as they come back. Something that we often do is if we go back to the knowledge base
00:20:54
Speaker
use case, we would not only respond to the user with here's what you ought to do, but we would also say here's the document that we got this information from. So the user has the ability to go back and vet that answer and make sure that it's legitimate. And with that, you can collect feedback: was this good or was it not? And then over time you can check that feedback trend and see if you need to do any tweaking or tuning to increase the quality of the output of the particular model.
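The feedback loop Kenneth describes can be sketched as a small monitor that accumulates thumbs-up/thumbs-down signals and flags when the recent approval rate sags. This is a hypothetical sketch, not a real product feature; the window size, approval floor, and `FeedbackMonitor` name are all illustrative assumptions.

```python
from collections import deque

class FeedbackMonitor:
    """Track was-this-helpful feedback on assistant answers and flag
    when the recent approval rate drops below a floor, signaling that
    the model may need tweaking or tuning."""

    def __init__(self, window: int = 100, floor: float = 0.8):
        # Keep only the most recent `window` ratings.
        self.ratings = deque(maxlen=window)
        self.floor = floor

    def record(self, helpful: bool) -> None:
        self.ratings.append(helpful)

    def needs_tuning(self) -> bool:
        # True when the recent approval rate falls below the floor.
        if not self.ratings:
            return False
        rate = sum(self.ratings) / len(self.ratings)
        return rate < self.floor
```

Wiring `record` to the good/not-good buttons next to each answer gives exactly the trend check described above: most of the time nothing happens, and a sustained dip in approvals becomes the trigger to revisit the model or its document index.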
00:21:22
Speaker
I mean, in general, I think AI has a lot of edge cases and areas of legitimate concern. Can you talk a little bit about some of those areas?
00:21:37
Speaker
Yeah, I think so. There are a few things to be wary of. There are a lot of copyright-type concerns going around right now, and there are some headlines that you should stay plugged into. Quite a few writers have found their books were used to train LLMs, and they're claiming copyright infringement for these things understanding their books and being able to
00:21:58
Speaker
recite them or to summarize them. So I think we're going to see a lot more cases like that pop up in the courts and it's going to be very interesting to see where that falls. When it comes to copyrights, if you're using generative AI, you have to be concerned about two different perspectives. The first is where you're asking an LLM to generate new content that may infringe on somebody else's existing copyright.
00:22:22
Speaker
The second is where your proprietary information may be fed back into a model and used to train it. As an example, if you were to go to ChatGPT in a web browser and paste in your source code and say, help me troubleshoot this Python, your source code has now gone back to OpenAI, and they can use it to further train their model.
00:22:42
Speaker
If you're using private instances of models such as the way that Amazon Bedrock hosts them, your information stays private and it's never used to go back and train that core LLM. And that's a very key consideration to make to ensure that your employees aren't inadvertently doing that.
00:22:57
Speaker
The next thing is people using these things for bad. I mean, obviously with great power, there's going to be bad actors that are interested in using the technology too. I believe we're already seeing and probably are going to see an explosion of the use of LLMs for phishing type emails.
00:23:16
Speaker
You know, today it's pretty clear when I get a phishing text or phishing email, because it's broken English, it just doesn't make any sense, or maybe the sender email address just looks absolutely terrible. Things like this are dead giveaways. I think some of those dead giveaways are going to go away, and it's going to be harder for us to discern what is an attack versus what is legitimate. So those are some things to keep an eye on.
00:23:41
Speaker
Another thing is that oftentimes the new shiny thing becomes the center of solutions. And so generative AI is all the hype today. And there's a lot of people that want to build solutions around generative AI. And I would strongly encourage people to focus more on the value proposition to the customer and then using generative AI to plug into that.
Balancing AI Potential with Real World Needs
00:24:01
Speaker
And as an example earlier, I had talked about the user experience where they're communicating with a chat bot and asking how to open a checking account and then asking what their balance is. Generative AI, if it was the center of all of that, then you would only be focused on what you could do for your customer with generative AI.
00:24:17
Speaker
But instead, if you focus on the customer experience as an example, a one-stop shop for being able to ask questions like, how do I do XYZ? What is the health of my account? What does this look like in my history? Now, you're able to use generative AI as one of many different tools, again, back to that Swiss Army Knife type idea, and solve these problems behind the scenes and transparently for the customer so that they're getting a really good experience that's not limited by what gen AI can and can't do.
00:24:46
Speaker
I'm just thinking about your example there of asking an LLM your bank balance, and hallucinations, and the funny effects that could come from that. Yes, I really do have $10 million. Better spend it quick before it disappears. Yes, certainly. So what do we do when the LLMs get bored and decide to entertain themselves at our expense?
00:25:16
Speaker
sit back and enjoy the show, I suppose. Yeah, I suppose that could be it. All the science fiction movies that deal with AI taking over, that could get dangerous.
00:25:31
Speaker
It could, and I know there's a lot of fears and concerns, and I'll express my opinion. My opinion is that computers are never going to be smarter than humans. I'm usually met with resistance on that. The reason why I say that is that
00:25:49
Speaker
a computer is probably not going to know anything that a human does not, although a single human could never know everything that a computer knows. So I think it's important that we always compare a computer to humans as a whole and not as an individual, because I think that's the right comparison. Can these things take over the world? I mean, sure.
00:26:11
Speaker
Could we also wake up tomorrow and pull a cable out of our neck and discover that we were in The Matrix all along? Absolutely. There's a realm of possibilities. But my bet is that they are not going to take over the world, although they may be used by bad actors to take over the world. I think there's always going to be a human at the heart of evil, if you will.
00:26:39
Speaker
Well, I think, yeah, as we've kind of been alluding to, I think that with any kind of technology, there's the great potential to misuse it and do really nasty things. And AI is no different. So.
Future of AI and Personal Experiences
00:26:58
Speaker
Where do you think, if you were to put a guess out in the next, what do you think we're going to be talking about this time next year when it comes to AI and generative AI? Yeah, I think that what we're going to see is a lot more of these models melding together, if you will. Because today, if you want to generate text, you go to one model. If you want to generate images, you go to another model.
00:27:25
Speaker
Typically, when you're trying to generate images, it's not as straightforward as just describing what you're looking for. You've got to be fairly specific and understand some of the required input parameters and things like this. I think we're going to see a blending such that instead of trying to figure out which one of the types of models you need for the different modalities of output, it's going to be more of a
00:27:46
Speaker
you'll be able to describe what you're looking for, and then the system can determine which model, and which types of models, to use behind it. Today that can be done, but it's something that's more brute force or manual, putting things together using technologies like Bedrock Agents or LangChain. But I think that's going to be a lot more transparent in the future. I'm also
00:28:06
Speaker
a little excited by OpenAI and what they're doing with this marketplace and enabling people to be able to easily and quickly build generative AI applications. I think that this concept of a platform as a service to build products that leverage generative AI models, I think this is going to be very exciting to watch and see what unfolds over the next year. Cool. So do you have any little pet projects that you're doing with generative AI?
00:28:37
Speaker
Yeah, I wouldn't call it a pet project so much as I'm absolutely using it in my everyday life.
00:28:44
Speaker
You know, I create a lot of presentations. And in those presentations where I'm finding value is the ability to quickly generate an image that I can't find on the internet if I want to add an illustration to a slide. There are some tools available that can take descriptions and turn those into architecture diagrams or workflow diagrams, for example. And so you're not spending so much time painting. You're just describing what you're looking for and letting the system generate those types of visual artifacts.
00:29:13
Speaker
I'm using LLMs to summarize content. I'm using LLMs to write executive summaries and things like this. And it doesn't always get it right out of the gate, but what I've found it very powerful in being able to do is to wordsmith things. It's very eloquent in the way that things are worded and sometimes gives me a really good launching point and I can go in and
00:29:38
Speaker
make some slight edits and I'm off to the races. And while it's not saving me tons and tons of time personally, it's able to shave off minutes and distractions.
00:29:47
Speaker
And I found it extremely useful. My son, who's 12, is using generative AI to create images also for his YouTube channel's profile. He came to me one day and said, here, put this on my profile for me. I want this to be the image. And I was like, man, this looks incredible. How did you make it? And he says, I don't know, some automatic thing. I described what I wanted, and it created the image for me. That's fantastic. So maybe me and my son both are somewhat using generative AI.
00:30:16
Speaker
That's cool. Maybe I could spruce up the graphics of my YouTube channel doing that. Now, one of the things you do quite commonly is briefings for CTOs and business people. How do those usually go?
00:30:37
Speaker
Yeah, I've done several of those events and they've all been just a little bit different in what occurred. There are some audiences where they're all really fresh and they're just not sure of
00:30:49
Speaker
what generative AI can do and how it fits with their business. And there's been other events where customers are already experimenting and they just need help taking things into production and things like that. So I've talked to all sorts of audiences about this. I think that the common theme that I'm seeing is that businesses understand that this is not likely a shiny object that's going to diminish in the next few months, that this is something that is on the verge of, like I mentioned earlier, revolution and not evolution.
00:31:18
Speaker
And I think that they're excited at being able to enhance their value proposition, but they're also very worried about the potential for disruption and innovation from competitors. And so they've got their ear to the ground a lot more than usual because they're worried about what's going to happen because this is all such uncharted territory that we just don't know.
00:31:40
Speaker
You know, there's the story of Chegg, which is an online tutoring service. They have sample tests and answers for students and things like this. And when some of these models hit the streets, there were students that were using chatbots as tutors.
00:31:57
Speaker
And their stock price tanked overnight. There's an example of this insane disruption that had a monumental impact on a business overnight, and there was no way they could have seen around that corner and seen it coming. I think stories like that are what are really keeping some of these C-level executives up at night, and rightly so. And because of that, most of our conversations are less about the technology and more about how it can actually help a business.
00:32:24
Speaker
How do we improve or increase our value proposition? How do we make what we're doing today more effective or more efficient so that we can scale better? Those are the things that are coming up in conversations these days. Very interesting. So any final thoughts on generative AI that we haven't talked about that people should be aware of?
Encouraging AI Experimentation
00:32:52
Speaker
Mess around with it. You know, this stuff is really accessible. There are the DALL-E models for creating images. And if you've got access to AWS, you can go into the Bedrock console, where there's a playground to mess around with image creation and text creation. It's fun to put your hands on it and see how these things go. If you're wanting to use generative AI,
00:33:19
Speaker
I would strongly encourage you to build your solutions so that you can change these kinds of models behind the scenes. Because today, if you look at the ecosystem of generative AI tools, you'll see that a lot of them are open source or free to use. And I think we're going to start seeing a lot of these technologies get bought up by bigger companies. And when those purchases happen, sometimes the direction the company is taking with its product can shift.
00:33:48
Speaker
And so when those disruptions occur, or whenever somebody else comes out with a new LLM that's bigger and better than the last one, these are things you want to be able to shift to quickly. So I would make sure that your architectures are ready to embrace that type of technological disruption, because it is absolutely going to happen.
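The swappable-model architecture Kenneth describes here can be sketched as a thin interface that application code depends on, so the provider behind it can be replaced. This is only an illustration: `EchoModel` and `ShoutModel` are made-up stand-ins, not real provider SDK calls.

```python
from abc import ABC, abstractmethod


class TextModel(ABC):
    """Common interface so the backing LLM can be swapped out."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoModel(TextModel):
    # Placeholder for one provider (imagine a Bedrock-hosted model).
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class ShoutModel(TextModel):
    # Placeholder for a newer, "bigger and better" model.
    def complete(self, prompt: str) -> str:
        return prompt.upper()


def summarize(model: TextModel, text: str) -> str:
    # Application code talks only to the interface, never to a vendor SDK,
    # so swapping providers is a one-line change at the call site.
    return model.complete(f"Summarize: {text}")


print(summarize(EchoModel(), "quarterly report"))
print(summarize(ShoutModel(), "quarterly report"))
```

Swapping models is then just passing a different `TextModel` implementation; none of the surrounding code changes.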
00:34:12
Speaker
Cool. And I will ask you one more thing. The bass guitar. How often do you get to play? Yeah, I play it weekly in town and that's not nearly as often as I used to when I lived in the Dallas area. We played all over the Dallas and Fort Worth area. So not as often anymore.
00:34:33
Speaker
I often tell people, similar to my video game habit, I do more collecting than I do playing. It's there when I need it. The podcast is audio only, but Kenneth always has a six-string bass in the background. Actually, the first time I saw that was in my interview, because Kenneth was the one who decided to hire me.
00:34:59
Speaker
So, yeah. It was a good choice. Blame him for everything that goes wrong. Well, it's been an absolute pleasure talking to you about generative AI and artificial intelligence in general. Have a great rest of your evening, and hopefully we'll have you on the podcast again sometime. Yeah, thanks a lot. I really appreciate the invite, and I enjoyed our conversation today.
00:35:26
Speaker
Thank you. All right, take care.