
/AI at scale: the hidden costs of the cloud

The Forward Slash Podcast

What does it really take to run AI at scale? In this episode, cloud architect James McShane joins James to explore how Kubernetes became the backbone of modern tech, why the cloud isn’t always the cheapest answer, and what most people miss about keeping AI systems healthy after launch. From GPU scarcity to “vibe coding” gone wrong, this conversation pulls back the curtain on the choices shaping the future of software.

Transcript

Introduction and Guest Overview

00:00:00
Speaker
you have to scale inference, and you have to observe inference because the lifecycle doesn't stop when you've delivered the software to production.
00:00:28
Speaker
Welcome to The Forward Slash Podcast, where we lean into the future of IT by inviting fellow thought leaders, innovators, and problem solvers to slash through its complexity. Today's guest is James McShane, a cloud native architect at Portworx by Pure Storage.
00:00:41
Speaker
James has spent the last decade deeply involved in cloud ecosystems, shaping solutions and guiding organizations through their DevOps journeys. Though he'll be the first to tell you that he got into DevOps by accident.
00:00:54
Speaker
Originally from South Bend, Indiana, James spent 10 years living in Minneapolis before settling in Cincinnati, where he now lives with his wife and three kids.

Innovative Teaching Methods in AI

00:01:03
Speaker
His professional path is as unique as it is inspiring. He began in math and music.
00:01:08
Speaker
It seems to be a common pattern among us software engineers. And during graduate school at Indiana University, he even started a small business focused on flip teaching. We're going to hear a bit more about that. That sounds actually pretty cool.
00:01:21
Speaker
It was aimed at large-scale math classes. The entrepreneurial spark ultimately steered him toward software development and eventually cloud architecture. Welcome to the Forward Slash, James. Thanks, James. I'm really happy to be here, and I'd love to talk about this pathway through the cloud ecosystem and how we all got to the point we're at right now.
00:01:41
Speaker
Awesome. Yeah, totally. So first I've got to hear about this; I've never heard the term flip teaching. What is that? So flip teaching was emerging in the late 2000s and early 2010s as a model where students engage with material before encountering the professor, who then uses their ability to diagnose, train, and guide students from that initial encounter with the material to a deeper understanding. So flip teaching is: let's start here, give you the outline, give you the basics,
00:02:14
Speaker
and then use that professor's deep knowledge to take you to the next level, identify places of weakness, and turn them

Kubernetes and Cloud Evolution

00:02:21
Speaker
into places of strength. So it started as a tutoring platform and then turned into a place where we could enable students to attempt a different mechanism of learning, rather than having the professor do the baseline.
00:02:35
Speaker
We could let the professor do the deep exploration and use that as a mechanism to get better outcomes in class. Cool. It might dovetail a little bit into the modern-day AI-assisted classroom. I'm seeing some really cool tools popping up in classrooms now where we can focus on individualizing the attention that students get, seeing where they're struggling, that sort of thing.
00:03:01
Speaker
Yeah, this is a linear pathway, right? You start with models like this, and then you turn that into something that happens consistently. The education ecosystem is something I've stepped away from, but what's going on today is still attempting the same thing; the goal is still the same, right?
00:03:19
Speaker
High-quality student outcomes driven by hands-on experience and guidance in the places where they're falling off the track. Very cool, yeah. All right, but we're not here to talk about education today. That's not necessarily your big thing. I know you as the Kubernetes guy. That's kind of how my brain compartmentalizes you. Is that a fair assessment?
00:03:43
Speaker
It's a fair assessment. It's where I've been living for the last 10 years. I had an opportunity to dive in with an organization starting in 2016 with Kubernetes. I was working right at the point where it was turning from a product that was really isolated in the cloud. You know, Google had produced it, and Red Hat had just switched over to using Kubernetes.
00:04:07
Speaker
And so I got to be early on the train and learn through some interesting transition times with a number of products developing for this, and then be on the train as it exploded. The early 2020 COVID period really felt like the time when it transitioned from a side project that many organizations were trying into the baseline way that organizations deliver infrastructure. So I was really grateful to have the opportunity to start in it so early, learn some of the lessons the broader community did, and then see it grow to where it is today. It's been really exciting.
00:04:43
Speaker
Now the COVID thing is interesting. A lot of things happened during COVID. I don't know if it was coincidence or causation, right? What do you think? Did the isolation we felt contribute to the cloud taking root better at that time?
00:04:59
Speaker
You know, having seen a number of different organizations, right, I was a consultant for around six years, so I got to see a number of different organizations all around that same time period.
00:05:11
Speaker
And keeping in touch with folks that I've worked with over my entire career, COVID forced acceleration and delivery on a specific subset of organizations that may not have had those same pressures beforehand, right?
00:05:27
Speaker
Healthcare, health insurance, responsiveness for drive-through and retail. That responsiveness to customer needs that were evolving really quickly, I think, drove organizations to cloud, drove organizations to cloud native.
00:05:42
Speaker
And then in the background, I think there was an undercurrent of open source and a developer-first sort of mentality. All those things came together to really mature the Kubernetes ecosystem very quickly.
00:05:54
Speaker
And I'm glad you say Kubernetes the way I do. I've heard people pronounce it differently, but

Impact of Broadcom's VMware Acquisition

00:05:58
Speaker
you know, you're the expert. At least in my brain you are. And you say it the way I do. I do know that it's not the way Flavor Flav said it; I believe he pronounced it differently.
00:06:09
Speaker
What is Kubernetes? Well, you know what? I think he's the foremost expert in our field, so I think we'll just have to follow him and go with his pronunciation. Yeah. I don't know if you guys have seen that. He did a, was it Cameo? Was that the platform? Yeah, he had a Cameo where somebody asked him to explain Kubernetes. And it was, yeah, it was funny.
00:06:31
Speaker
Anyway, he is the foremost expert, right? Wasn't he the one who saved Red Lobster? So, you know, we are indebted to him. He did, or maybe he didn't, save Red Lobster.
00:06:44
Speaker
So the cloud thing, obviously, as you were saying, it's kind of taken root, and Kubernetes isn't just a toy, right? I mean, it's driving humongous workloads. It's running entire data centers, all of the awesome things these days.
00:07:01
Speaker
So we're a little more mature these days, but that brings with it some different maturity models and different ways of using the cloud. So let's talk about how the way we're using the cloud is evolving right now.
00:07:19
Speaker
So I think there are two key directions we can discuss here. The first you mentioned is Kubernetes driving the data center. I think the disruption of Broadcom purchasing VMware has caused organizations to rethink their baseline data center virtualization strategy, and Kubernetes has opened up a pathway there that is aligned to the second wave we see, which is the evolution of the AI ecosystem alongside the Kubernetes ecosystem, right?
00:07:52
Speaker
And those two things are happening somewhat differently, along the same timelines. With the Broadcom disruption, although the purchase happened in 2023, we see many organizations reevaluating their virtualization strategy right now.
00:08:08
Speaker
And then AI is also in this emerging state where folks are trying to determine how they're going to operationalize it. Is there a maturity level in the cloud versus allocating the cost appropriately for it, right? If I'm doing experiments, I don't want to rack up seven-figure, eight-figure bills with those experiments.
00:08:30
Speaker
But I also need to be able to get AI experiments off the ground effectively. How do I actually make this operational in my organization? And so these two things are coming together, right? Kubernetes runs the data center now.
00:08:44
Speaker
Kubernetes runs these AI workloads, right? All these neoclouds that are coming up are offering Kubernetes platforms as their baseline substrate. And so these things are happening at the same time. And it's like,
00:08:56
Speaker
how do I run these workloads effectively? How do I place them in the cloud versus on-prem? The discussions are all happening at once, and I think it puts Kubernetes at a really interesting focal point right now. That's why I love talking to you, James. You use words like substrate. You know what I mean? I just feel smarter talking to you.
00:09:16
Speaker
I do want to back up a little bit and ask about the Broadcom thing. Some folks may not be familiar with why that was so disruptive. It kind of shook everybody to their core, almost. It was a big deal. Why was that so disruptive?
00:09:34
Speaker
It was a big deal because, first of all, VMware ruled the ecosystem, right? They had a 96, 97% market share for virtualization across the Fortune 2000, across the United States, and across the world.
00:09:48
Speaker
Everyone had VMware; it was the default. And the key element that changed is that Broadcom's aggressive pricing strategies came due at the next renewal period.
00:10:01
Speaker
And the key thing that happened was that Broadcom didn't just change a number of prices all at once. They said, you can't buy these smaller subsets of our product. You have to buy our full product portfolio, VCF and the whole baseline there.
00:10:17
Speaker
Many organizations were using VMware just for baseline virtualization, ESX, vCenter, and things like that. And so, being forced to go from that subset to the full product catalog, yes, some organizations are like, yeah, I love the whole catalog. I'm already invested in Tanzu. I'm already doing a lot of these things.
00:10:38
Speaker
And so this makes sense to me.

Infrastructure Evolution: Cloud to On-Premises

00:10:41
Speaker
But a larger percentage, I think, were just using that baseline capability. And so they see this as a 50%, 2x, 3x, even 5x uplift on their current pricing.
00:10:53
Speaker
And so now there's this business risk that's been injected into how your business is being run. This is the foundational aspect, right? We went through a transition in the 2000s into the early 2010s from physical infrastructure to virtual infrastructure, and that was driven by the maturing of the VMware platform and everything it could do. So now everything's on virtual, and the ecosystem is there, especially when you look at specific verticals like healthcare, where your products get delivered to you for VMware in this format.
00:11:36
Speaker
And so now organizations are looking at this price hike and some technical things that are changing. There's clear technical walling off happening in the Broadcom product portfolio in a way that it wasn't walled before the acquisition. And all these changes happen around the same time organizations renewed their contracts in 2023, and VMware is such a central part of most of these organizations' infrastructure.
00:12:07
Speaker
We see a lot of three-year, five-year renewals that happened right at the time of the acquisition. And that puts folks in this place in 2025 to say, okay, to effectively deliver new strategies,
00:12:21
Speaker
I have to begin now to be able to migrate by the end of 2026, early 2027. And there are a number of pathways, right? There are other traditional virtualization providers, there are the cloud providers offering VMware-like services.
00:12:37
Speaker
Even that is getting some risk injected into it. And then there's transitioning into a model of the future. And that's where I think we see containerization as the next transition: from physical to virtual to containerization on physical, right? I've seen a ton of organizations deliver Kubernetes platforms on virtual machines on physical hosts.
00:13:02
Speaker
And today, that's just not a necessary layer. We can get the best performance, the best density, the highest throughput from containerized applications. It's just a process on that bare metal, on that physical iron, and you're going to get the best results out of that and be able to manage a broader-scale platform.
00:13:22
Speaker
And the Broadcom transition, it's not that it's anything nefarious. It's more of them repositioning themselves and focusing on a subset of the market so they can support them better, right? That's kind of what they're doing, right?
00:13:39
Speaker
Yeah, they've stated that goal directly. They want to refocus. They said this in the partner ecosystem, they said this in the marketplace: they want to refocus on their top 50, top 100 customers and provide advanced, high-level services and capabilities for those top customers.
00:13:57
Speaker
But the marketplace had a layer of VMware across, not the top 50, not the top 100, but the top 5,000 organizations. So there's that long tail. And let's be clear, of those top 50 organizations, some of them wanted to invest in the broader, high-value product portfolio that Broadcom offers,
00:14:23
Speaker
and others of them don't, right? And so that mix, I think, drives it: Broadcom is identifying where their top customers are, and then there's a reallocation of the rest of the market.
00:14:37
Speaker
Okay, well, thanks for that context. That was me for a while; I didn't quite understand what was going on, because I knew everybody was up in arms about the Broadcom thing, and then you dig into it.
00:14:48
Speaker
It seems nefarious at first, like they're screwing us over, but really they're focusing their efforts, and a lot of businesses do that. Yeah, and I think, too, I come with a very particular perspective of Kubernetes being able to solve a lot of these challenges, but in the end it's aligned to what we see as broader business practice as a whole, right? You're not going to succeed at delivering high-quality, high-outcome services without focusing your product portfolios and offerings.
00:15:22
Speaker
Now, I think what we see is, if you have something that is this baseline, you have to balance those things effectively, and maybe identify that the choice being offered today is: who are the partners you're going to work with to deliver these needed business capabilities? Some organizations see Broadcom as the partner to go forward with.
00:15:51
Speaker
Some organizations see the cloud providers. Other organizations are digging in with vendors like Red Hat, with ourselves at Pure Storage, with other providers that are reshaping how they're running their data center, or reshaping how they're running their cloud.
00:16:09
Speaker
The question is just who you want to partner with to deliver these capabilities. There are a number of options on the field today, and I think the decision landscape is evolving really rapidly right now.
00:16:24
Speaker
So, yeah, I totally agree. And as we were talking about earlier, we've kind of matured in the cloud. We've grown up with the cloud, and the cloud is a thing now. We realize it's just other people's computers, that sort of thing. It used to be, back in the day, if I wanted to start up a company, I might buy a big old server, put it in a closet somewhere in my house, and run my business off of it.
00:16:50
Speaker
Is that different now, now that we have this familiarity and the offerings from these big hyperscalers that are out there? Is bootstrapping a new company different than it was in the past?
00:17:02
Speaker
Yeah, getting off the ground, absolutely, especially when you think about how AI has reshaped this, right? It's changed even from five years ago. Five years ago, it was, yeah, I'll spin up workloads in the cloud, I'll start there and go forward.
00:17:17
Speaker
Now it's, okay, who's my AI solution provider that I'm putting next to these workloads running in a cloud provider? And then how do I scale both of those aspects of my business? The AI solution might have to evolve into a fine-tuned model as my business grows, scales, and adapts alongside the cloud computing model. And as the business grows, the economics change, right? Workloads for startups can start in the cloud.

AI Solutions: Cloud vs On-Premises Decisions

00:17:50
Speaker
But we've seen many examples recently of data center repatriation when you look at more stable run-rate businesses, where you can run your experimentation in the public cloud.
00:18:02
Speaker
But for cost-effective solution delivery, once we've established a market here, building a data center, building those capabilities in-house, and, once you have that sensitive data, bringing AI alongside it, there's an inflection point there to move workloads from where you started in the cloud back on-premises and start to scale there.
00:18:27
Speaker
And repatriation, you explained it, but the idea is really like bringing something back to your home country, except here it's bringing workloads back home to your own data center that you already have.
00:18:42
Speaker
But for bootstrap companies, is it kind of like, as you said, I'm establishing my market, making sure I have product viability and those sorts of things in the public cloud, and then that capital expense comes later, maybe when I get another funding round and I can actually afford to stand up my own infrastructure?
00:18:56
Speaker
Is that the pattern you're seeing predominantly, that people go back on-prem from the cloud? How does that work? Yeah, I think there are two avenues we see today. For companies that are in that maturing phase or finding product-market fit, they do start in the cloud. They start maybe with a specific provider that they're investing in, whether that's one of the big three. There are a number of startup programs now offered by each one of those cloud providers.
00:19:29
Speaker
But if they're more niche, then I do see an evolution today of more niche AI solution providers, GPU solution providers, the neoclouds I mentioned. The top three I see in that space are CoreWeave, Lambda Labs, and Crusoe, and there are other companies competing in that space as well.
00:19:50
Speaker
And so the question you have to ask yourself there is: who am I going to bootstrap with? Because they'll give you money; they'll have that funding to bootstrap you. And then there's a whole separate set of questions as you grow beyond that, right? Like, okay, now we need to deliver multi-region capabilities or multi-cloud capabilities. This becomes business dependent: what industry are you operating in? I think those regulation concerns and financing concerns become the next phase after you've gotten off the ground.
00:20:24
Speaker
And then that repatriation decision, is that really just a big-company thing? Like, we already had a data center, we pushed some stuff out to the cloud, and we're realizing it costs more to run particular workloads there, so we're coming back to the data center we already built.
00:20:39
Speaker
Or is it, we make a decision to build a data center, or maybe go into a colo or something like that, and make that capital investment to buy a bunch of infrastructure? What does that decision line look like? How do you make that decision?
00:20:53
Speaker
Well, I think there's a great case study that 37signals has published recently. They're going through this journey right now: they had invested heavily in AWS and were doing things in AWS cloud services,
00:21:10
Speaker
and then did that capital expenditure to move compute to the data center. And just recently they've moved their data storage from S3 in the cloud down to a FlashBlade running in their colo.
00:21:24
Speaker
And this starts to stratify: 37signals is midsize and has been running for a long time, versus, of course, enterprises with an existing data center footprint,
00:21:36
Speaker
where that story looks more like: all those organizations have a data center. They've had a data center forever. They shifted some set of workloads to the cloud, and now it's more of a balancing question, right? Where should these workloads land in the long term?
00:21:54
Speaker
And how do I lay out my costs effectively between a CapEx and an OpEx model, based on the funding my business is offering me here? So what we see is an evaluation of that run rate in the public cloud at an application-by-application level. That FinOps mentality that developed in 2022-23, and really matured over the course of a couple of years, is now letting folks make more mature
00:22:25
Speaker
repatriation decisions in the enterprise, because they can see that unit cost, and exposing that unit cost makes it more clear when it makes sense to bring things back on-prem.
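To make that unit-cost comparison concrete, here is a minimal sketch of the kind of break-even arithmetic a FinOps team might run when weighing a steady cloud run rate against an amortized on-prem build-out. All figures are made-up placeholders, not numbers from the episode.

```python
# Toy break-even comparison between a monthly cloud run rate and an
# amortized on-prem deployment. All numbers are illustrative placeholders.

def monthly_on_prem_cost(capex: float, amortization_months: int, monthly_opex: float) -> float:
    """Spread the capital expense over its useful life and add ongoing opex."""
    return capex / amortization_months + monthly_opex

def breakeven_month(cloud_monthly: float, capex: float, monthly_opex: float) -> int | None:
    """First month where cumulative on-prem spend drops below cumulative cloud spend."""
    cloud_total = 0.0
    onprem_total = capex  # hardware is paid up front
    for month in range(1, 121):  # look out ten years
        cloud_total += cloud_monthly
        onprem_total += monthly_opex
        if onprem_total < cloud_total:
            return month
    return None

if __name__ == "__main__":
    cloud = 85_000        # hypothetical monthly bill for a steady workload
    capex = 1_200_000     # hypothetical servers, storage, networking
    opex = 30_000         # hypothetical colo space, power, support contracts
    print("amortized on-prem monthly:", monthly_on_prem_cost(capex, 36, opex))
    print("break-even month:", breakeven_month(cloud, capex, opex))
```

The point is not the specific numbers but that exposing a per-application unit cost is what makes a repatriation call defensible rather than a gut feel.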
00:22:36
Speaker
And is it all workloads? It's kind of a curve, right? A curve of your cost in the public cloud versus, if I already have a data center and I'm investing, there's a certain point where they cross and then it makes sense to repatriate. Does that always happen? Is it just later for certain workloads? Is it an inevitable thing?
00:22:57
Speaker
What does that even look like? What does that scaling factor look like? So I really see two elements here. There's a data gravity element, and there's a network connectivity and bandwidth element as well. Because we kind of view the network as free in the data center, and of course it's not free, right? You've invested in a certain level of network infrastructure that's more challenging to change in the data center.
00:23:22
Speaker
But the network runs like a register in the cloud, right? It's always ticking against you. Whereas in the data center, as long as you're not over-consuming that capacity, you can be cost-optimized when you bring a high-bandwidth-consuming application back into the data center.

Developing Internal AI Solutions

00:23:46
Speaker
And what consumes that bandwidth is that element of data gravity, right? If there's an application that requires a lot of data transfer, whether across multiple regions from one service to another, or from the data center up to the cloud and back out, that's a place where I see a lot of applications moving back in, to run right next to where the data is stored. Whether that's mainframe applications or large databases running on-prem, those can become the first set of workloads that get repatriated back into a data center.
00:24:22
Speaker
Gotcha, gotcha. Okay. We've talked a little bit so far about AI. It's funny, every episode talks about AI, because it's kind of a big deal right now. You've heard of it, right? AI, artificial intelligence.
00:24:34
Speaker
You know, we all have to, right? I think it's required in a podcast to talk about it. So, a lot of the industry started off just consuming products that were AI-driven, products that had AI-based features and that sort of thing.
00:24:51
Speaker
But now I think we're evolving a little bit, and people are starting to build their own AI-based workloads. What does that look like for companies as they're investing? Like, I want my product to have AI-driven features now, or even my internal product for our enterprise, and we want to make AI-powered solutions.
00:25:12
Speaker
What does that look like? What are those decisions? What's the pathway for delivering those? Yeah, so I think there are a couple of aspects to this. The first problem that organizations have to tackle to deliver their own internal AI solutions starts with a data quality and data management approach. I see very mature organizations who understand that they can't just throw out AI slop and have it associated with
00:25:41
Speaker
their brand. They understand that the first thing they need is effective delivery of their data model: how is that data maintained, tracked, governed, and delivered as inputs to their AI system, and then, on an ongoing basis,
00:25:59
Speaker
observability and tracking of how that AI is being grown. That first step is happening in a number of different places, whether it's a Databricks solution or the public cloud where they start to put that data model together, or larger-scale data lake solutions that I see maturing more and more on-prem. So it starts with the data, right? Then we have to figure out how to operationalize the AI infrastructure itself. How do I make my AI solution engineers effective in the AI development cycle, which I see as more of a double-loop sort of development model, where you have
00:26:41
Speaker
the AI training lifecycle, which starts with an individual developer doing experimentation that then goes into a set of training workloads. Training is a high-demand, high-throughput, very intense workload, right? So the question is: I can probably do my experimentation rounds in the cloud at first, but as I get these high-demand training workloads, what tooling am I going to use to make that effective as I scale my training to larger and larger parameter counts and larger and larger data sets, and how do I run that as a cost-effective solution in my organization?
00:27:24
Speaker
We see the cloud solution providers developing solutions around that. I think Google's DRA, Dynamic Resource Allocation, shows you some of the ways that Kubernetes is wrestling with this problem. DRA is all about scheduling workloads to be able to say, hey, I need this type of GPU configuration to run in this period of time, and I want it at this priority.
00:27:51
Speaker
And Dynamic Resource Allocation combined with Kueue is an approach the clouds are taking to be able to say, hey, these high-margin boxes, these most critical compute environments that I need, how do I get access to them in the cloud in a cost-effective manner?
00:28:08
Speaker
So Kubernetes and Kueue have this ability to scale up to take that demand and then scale down when it's done. That's the traditional high-performance computing model, offered by a product like Slurm and more established in the data center, flipped on its head.
00:28:27
Speaker
Slurm asks: how can I effectively prioritize a limited set of compute and storage resources? How can I then schedule workloads into that?
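As a rough illustration of the queue-and-quota idea behind projects like Kueue that the guest is describing, here is a toy sketch: jobs are admitted against a fixed GPU quota in priority order, and the rest wait until capacity frees up. This is a simplified model of the concept, not the actual Kueue or DRA API.

```python
# Toy priority queue against a fixed GPU quota -- a simplified illustration of
# Kueue-style batch admission, not the real Kubernetes API.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                    # lower number = more important
    name: str = field(compare=False)
    gpus: int = field(compare=False)

class GpuQueue:
    def __init__(self, quota: int):
        self.quota = quota
        self.in_use = 0
        self.pending: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self.pending, job)
        self._admit()

    def finish(self, job: Job) -> None:
        self.in_use -= job.gpus      # job done, release its GPUs
        self._admit()

    def _admit(self) -> None:
        # Admit the highest-priority jobs that still fit inside the quota.
        while self.pending and self.in_use + self.pending[0].gpus <= self.quota:
            job = heapq.heappop(self.pending)
            self.in_use += job.gpus
            print(f"admitted {job.name} ({job.gpus} GPUs, {self.in_use}/{self.quota} in use)")

q = GpuQueue(quota=8)
q.submit(Job(priority=1, name="fine-tune-llm", gpus=8))
q.submit(Job(priority=2, name="vision-experiment", gpus=2))  # waits for capacity
q.finish(Job(priority=1, name="fine-tune-llm", gpus=8))      # frees GPUs, experiment admitted
```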
00:28:38
Speaker
Whereas Kubernetes was always built on the scalability model, right? It's a distributed system that can scale up. So there are problems of AI infrastructure, and then there are also problems of the AI software development lifecycle, right? Because after you deliver that model into inference,
00:28:54
Speaker
you have to scale inference, and you have to observe inference, because the lifecycle doesn't stop when you've delivered the software to production. And I think this is where the DevOps ecosystem can actually be a helpful informer of how we do this software development lifecycle for AI. The cycle comes back to: I have to observe this environment, I have to look for drift in the models, and then ensure that my team is aware of it and can retrain appropriately from the data we're seeing in the wild.
00:29:27
Speaker
So the DevOps and observability capabilities that we've been developing for our baseline applications are actually super helpful when it comes to how we run AI in a large organization as well.
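To put a shape on the "look for drift in the models" point, here is a minimal sketch of the sort of check a team might run on a production feature distribution against its training baseline. The feature values, the binning, and the 0.2 threshold are arbitrary illustrative choices, not anything prescribed in the episode.

```python
# Minimal drift check: compare a production feature distribution against the
# training baseline using a population stability index. Thresholds are arbitrary.
import math
from collections import Counter

def population_stability_index(baseline: list[float], live: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucket(values):
        # Clamp every value into one of the baseline's bins.
        counts = Counter(max(0, min(int((v - lo) / width), bins - 1)) for v in values)
        total = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(counts.get(i, 0) / total, 1e-4) for i in range(bins)]

    b, l = bucket(baseline), bucket(live)
    return sum((lp - bp) * math.log(lp / bp) for bp, lp in zip(b, l))

baseline_scores = [0.1 * i for i in range(100)]    # stand-in for training data
live_scores = [0.1 * i + 3.0 for i in range(100)]  # shifted: simulated drift
psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f} ->", "retrain candidate" if psi > 0.2 else "looks stable")
```

In practice this kind of check would feed the same alerting pipeline the team already runs for its baseline applications, which is the guest's point about DevOps observability carrying over.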
00:29:39
Speaker
I mean, you brought up something reminiscent of, you mentioned scheduling, and I'm thinking back to all the horror stories you hear: I had all my punch cards in my little basket on the front of my bike, and I'm running over to the data center for my time slot.
00:29:54
Speaker
Are we back to that? Are we scheduling time again, with little time windows that we have? When we think about these really high-cost devices, it's why NVIDIA is through the roof, right?
00:30:07
Speaker
These high-value compute devices. In the end, we do have to be able to schedule those resources effectively. In the cloud, we schedule them effectively because these are high-cost machines we're running against.
00:30:24
Speaker
In the data center, there's limited availability and a number of workloads running against them. And I think we have all the lessons from those early days. We have the lessons from distributed systems.
00:30:35
Speaker
And now it's a matter of putting together a tool set that works to allocate those resources effectively. So whether it's Kueue, or a project like Slinky that's bringing some of the intelligence of Slurm into the Kubernetes ecosystem,
00:30:52
Speaker
these projects are evolving very quickly to take the lessons we've learned from these other industries and the history we have here, and bring it to bear in how the data center and the cloud solution providers are building this as well.
00:31:08
Speaker
And obviously we don't schedule time with computers anymore; we got past that, and I think that was more a matter of scarcity. Is that where we are right now? Those GPUs are very scarce, there's just so much demand that supply can't keep up. Is that what we're experiencing right now?
00:31:27
Speaker
They're scarce and they're expensive. Yeah, absolutely. You have a limited set of GPUs and you have a lot of demand, right? The demand is high because, whether it's fine-tuning or training or even scaling inference, you need access to those GPUs. So they're high-demand devices being shipped at a limited capacity.
00:31:46
Speaker
Everyone's talking about AI, everyone's running these baseline AI workloads. And so the supply and demand curve has pushed the price point way up, with limited availability, because you can only produce so many GPU
00:32:06
Speaker
devices at once.

AI Infrastructure Challenges and Innovations

00:32:07
Speaker
And then there's a final piece, which is that we're seeing the evolution happen so quickly. Whether it's the Blackwell chips coming out or otherwise, the AI chip manufacturers are also evolving very quickly.
00:32:20
Speaker
And so the best AI comes from the best physical hardware, which, five or ten years ago, we weren't as worried about; we had shifted more to a commodity compute perspective. That's shifted back again. There's differentiation when you're running on an H200 or a Blackwell chip. It's why those things matter.
00:32:47
Speaker
And there are only so many of them in the world today. So you brought up inference, and I don't know if a lot of people understand these terms. Let's make sure we baseline some folks here. When it comes to AI, when you have these models, whether large language models or whatever type of model in AI and machine learning, there's a training phase, as you were talking about. And that's like, I've got a training plan, I've got a bunch of data that I want to cram through this network and let it adjust itself and learn. That's what's called machine learning.
00:33:16
Speaker
But at some point we've crammed all this data through over and over, repetitively, and we get to the point where we've tested the model and we're like, you know what, this is working the way we want, right? This is giving me the right answers, this is doing the right stuff.
00:33:28
Speaker
And then we operationalize that model. We take a snapshot of it, basically, and we deploy it to a point where we're using it within our applications. Just like you hear, oh, GPT-5 came out: they were spending months cramming data through these things, right?
00:33:41
Speaker
But at some point they took a snapshot of it and said, this is GPT-5, go use it, people. Same thing with enterprises and companies using AI workloads. How does that differ in how you deploy? Because, as you said, training is high volume and high throughput, but inference still requires some of the same machinery. We need those GPUs because you're still dealing with large linear algebra constructs, matrices, vectors, and so on. So what does the compute look like? How does it differ? How do we deploy these things differently?
00:34:13
Speaker
Yeah, I think for inference there's a lower demand on the individual GPUs, but we have to think about the challenge of scaling inference as we think about the interaction model. Is this something being pushed to an app that's interacted with by millions of users? Is this something in the lifecycle of a business process that's running internally based off data we're seeing out there? I'm seeing quite a few computer vision applications being deployed on manufacturing floors; that's a use case I see evolving very quickly right now.
00:34:51
Speaker
So what that requires actually looks like the cycle of web development from the late 2010s, where you have some of the same scalability concerns. How do I scale and deliver? It's an API, right? Inference is: how do I deliver this API, and how do I ensure that it remains healthy? These devices, the GPUs, we'll say they have a depreciation cycle that's faster than your commodity compute.
00:35:22
Speaker
And so you need to ensure that the GPUs are healthy: catch those memory corruptions and all the errors that can happen in the system, make them visible, and schedule workloads away when that occurs.
00:35:35
Speaker
I need to be able to do time slicing effectively. The individual inferences might not need a full GPU, but I might want to deploy a number of different models to be able to test.
00:35:47
Speaker
Maybe you want to have a main primary production model, but then also have a secondary testing model in inference and split the traffic between the two of them. That's why we see these kinds of agentic gateway products coming out as well, because you might want to send your inference to two different models at once, test those results, or even send them back to an internal observability system.
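As a sketch of that primary-plus-candidate pattern, routing might look roughly like the following. The model callables, the 10% split, and the logging stub are hypothetical stand-ins, not any particular gateway product.

```python
# Toy traffic split between a primary production model and a candidate model,
# logging both results for offline comparison. Model callables are stand-ins.
import random

def primary_model(prompt: str) -> str:
    return f"primary answer to: {prompt}"

def candidate_model(prompt: str) -> str:
    return f"candidate answer to: {prompt}"

def record(model: str, prompt: str, answer: str) -> None:
    # Stand-in for sending results to an internal observability system.
    print(f"[{model}] {prompt!r} -> {answer!r}")

def route(prompt: str, candidate_share: float = 0.1, shadow: bool = True) -> str:
    """Serve most traffic from the primary model; send a slice to the candidate.

    With shadow=True the candidate also sees the primary's traffic, but its
    answer is only recorded for evaluation, never returned to the user.
    """
    if random.random() < candidate_share:
        answer = candidate_model(prompt)
        record("candidate", prompt, answer)
        return answer
    answer = primary_model(prompt)
    record("primary", prompt, answer)
    if shadow:
        record("candidate-shadow", prompt, candidate_model(prompt))
    return answer

print(route("summarize today's production incidents"))
```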
00:36:12
Speaker
Right. And so there are a number of challenges when it comes to, okay, how do I get this really close to where the user interaction is? If it's a user application, do I need to deploy it multi-region in the U.S.? Do I need to deploy it globally?
00:36:27
Speaker
And these are big models, very large-scale things. So there are the failover cycles and the capabilities needed to operationalize it effectively. It's why, when you're running ChatGPT, sometimes it sits there and spins: maybe you've been connected to a device that has to fail over and run a new model somewhere else.
00:36:49
Speaker
That same problem exists in the enterprise, and you have to figure out how to build that as an operational excellence capability that you have internally. Man, you just blew my mind. See, here all along, what I assumed was going on, because that happens to me with ChatGPT,
00:37:06
Speaker
I'll ask it something and it'll sit there and spin. And what I assumed was that I was asking such a hard question, because I'm so smart, that it was having a hard time answering it, you know what I mean? That I'd stumped ChatGPT. You're telling me it's really a failover scenario going on. Is that true?
00:37:23
Speaker
I mean, look, you've probably stumped ChatGPT a few times, but there's probably also a time that you're getting rescheduled to a new device. You're in a queue of processing inference. You're behind a few other people who maybe have asked some harder questions.
00:37:36
Speaker
But it's all about cycling that through. It's a data transfer problem, right? You've asked the question, it's gone through a cycle, the data has been sent somewhere to be processed by an individual machine.
00:37:49
Speaker
And so it's in that lifecycle somewhere that you're getting paused, maybe rescheduled, and then the question gets answered and sent back up to you. You know what it could be, now that I'm thinking about it? You brought up a good point.
00:38:02
Speaker
It could be that it's realizing, I need to talk to a smarter machine to be able to service this guy. Yeah, that's right. You get put in a really special queue, and there are only so many of those. That makes perfect sense. Yeah.
00:38:20
Speaker
All right. This has been fantastic. I always learn so many things. Now I've learned I'm not as smart as I think I am, so that's been a realization for this episode that I'm struggling with right now.

Vibe Coding and Tech Trends

00:38:32
Speaker
We have to transition to the next phase of our show. This is what we call ship it or skip it. Ship or skip. Ship or skip. Everybody, you've got to tell us if you ship or skip.
00:38:45
Speaker
So, ship it or skip it. The idea here is we bring up a topic and you give it your ship it, meaning, yes, I like that, let's do that, or skip it, like, nah, I don't like it.
00:38:57
Speaker
First one. We have to throw this out there. It seems like every episode people have some really interesting opinions on it, which is kind of cool because it's a hot topic in the industry right now: this notion of vibe coding.
00:39:08
Speaker
What's your hot take on this thing? What's been your experience? Do you think it's here to stay? Should we go back in another direction? What do you think?
00:39:20
Speaker
Well, as someone who's primarily running operational capabilities, I have to be a skip it for vibe coding, right? I understand, vibe coding is great. You throw a bunch of AI agents together, you build something, you throw it out there.
00:39:36
Speaker
But we see the downsides of that, right? There's that great set of tweets about a guy who said he'd been vibe coding his application for a month, and then all of a sudden it deleted the production database, with no backups and no ability to recover it.
00:39:51
Speaker
It was a great story. And the guy literally live-tweeted this thing vibe-deleting his database. So, you know, it's maturing, right? Vibe coding
00:40:04
Speaker
is, we'll say, the AI slop side of it. Flipping that on its head is something like vibe effectiveness. Once the vibe is a little more in tune with what else is going on out there, then I think there's potential for vibe coding, but you've still got to be wary. Yeah.
00:40:23
Speaker
Yeah, I think it's tough to put a concrete definition on the vibe coding thing, but what most people key in on, to phrase it the most politically correct way, is the untrained, I guess, working with these AI models, slinging code, and then, as I always like to say, YOLOing it to prod, right? Without any of the discipline involved in those sorts of things that we have to learn over time. And I've deleted a production database myself.
00:40:57
Speaker
I have. Who hasn't? It's a part of life. I didn't need AI to do it for me, though. I did these screw-ups the old-fashioned way, on my own. But yeah, that whole notion of, let's just take people. Again, I love the democratization. I love opening up our field of software engineering to more and more people, more creative folks,
00:41:20
Speaker
to express themselves through code. Love that idea. However, when it comes to putting products in front of folks and asking them to pay money for them, I do think there's a bit of rigor that you owe your customers, and you need to do that.
00:41:35
Speaker
Well, yeah, and in the end you're absolutely right. There's a place for learning, training, and experimentation, right? And then putting that together with something that says, okay, how do I take that learning, training, experimentation further? We need to do that in our lives, right? I, you know, accidentally deleted all three instances of an OpenShift master cluster in production with Ansible because, maybe,
00:42:02
Speaker
if we'd had this word back in 2017, I vibe coded a little too much Ansible there and ran it out there. So we've all made mistakes and we've all learned, right? The question is, how do I take that learning cycle and use it effectively with AI? Maybe vibe coding is the first step of that, and to that effect, it's great.
00:42:23
Speaker
And we'll combine that with, well, let's all mature it together, and we can get to a place where more and more folks can use the tools we have available to us. Yeah, I was listening to the All-In podcast, and there was an episode just recently about education and how AI could disrupt higher ed, because we have this awesome tutor available. And I think that's what it comes down to: vibe coding is really just a specific type of
00:42:53
Speaker
tutoring. I'm teaching you how to code. I'm showing you what code looks like. If I say this English stuff, this is what the code looks like that does what you just said, right? So it's kind of a tutor. Treating it that way, and not like a production code mechanism, is probably a better way to think about it. But again, I use it for teaching me all sorts of stuff, and I do find it very helpful.
00:43:15
Speaker
Yeah, and that's great. I was just working through building out an Ansible library for a few things I had to do regularly. I took my product's docs, the Portworx product docs, combined them with a set of code I had already generated, and then said I wanted to build a few things.
00:43:36
Speaker
And it got me 90% of the way there, right? And if that's vibe coding, hell yeah, I'm in. Sign me up. Oh, yeah. Right. There was just a second cycle that happened: testing, learning, making sure it worked, testing the failure scenarios, and then turning it into something I could use.
00:43:56
Speaker
I had this realization yesterday, and I'm going a little off script here, but we had something for an internal product that we needed done. And I was like, well, we could just get one of our people to take some time in their evenings to do it, and blah, blah, blah. That's how it always happens; your internal tools never get any love, right?
00:44:13
Speaker
But I was like, well, hold on, let me just try some agent stuff here. And I was able to get it done very quickly. The realization I came to was, man, this is kind of cool, because my apprehension for doing those things is: okay, I've got to open up all these things, get my mind down into this thing again, understand it, and then figure out how to change it.
00:44:32
Speaker
The computers can do that very quickly. And I'm thinking of all my side projects that I'm very apprehensive about opening up and tinkering with; I can do that now with these tools. So that's a cool byproduct.
00:44:45
Speaker
I was talking to a good friend of mine I used to work with closely, who's at a new organization now. One of the things we've seen is that something that in the past would be, okay, we need months, we need a few developers on this, it has to go through all these cycles,
00:45:05
Speaker
you can now make happen if you use your AI tools effectively. They were talking about, hey, I need a Go SDK for this part of our product, right? And they're like, well, we'll get two developers, pull them off this thing, put them over here, and we'll do it.
00:45:21
Speaker
And their CTO ended up vibe coding it over the weekend. And like we said, it gets 90 percent of the way there. But the old answer of, no, I can't do this right now because I'm blocked by all these other things,
00:45:37
Speaker
look, you can get these things out the door. It's just a matter of having the focus and discipline to turn that into effective software delivery with AI. And that takes another cycle.
00:45:49
Speaker
But you can't say, no, we can't do this. Right. You can do it with the right tooling in place, with the right capabilities. You just have to put those tools in place.
00:46:00
Speaker
You have to get good information as inputs, right? Because if you have bad information, that's where you get it making up functions. You have to have all those inputs to the AI system. In the example I mentioned, the code generation I was working on,
00:46:18
Speaker
I had to go and find, okay, use this as a source and this as a source, and get all those sources in place. Because if I missed one of them, it would start to invent a few functions that didn't exist, and then I'd have to go back and rewrite all of it because it wasn't the right syntax.
00:46:35
Speaker
So you've got to do a little information gathering, but you can do things. It's just a matter of making it happen. I think those of us who've had to do it manually for so long are going to have to retrain ourselves, because my knee-jerk is, well, I just don't have the time to do that. But my calculus of the time thing wasn't very good in the first place. I'm not a good estimator. Software engineers are never good estimators, right?
00:47:00
Speaker
We're awful. But even that flawed calculus my brain would do, whatever it is, has to be retrained, because it does affect the timing. I think I started at like 10 o'clock and I had something ready to go to production by noon, implementing this whole new feature for our internal tool. And I was like, okay, well, I guess I do have the time for that.
00:47:23
Speaker
So now I don't have any excuses. I just admitted it on air. Great, now I've got to do all this stuff. All right, next ship it or skip it topic. I don't know that I'm informed enough, so I want to hear your answer, and I don't know that I can even lend my own opinion on this at all.
00:47:38
Speaker
But you mentioned Slurm earlier. There does seem to be, is this a Slurm versus Kubernetes thing, like we're going to do one or the other? Are they combining together, or are they at odds with one another? What is this? Is this like the greasers versus the Socs? What are we up against here?
00:47:55
Speaker
What a great analogy. I think what we're up to right now, what I'm going to ship, is these things running into each other and building something that's better than what either one of them brings to the table.
00:48:07
Speaker
There were some murmurings about SchedMD and some folks delivering Kubernetes infrastructure working together to deliver a Slurm-Kubernetes integration, and Slinky as an open source tool has been emerging there. There are other projects trying to do the same thing. I was just looking at Karmada, another tool I've seen organizations use as a multi-cluster scheduler.
00:48:36
Speaker
So what I'll say is I'm shipping these two tools coming together even more. What I'd really like to see is more of the knowledge and understanding the HPC ecosystem has brought more effectively into Kubernetes. And this is happening at the edges; there are groups within the Kubernetes community doing this.
00:48:59
Speaker
And I think we're going to be even better as an industry as the capabilities from Slurm, and the knowledge those organizations have, are brought to bear in the Kubernetes landscape. Because what we see is that the AI tools all run on Kubernetes,
00:49:18
Speaker
but we need more of what Slurm has to come to bear to make effective AI infrastructure for us. So I'll say I will skip those two being the Sharks and the Jets,
00:49:35
Speaker
and I'll ship those two as the holding hands meme. Okay, like the Arnold, no, it was Arnold and Carl Weathers, that one where they're flexing their muscles. Exactly, exactly. Now, that's interesting, if you had to put a label on it. I used the greasers, from, was it The Outsiders, and you're using the Jets.
00:50:01
Speaker
Which one would you say is the greasers? Would that be the Kubernetes crowd or the Slurm crowd? Oh, man. I don't know. We'll say the Slurm crowd is maybe the more old school, more well established, and the Kubernetes folks are like the new kids on the block.
00:50:22
Speaker
I don't know. How would that map to the analogies we've put together? I don't know if it cleanly does. I don't know if they cluster together in that way. That's not the linearly separable boundary we'd use to draw a line between them.
00:50:38
Speaker
Yeah, it's like the neighborhoods got merged together. They moved from across the city, and now they're next to each other on the east side or something, and they've got to learn to live next to one another. We're going to have to write an all-new play or whatever.
00:50:56
Speaker
We'll get it on Broadway soon. I love it. Yeah. Anyway, all right, that's it for ship it or skip it. Now, I don't know if you're a listener of the podcast, but if you are, and the audience knows, this is the most important aspect of the show. All of this stuff was just kind of a warm-up.
00:51:16
Speaker
Right. This was just us stretching out and getting ready for the real bit of the show that people really tuned in for. This is our lightning round.
00:51:28
Speaker
It's time for the lightning round. Rapid fire, don't slow down. Hands up quick and make it count.
00:51:38
Speaker
In this game, there's no way out. It's time for the lightning round. And this is where the answers do matter, to steal a phrase from, what is it, Whose Line Is It Anyway? They always say these answers do matter and they're very important. So don't just give a glib answer off the cuff; you need to give real answers here because it does matter.
00:52:04
Speaker
I'm ready. Yeah. And right now I'm honestly just stalling so I can find my rapid-fire questions. I do usually lay it on thick, but this is the important stuff that people are really hankering for.
00:52:19
Speaker
Is that a word people still say? Do people say hankering? Is that a thing?

Lightning Round and Community Engagement

00:52:22
Speaker
Hmm, I'm not sure. Okay. Yeah. You're like, I don't even know what the word is. So yeah, nobody says it. All right. Cake or pie?
00:52:31
Speaker
I'm a cake guy. That might be problematic. There are certain pies that can elevate, but to me they elevate to the realm of cake. So when you have a really good pie, it's like, oh yeah, this pie is as good as cakes I've had. But a really good cake is a level above.
00:52:52
Speaker
So it's got to be cake. Yeah, cake never strives to be pie is what you're saying. That's what I'm saying. All right, so a really good pie can be like, oh man, this is as delicious as that robust chocolate cake I had the other day, but you could have just had the chocolate cake, had it perfect, and it would be a level above. Yeah, because if you really look at cheesecake, it's more like a pie. Well, that's the problem. So the question is, where is the actual line? Because Boston cream pie, is that a pie or is it a cake? And cheesecake is more like a pie than it is a cake.
00:53:35
Speaker
But that's my point. It is clearly a pie. Cheesecake is clearly a pie. It's got a pie crust. But it calls itself a cake. It is striving to be cake.
00:53:46
Speaker
It wants desperately to be a cake. Okay, so I'm with you on this. I think that's a great answer. You get 72 points for that one. Good. Oh, man. Yeah, you're off to a really, really good start.
00:53:59
Speaker
Have you ever seen a kangaroo in person? A kangaroo in person? I don't know that I can definitively say I've seen a kangaroo in person. It would only ever have been at a zoo.
00:54:11
Speaker
And I'm just trying to think whether any of my zoo experiences have included kangaroos. It didn't make a big imprint on my life. So I have to say I'm leaning toward no, and I might need to go find a zoo and see a kangaroo in person just because I've missed out on that experience.
00:54:30
Speaker
Okay. Do you Instagram your food? No. Oh, gosh, no. I don't Instagram much of anything, so I might not be the person to ask.
00:54:42
Speaker
The only thing I would do is text my food to my wife. And the problem with that is that if I'm out, and I travel a good deal for work, I go to a number of dinners.
00:54:54
Speaker
If I text my food to my wife, that means I then have to come home and take her to a dinner of the equivalent level of what I've texted her. And so that's very dangerous. If I Instagrammed my food, I think I would have that demanded of me even more. So no, I don't Instagram my food.
00:55:11
Speaker
Okay, I like that. So that text of food to your wife is almost like a promissory note. It's dangerous. You know, you get some really good Brussels sprouts, you get a steak, and it's like, I'm going to come home and we're going to find a time to go get a steak and Brussels sprouts.
00:55:28
Speaker
Yeah, I can see that. That's quite the conundrum. On a scale of one to ten, how good are you at trivia? This answer is going to be biased by the fact that I worked closely with a guy who writes crossword puzzles and won an episode of Jeopardy. So I'm like a five, because I've seen what Neville can do with a round of trivia and I'm just not in the same ballpark as him. I love trivia.
00:55:58
Speaker
I love trivia, but I absolutely know that the people who are really good at trivia, I'm just not in their realm in any sense. Do you find that people who are good at trivia, you wouldn't necessarily label them as academics?
00:56:15
Speaker
No, he was just really good at gathering information. He wrote LA Times crossword puzzles, he wrote New York Times crossword puzzles, and he could just put things together and connect pieces of information in a way that was unique.
00:56:31
Speaker
Of anyone that I've ever met, right? You see people in different domains who are uniquely able to do that. And he could just connect information, gather information, and store it in his brain in a way I don't think anyone else I've ever seen can do. That's fantastic.
00:56:48
Speaker
All right. What temperature do you like your thermostat at? I like it to be comfortable and appropriate for the season. So right now the temperature of my house is 72. I'm not a cold person at night.
00:57:04
Speaker
You know, we have a ceiling fan, and I'll bring it down to the 68 range. In the winter, it's not too hot, right? In the winter, I'll have it at 68. I'll let it adjust to the season a little bit, because you don't want to go outside and have the difference be completely absurd, right?
00:57:19
Speaker
I used to live in Minnesota, where our first winter there was the coldest winter since 1978. It got down to negative 26. It was miserable.
00:57:31
Speaker
Our car definitely struggled to start in that. So I like to adjust it to the seasons, but in that range of 68 to 72 I can be comfortable anywhere, and it's got to be cold at night. Now, usually part of this conversation revolves around, or devolves to, well, my wife likes it here and I like it there. Do you find yourselves incompatible on the thermostat thing?
00:57:56
Speaker
You know what, I think after being married for 13 years now, we have found ourselves in harmony, and we're further evolving to be closer to one another.
00:58:10
Speaker
We both like it cold when it's cold out. We'll both throw the windows open. We had that first bit of fall here in Cincinnati where the highs were in the upper 60s, and we had the windows open to get a breeze going through. So I think we've got ourselves on a good page here, and that's not a conflict in our relationship.
00:58:32
Speaker
Well, if she does listen to this episode, that was a fantastic answer. And enjoy the steak and Brussels sprouts that he's going to be taking you to. Apparently I'll add one of those to the tab. Yeah, exactly.
00:58:44
Speaker
Which animal adds more joy to the world? Would you say squirrels or llamas? I feel like squirrels have an element of chaos. Llamas have chaos too, but it's funny chaos, right? You see a video of a llama spitting on someone. I think I'm going to go llama.
00:59:03
Speaker
What I want to do is go ask my six-year-old this question. He watches Wild Kratts religiously and absolutely loves animals. You pick an animal, and he'll start telling you all sorts of facts about it. I can't even start.
00:59:20
Speaker
So I think he's the one who should answer this question, and just knowing him, the answer he's going to give is that llamas give more happiness to the world. I could go either way on this one. I don't feel like there's a right or wrong answer. I feel like squirrels or llamas could both bring joy. It's a tough one. It's a toss-up for me.
00:59:40
Speaker
I just don't know that I've met a squirrel that I've liked. They've always been a nuisance, you know, a competitor of some sort. Yeah.
00:59:52
Speaker
I don't want to feed a squirrel. I want to feed a llama. Okay, I can see that. Yeah. On a scale of one to ten, how much do you enjoy garlic? Oh, like 9.8. Okay. You know, there can be too much garlic, right? So you've got to temper it from the ten, but man, a good garlic sauce? Oh, lay it on.
01:00:14
Speaker
Yeah, absolutely. That steak and Brussels sprouts is slathered in garlic, and it's a necessity. I don't know how they make this stuff, but we used to eat at this place up in Detroit.
01:00:25
Speaker
And it was a Mediterranean place, I think. But they had this thing that was like a butter, but it's garlic with oil, like an emulsion, and it's fantastic.
01:00:38
Speaker
When you have the garlic, like the garlic clove you can spread like butter? Yeah. Oh, 100%. That is the class of the class right there.
01:00:49
Speaker
All right, and finally, I do a lot of food questions. Actually, you know what? Let's go Super Mario Bros. or Zelda for you. Oh, man. I'm not a big video game guy, personally. But you have to have an opinion. If I had to have an opinion, I'll go Super Mario Bros., just because I'd play them in Smash or something like that. But I've never gotten into Zelda, personally. So it's just...
01:01:18
Speaker
I don't have a strong opinion here, but if I had to choose, it would be Super Mario Bros. I feel like the teamwork element of the two of them together, the positive reinforcement between them, I've got to give it up to Super Mario Bros. So you're all about collaboration. Now, when you're playing Smash Bros., is Link not a good character? He would seem to be very spry.
01:01:40
Speaker
Link, you know, I'm not good enough at any of these games to know where I'm at, but I feel like I've never played Link when I was playing Smash. It was never my go-to.
01:01:53
Speaker
I feel like I was still more on the baseline characters when I was playing out there. That makes a lot of sense. All right. Well... you did a great job.
01:02:05
Speaker
We still have to run the scoring through the tallying machine. It's an algorithm, you know what I mean? There's a lot to it. Of course, we can't do that in real time. We will respond back with your score.
01:02:19
Speaker
That'll be offline. But I just kind of feel like you did pretty well. You started off really strong. I felt the kangaroo thing might have thrown you off. I'm not quite sure. Never seeing one, or even if you saw one, it didn't make an impact on your life. That could hurt you.
01:02:35
Speaker
I don't know. Yeah, you know what? I feel hurt by the kangaroo. I'm going to have to go look up where my nearest kangaroo is. I think we have something. I mean, we have to, at the Cincinnati Zoo.
01:02:46
Speaker
If we do, I've probably seen them too, but I might have to go this weekend. Yeah. I mean, we have the Komodo dragons and we've got some great stuff at our zoo. I mean, we've got Fiona.
01:02:57
Speaker
I mean, everybody knows, if you'd asked me whether I've ever seen a hippopotamus, it's like, of course. I actually wanted one for Christmas at one point. Just kidding. Any closing remarks, any parting words or thoughts, anything you've got coming up that you want to tell us about, projects, speaking engagements, blog posts, articles, books, what's going on?
01:03:23
Speaker
Yeah, you know, I think the thing I really like to promote is just the ability to get together in person. In the post-COVID world, we can do so much remotely. We're doing this remotely, right? But there's an aspect of collaboration and growth that happens because of that in-person conversation. So for my upcoming things, I'm helping run the Cincinnati CNCF meetup, where we're doing a monthly fall series right now,
01:03:48
Speaker
trying to develop and build that community. I'm also speaking at Red Hat Summit Connect events throughout the Midwest. I'm going to be in Chicago, Minneapolis, and Detroit over the course of the month of October.
01:03:59
Speaker
Go check out the Red Hat Summit Connect event page for more information there. I'll also be at KubeCon; I'm running workshops for the VMs on Kubernetes Day.
01:04:09
Speaker
And we'll be doing a number of events out in Atlanta in November. So the thing I'd like to promote is, whenever you see me out and about, when you have the ability to get together in person, there's a unique aspect of collaboration, growth, and learning that can happen. Don't hesitate to reach out, hit me up on LinkedIn, and say where you'll be going.
01:04:35
Speaker
And I'd love to get together and grow community that way. I think there are unique challenges in our jobs these days around growing internal community where we all work.
01:04:48
Speaker
But there's an aspect of the broader technological community that has grown through in-person interaction, and that's what I'm looking forward to over the course of the next two months.
01:05:00
Speaker
That's great. Well, thank you for joining us here on the podcast. Whenever I speak to you, I think, this guy knows his stuff. I need to dig into this and educate myself more in this area.
01:05:14
Speaker
But I feel like even if I do, the next time I get together with you it's like, well, he's learned more things now, and I'm never going to catch up. You're like Matthew McConaughey's ten-years-in-the-future self kind of thing. I'm always striving to be at your level of knowledge on this stuff.
01:05:31
Speaker
We're all learning too, right? We all have our areas of expertise and growth, and I'm really grateful to be in this ecosystem and this broader community, because we all have our areas of expertise and we can all learn from one another. It creates this great opportunity for all of us to learn and grow together. That's fantastic. Well, thank you again for being here on the podcast with us. I really appreciate you coming.
01:05:54
Speaker
Thanks, James. It's been great to be here. All right. Well, if you'd like to get in touch with us, drop us a line at theforwardslash@Caliberty.com. See you next time. The Forward Slash podcast is created by Caliberty.
01:06:05
Speaker
Our director is Dylan Quartz, producer Ryan Wilson, with editing by Steve Berardelli. Marketing support comes from Taylor Blessing. I'm your host, James Carman. Thanks for listening.