
Nodeless Kubernetes - Optimizing costs with just in time compute

S3 E17 · Kubernetes Bytes

In this episode of Kubernetes Bytes, Ryan and Bhavin sit down with Madhuri Yechuri and talk about all things Nodeless Kubernetes and how users can leverage the concept of Just in Time compute provisioning to prevent wasted spend on their cloud bills. Madhuri talks about LUNA and NOVA - a couple of Elotl products that help users run a nodeless Kubernetes multicluster platform for their containerized applications. 

Join the Kubernetes Bytes slack using: https://bit.ly/k8sbytes 

Ready to shop better hydration? Use code "kubernetesbytes" at this special link https://zen.ai/apaSnaIFOuee5jScqZ28a03tKKvQiqkyz8mtm9wipoE to save 20% off anything you order.

Interested in attending Boston DevOps Days?

Timestamps

  • 00:30 Introduction 
  • 05:18 Cloud Native News 
  • 12:48 Interview with Madhuri
  • 56:20 Takeaways  

Cloud Native News: 

  • https://tech.eu/2023/09/04/rig-dev-first-open-source-baas-platform-on-kubernetes 
  • https://d2iq.com/blog/dkp-2-6-features-new-ai-navigator 
  • https://securityboulevard.com/2023/09/pluto-finds-deprecated-kubernetes-api-versions-3-questions-from-users/ 
  • https://www.businesswire.com/news/home/20230906419666/en/RapidFort-Launches-Runtime-Protection-to-Automatically-Monitor-and-Secure-Kubernetes-Workloads  
  • https://www.businesswire.com/news/home/20230906393254/en/InfluxData-Announces-InfluxDB-Clustered-to-Deliver-Time-Series-Analytics-for-On-Premises-and-Private-Cloud-Deployments  
  • https://blocksandfiles.com/2023/09/04/storage-news-ticker-4-sep-2023/  
  • https://blocksandfiles.com/2023/09/07/storage-ticker-7-september-2023/    


Show links: 

  • https://www.elotl.co/ 
  • https://www.elotl.co/multi-cluster-podcast 
  • https://docs.kubefirst.io/aws/faq 
  • https://kubefirst.io/slack
Transcript

Introduction to Kubernetes Bytes Podcast

00:00:03
Speaker
You are listening to Kubernetes Bytes, a podcast bringing you the latest from the world of cloud native data management. My name is Ryan Walner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.
00:00:29
Speaker
Good morning, good afternoon, and good evening wherever you are. We're coming to you from Boston, Massachusetts. Today is September 11th, 2023.

Bhavin's Woodwork and Personal Updates

00:00:37
Speaker
Hope everyone is doing well and staying safe. Bhavin, I see there's been a big change behind you. You're doing something. I feel like it's just because I've been giving you flack about it. It's not just you, dude. Like, yeah, there are multiple people that give me flack about it.
00:00:53
Speaker
For those who are listening and not viewing on YouTube, Bhavin's background has always just been a white wall. So we were like, put something up. And there's some scaffolding of wood behind him, which we were DIYing over the weekend or something. Yeah, that's it. I think my wife eventually got fed up with this blank wall.
00:01:14
Speaker
to plan for something and do something. So she designed the whole thing, and then this weekend we spent, like, going to Home Depot a couple of times, getting the wood and cutting it to the right size, and then just put it up. I had to borrow a nail gun, which, I love a nail gun, dude. Like, it makes things so much simpler than hammering a nail.
00:01:36
Speaker
The first thing that we put the liquid nails thing, just glue it together and see how it works. I don't believe in liquid nails anymore because the wood was just like, yeah, I don't want to stick to the wall. And then I got a nail gun and like bang, bang, bang, done.

Patriots Game and Weekend Activities

00:01:54
Speaker
Yeah. Well, I mean, the other thing is like wood's imperfect, right? And so is your wall, right? Your wall's probably...
00:01:59
Speaker
different. It looks great. I mean, I'm excited to see the final product. It looks pretty cool. This is phase one. So we'll get some caulking done, choose a color, maybe paint it, paint the wall. Yeah, I don't know how fast I'm going to work on this, but yeah, it's a work in progress. It might be that way for a while. Okay. We'll see the evolution. Yeah, I know. Every other episode is like, oh, we have something new in the background. Pretty nice little Saturday at Home Depot.
00:02:27
Speaker
Actually, that was my weekend like and this project and NFL being back like oh that's a perfect weekend
00:02:34
Speaker
The Patriots lost though, but I don't know. It still gives me some hope. Like the offense did look good. Defense was great. So I'm excited. Like Eagles are a good team. They went to the Super Bowl last year. So like losing to them by five points, isn't that bad? Five points in the end, right? Because they were kind of down 16 and 0. Welcome back, Tom Brady. We're going to make you a team Hall of Fame, but you know, we're going to talk to you today.
00:03:00
Speaker
I know. I was glad that we scored some points before the halftime when they brought Tom Brady on stage, because that would have been really terrible. Like, yeah. Yeah. You're like, this is not how I envisioned it. It was raining too. So it's kind of a melancholy day for Tom Brady to show back up.
00:03:19
Speaker
I think they made that announcement maybe last minute. Like, no, no, no, this seems shitty right now, but we have an official ceremony planned next year in June. So yeah, come back for that, we'll sell you more tickets. Yeah, that's literally it. And they will, because I'm ready.
00:03:36
Speaker
I'm not a Patriots fan, but, you know, I appreciate the sportsmanship coming. I hope you're not a Giants fan, dude. They lost. I'm not. I'm not. No, I have. I grew up a Redskins fan. Well, the commanders, you know, like I never really followed them that closely.

Cloud Native News Highlights

00:03:54
Speaker
It was just like my father was a fan. I became one. Now I just like, I just like watching football.
00:04:00
Speaker
More of a baseball guy. So October, September, October, playoff baseball. That's where I'm in. Even though I'm a Yankees fan and, uh, they suck this year. So how was your weekend? It was good. It was good. I got outside mountain biking and did some other stuff. So you saw my Instagram, I was having some fun on one of my, uh, on my bikes for a training coming up. So I can't complain too much other than, you know, the weather wasn't great. If you're in this area, you know what I'm talking about.
00:04:29
Speaker
It was so humid. Just being outside for an hour at any point, I just want to die and go inside. I don't know. It was like swimming when you're breathing. I was watching NFL Red Zone, so watching multiple games at once. And then whenever they switched to the Patriots game, it was raining and it was humid. And then when they switched to the LA Chargers and Dolphins game, it was sunny and awesome. They switched to a different game and it was sunny.
00:04:55
Speaker
Yeah, yeah, we didn't we didn't make out weather wise in the East Coast. But yeah, can't complain. And we got this episode first thing, you know, on the Monday morning, which you know, it's a good way to start the day. Yeah, speaking of starting. We do have a great topic about Nodeless. We'll introduce our guest shortly. But we have a little bit of news, not too much. Why don't you kick us off?
00:05:18
Speaker
Yeah, sure. So first thing, a new funding round, a pre-seed funding round from I think a Danish startup. They raised like 2 million euros to work on building or help developers build scalable backends and cloud infrastructures on Kubernetes faster.
00:05:34
Speaker
And that's about the extent of knowledge that I have. That's what I could find about them. I'm sure at this pre-seed stage, they're still figuring out what they are going after. But I'm just glad to see there's still movement in the community, in the Kubernetes ecosystem. And there are still these early stage startups trying to solve challenges, like issues with the overall Kubernetes experience. So I'm all in for it. Yeah, me too.
00:06:01
Speaker
And then next up, I think I have D2IQ. They announced a new version for their D2IQ Kubernetes platform or DKP. The one thing that caught my eye was they built a chatbot because everybody has to do AI in some form.
00:06:16
Speaker
The reason I'm highlighting it is it's not just generating kubectl commands. The way they have trained the model on the backend is instead of going out and looking at the public repository, they have trained it on their internal knowledge base. Everything that they have in their history, I'm sure from Mesosphere days, that's what the model is trained on.
00:06:38
Speaker
of finding information, doing troubleshooting exercises really quickly if you are a D2IQ customer. So I don't know. I love that idea, honestly. I really do. I think a lot of companies should do it, especially really large companies, because if you work at a huge company, there's so much sort of tribal knowledge
00:06:56
Speaker
that you have to learn along the way. And it'd be great if you just had a chatbot, and I'd be like, you know, where do I find this piece of information, this person, this project name, whatever it may be. Cause otherwise you wind up going down this rat hole, this spider web, of getting to the right information. I love that idea.
00:07:12
Speaker
If your product has a UI, it helps to build something like this inside the product itself rather than, because if I'm a customer, not everybody in my organization will have a customer account. I don't want to share passwords and finding information that's behind a paywall might be tricky. So having something in the product that's easy to find, easy to locate, I think that definitely helps. Cool, cool.
00:07:35
Speaker
And then the final thing that I had was an open source utility by Fairwinds called Pluto. Initially the name caught my eye, but then it actually solves for an interesting use case, as we all know through the different Kubernetes releases, right? There are different APIs that get deprecated.
00:07:51
Speaker
Even not just in Kubernetes, even with Helm, there are some changes to the way the API works. This open source project or utility is command line based. It does a few things like it checks your real time installation. So like runtime looks at live Helm releases, it looks at your cluster resources that are running inside your Kubernetes cluster.
00:08:14
Speaker
and gives you a list of all the APIs that will be deprecated in the next release. And then it also does a scan, it can also help you do a scan against your infrastructure-as-code repository. So if you have Terraform files that are deploying resources, it can scan that and let you know that, dude, this might break if you just go up one version. So this helps, like, people catch things earlier in the lifecycle rather than
00:08:36
Speaker
trying to troubleshoot why something isn't working that used to work before just after a community's update. So that's why I think I just wanted to share this utility that I found. Yeah, nice. I'm all for it. I also found something I wasn't familiar with and I kind of really like that aspect of it because they're just like these companies or projects that just come out of nowhere. The one I found was, I don't know if you've ever heard Rapid Fort? Nope.
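The deprecation check described above for Pluto can be sketched roughly like this. This is a hypothetical illustration, not Pluto's actual code; the removal table is a small subset of real Kubernetes API removals:

```python
# Hypothetical sketch of a Pluto-style check: flag manifests whose
# apiVersion was removed in or before a target Kubernetes version.

# Subset of real Kubernetes API removals: (apiVersion, kind) -> removed in.
REMOVED_APIS = {
    ("extensions/v1beta1", "Ingress"): "1.22",
    ("extensions/v1beta1", "Deployment"): "1.16",
    ("policy/v1beta1", "PodSecurityPolicy"): "1.25",
    ("batch/v1beta1", "CronJob"): "1.25",
}

def parse_version(v):
    """Turn '1.22' into (1, 22) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def find_deprecated(manifests, target_version):
    """Return (name, old_api, removed_in) for manifests that break at target_version."""
    findings = []
    for m in manifests:
        key = (m.get("apiVersion"), m.get("kind"))
        removed_in = REMOVED_APIS.get(key)
        if removed_in and parse_version(target_version) >= parse_version(removed_in):
            findings.append((m["metadata"]["name"], key[0], removed_in))
    return findings

manifests = [
    {"apiVersion": "extensions/v1beta1", "kind": "Ingress",
     "metadata": {"name": "web"}},
    {"apiVersion": "networking.k8s.io/v1", "kind": "Ingress",
     "metadata": {"name": "api"}},
]
print(find_deprecated(manifests, "1.22"))  # [('web', 'extensions/v1beta1', '1.22')]
```

The real tool does the same kind of lookup against live Helm releases and in-cluster resources, not just static files.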
00:09:02
Speaker
So neither have I, but they came out with an announcement and then sort of coined a term, which is why I kind of keyed in on this, called software attack surface management, SASM.
00:09:13
Speaker
Um, so it's really just another acronym for us to learn, or at least, the article says that the startup is pioneering this term. Okay. Uh, but there's a lot happening in security. As we know, we always have, you know, um, a ton of projects, a ton of new things going on, and it's a hugely growing space. Anyway, the whole idea behind this, uh, runtime protection is basically.
00:09:37
Speaker
The company, as far as I understand it, does the typical CI/CD pipeline stuff, but also covers the actively running software. So it hits both ends of it, giving a bigger strategic view into catching malware, vulnerabilities, those kinds of things. So anyway, I thought it was pretty interesting. I wanted to put a checkbox on keeping track of this new term, SASM. I mean, I'm going to call it SASM.
00:10:07
Speaker
WASM and chasm, and then we have SBOMs and SLSA. The other one was from InfluxDB, who we had on the show really early. Yeah, last year.
00:10:20
Speaker
Yeah, exactly. I forget which episode it is, but they came out with InfluxDB Clustered. So, their whole InfluxDB 3.0 re-architecture, I think started with their enterprise version and their hosted version. And so, InfluxDB Clustered is sort of their on-prem version.
00:10:40
Speaker
So if you wanted to run it in private cloud or hybrid cloud, this would be the version that you use. And it comes along with a lot of the benefits that they claim in 3.0, which is like a hundred times faster queries and stuff like that. I know we were big fans of what they were up to even last year.
00:10:58
Speaker
The naming seemed a little weird because I expected almost like because it was named Clustered, it was this new cluster architecture or something. But from what I understand, reading about it, it's sort of just their on-prem denotation.
00:11:15
Speaker
Okay, I'm just like, still stuck at the 100x improvement that you just mentioned. 100x, like, okay, that's great. On high cardinality data. Okay, like how slow was it before, like the FDNC? 100x improvement. Yeah, they do claim a lot of stuff. Performance wise, they kind of
00:11:37
Speaker
talk about how 3.0 is really centered on performance as one of the big takeaways. I've used it briefly, but I know they're up to a lot of stuff there. I didn't put it in the news here, but I think they had some really great revenue growth too,

Unpacking 'Nodeless' with Madhuri Yechuri

00:11:52
Speaker
but that's a different story.
00:11:54
Speaker
That's the two main stories I had there was a couple other links that we'll put in the show notes here from the blocks and files Folks where they have a storage news ticker. So we do talk about storage given our backgrounds quite a lot So if you're interested in those there's some interesting things about what IO mesh is up to and some other stuff in the data Protection and backup world. So okay check those out if you must That's it
00:12:19
Speaker
Now we're ready to get into today's topic. So I know, let's talk some Nodeless. Yeah, some Nodeless. So we're going to talk about Nodeless today. Uh, if it's a term that's not familiar, um, I'm in the same boat. So we're going to learn all about it. Um, we have Madhuri, uh, Yechuri, the founder and CEO of Elotl. Um, they're up to a bunch of cool stuff. They have a couple of things called Nova and Luna that we'll talk about, as well as just the whole, uh, Nodeless concept. So let's, uh, let's get Madhuri on the show.
00:12:47
Speaker
We'll be right back after this short break. As long time listeners of the Kubernetes Bytes podcast know, I like to visit different national parks and go on day hikes. As part of these hikes, it's always necessary to hydrate during and after.
00:13:04
Speaker
This is where our next sponsor comes in, Liquid IV. I've been using Liquid IV since last year on all of my national park trips because it's really easy to carry and I don't have to worry about buying and carrying Gatorade bottles with me. A single stick of Liquid IV in 16 ounces of water hydrates two times faster than water and has more electrolytes than ever.
00:13:28
Speaker
The best part is I can choose my own flavor. Personally, I like passion fruit, but they have 12 different options available. If you want to change the way you hydrate when you're outside, you can get 20% off when you go to liquidiv.com and use code KubernetesBytes at checkout. That's 20% off anything you order when you shop better hydration today using promo code KubernetesBytes at liquidiv.com. And we are back.
00:13:57
Speaker
Hi, Madhuri. It's so good to have you on the Kubernetes Bytes show. Before we dive into all things Nodeless and what you're up to at Elotl, please introduce yourself and tell our audience a little bit about yourself. Yeah, sure. Thanks, Ryan. Thanks, Bhavin, for having me on it. I have been in this ecosystem since 2015, I believe, when I worked with you, Ryan. Yeah, ClusterHQ days. Good old Flocker.
00:14:26
Speaker
I just said last week or two weeks ago, I was at this Kubernetes meetup in Amsterdam and someone said, you know, the best thing about ClusterHQ were your Flocker t-shirts. Yeah, they were pretty awesome. We can thank Michael Ferranti, I think, for helping design those. He always picked the nicest material.
00:14:42
Speaker
Yeah. Love all things containers and compute in particular, and have been very interested in this ecosystem since the beginning days, when Kubernetes was, I think, just a glimmer in the eye of some product managers at Google. Yeah. And Mesos and Docker Swarm were the coolest toys on the block. Yeah, I feel like if you were in this ecosystem
00:15:11
Speaker
anywhere starting like 2013 to 2015, you saw this quick transition to like, what's the greatest thing right now, right? It was, you know, Mesos, it was Docker and Docker Swarm. Everybody had Mesosphere. I worked at a company, and a lot of companies, that jumped on that boat. And then it was like, oh, Kubernetes is actually ready for production. So let's go that way. You didn't include OpenStack in this list.
00:15:36
Speaker
Yeah, well, I'm sure there's plenty of folks who have that experience as well. Yeah, I skipped the OpenStack time frame completely. Like I went from VMware to directly to Kubernetes and I was like, yes. I mean, those were great conferences, great technology. And I know they pivoted. I didn't get involved. I didn't stay involved, I should say, after they started building Kubernetes on OpenStack. But, you know, it's all fun, I guess, at the end of the day.
00:16:06
Speaker
Yeah, it's like one of our products currently is named Nova. And one of my prospects said, hey, you know, is it related to OpenStack Nova? That was dating yourself. PTSD for that prospect. I even thought of that when I was reviewing all this, but yeah, that's a good point.
00:16:29
Speaker
Okay. So Madhuri, again, thanks for joining us, right? I want to get started with like the most obvious question. And like, that's something that I asked Ryan when he introduced this topic to me. It's like, what is a multi-cluster, Nodeless Kubernetes platform? Like, what is that? Yeah, yeah, for sure. So fundamentally, the idea of Nodeless originated when back in 2015, we were thinking about transitioning from running monolithic apps on
00:16:54
Speaker
and data centers in virtual machines to running them inside containers, whether it be swarm or Mesosphere or Nomad or whatever, as microservices where we had this whole idea of we need to start thinking about apps as not pets but cattle where they are replaced by
00:17:12
Speaker
and then modular. So the idea of Node-less is when we have transition to cloud native deployments where your compute itself is not something that you have to plan for with purchase orders and you have to upgrade at a yearly cadence, you can get a
00:17:28
Speaker
of any shape, at any cost price point, from any cloud provider, with an API call, and release it with an API call. So why are we still thinking about the compute backing your infrastructure as this hand-managed pet, right? When you've transitioned your application from pets to cattle. So the idea of Nodeless is just-in-time compute for your containerized application: the compute comes up when the app comes up, and the compute goes away when the app goes away as well.
00:17:58
Speaker
Okay. At least, okay. That helps me. Right. Because I was like, how can you run a cluster without nodes? So it's just that just-in-time, the just-in-time compute provisioning definitely, like, helps paint a clearer picture. Yeah. I like that idea of, you know, we as an industry got away from treating an individual server as a pet, but now this is more abstracted to, like, a pool of compute, right? Treating that thing as not a pet, too.
00:18:27
Speaker
Now, I guess this is for my own benefit because I don't really know the answer to this and that's perfect because you're here. I've heard Nodeless be compared to serverless or just-in-time compared to serverless, so I've even seen articles that say, well, Nodeless is just serverless. I'd love to hear your perspective on that.
00:18:47
Speaker
Yeah, first of all, both the Nodeless domain, which includes us and a couple of other folks, as well as the serverless folks, we have been criticized as in, you should not define your domain or the thought leadership with respect to what is missing. That's kind of a negative way of looking at it. So we are guilty of that.
00:19:08
Speaker
The idea of serverless is event-driven, just-in-time applications, which needn't be containerized. It's just-in-time application that's coming up in response to the user-driven traffic. The compute backing the application could be always on. The whole idea of just-in-time everything would be serverless at the top and nodeless at the bottom.
00:19:38
Speaker
The pod comes up just in time in reaction to the increase in the web traffic or your Black Friday load or whatever, and your pod is scheduled on a just-in-time provisioned compute using Node-less.
00:19:55
Speaker
So we, in fact, had a talk at KnativeCon last year, I believe, where we did Just-in-Time all the way, where Just-in-Time app was being provisioned on Knative using serverless paradigms. And the pod was running on an EKS or any Kubernetes cluster that was running in Nodeless mode with Just-in-Time compute. So that is true Just-in-Time end-to-end, basically.
00:20:23
Speaker
I like that distinction too, because we, like Bob and I have had episodes about serverless on here and it's just a terrible name for what it actually is. I think we've come to that conclusion, but it is what the industry has picked for what it is. But yeah, the distinction of serverless is really more like application.
00:20:42
Speaker
and nodeless, I like that now. That clears it up for me quite a bit. Again, I think that definitely helps. If you look at AWS Lambda, your apps can come online, but AWS always has that underlying infrastructure running. If I'm running Kubernetes clusters on my own, I still need to worry about the just-in-time provisioning of the infrastructure itself, and this is where Nodeless fits in.

Luna: Just-in-Time Compute Tool

00:21:04
Speaker
Yeah, and actually Knative project enables you to run your serverless functions on Kubernetes clusters. You already have that glue layer, so if you use serverless functions and Knative Kubernetes beneath it and underneath that nodeless, you have just-in-time stack all the way.
00:21:25
Speaker
Okay, so the idea of just-in-time is awesome, right? I know even in the manufacturing thing, like Toyota did the just-in-time manufacturing processes, but what are some of the benefits or challenges of having this just-in-time or node-less architecture? One of the things I can clearly see as a benefit is cost. I don't need to have a full-fledged cluster running all the time, but then does that mean I have to wait for the spin-up time every time my app wants to run? So can you share a bit more around that?
00:21:51
Speaker
Yeah, for sure. So cost is the thing that comes up right away, because right now there is a huge market for post-mortem cleanup of wasted spend of your cloud bills. So just in time actually prevents wasted spend, so you don't have to track. But beyond cost, the biggest value proposition is actually DevOps time and energy in trying to keep track of
00:22:17
Speaker
hundreds of on-demand shapes and spot shapes and your various other container-as-a-service shapes like Fargate. And cloud providers release better, cheaper instant shapes at pretty high cadence every three months or so. So the decision that the DevOps engineer made today might not be the optimal decision three months later.
00:22:39
Speaker
It's ongoing, like, grunt work. And to answer your question about the spin-up time, what we have noticed is the spin-up time varies based on the compute shape that was picked, whether it's on-demand, spot, versus, like, CaaS, like Fargate. And depending on the app, the spin-up time either matters or doesn't matter. So if you're running, for example, QA workloads, it really doesn't matter if your QA test starts now or two minutes later. That's the vast majority.
00:23:08
Speaker
And all your cloud providers guarantee eventual consistency for your spin up of your compute node. So it is going to come up. Of the hundreds of prospects we've talked to over the past seven years, there's only been one prospect for whom it actually mattered, which was a chatbot application.
00:23:27
Speaker
When someone types in a question, you expect the chatbot to respond in like three milliseconds or whatever, right? So for that, in nodeless platforms, you can have pre-warmed instances. So you are keeping a couple of instances that are ready, and, you know, even having the hard-coded number was sufficient to meet the SLAs. You don't have to over-engineer for startup time to begin with.
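The pre-warmed-instance idea can be sketched with a toy model. This is illustrative only; the function name and the behavior are assumptions for the sketch, not taken from any Elotl product:

```python
# Toy model of pre-warmed instances: a small, hard-coded pool of already
# booted nodes absorbs latency-sensitive pods; anything beyond the pool
# falls back to a cold boot (just-in-time provisioning).

def schedule_burst(n_pods, warm):
    """Simulate a burst of pods arriving at once against `warm` ready nodes."""
    results = []
    for _ in range(n_pods):
        if warm > 0:
            warm -= 1                     # consume a pre-warmed node (fast)
            results.append("warm-start")
        else:
            results.append("cold-start")  # provision a fresh node (slower)
    return results

# Two pre-warmed nodes cover the first two pods; the third waits for a boot.
print(schedule_burst(3, warm=2))  # ['warm-start', 'warm-start', 'cold-start']
```

The point of the hard-coded minimum is exactly this fast path: SLA-sensitive pods land on warm capacity while everything else tolerates the provisioning delay.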
00:23:51
Speaker
Okay. And like one last question, right? I'm still trying to wrap my head around this. Do I have like a minimal three node footprint when it comes to Kubernetes and then add more nodes or am I spinning up a new worker group when my application needs it?
00:24:05
Speaker
Right. So Nodeless is actually smart enough to figure out whether your app needs to be bin-packed or bin-selected. Bin-packed would mean the app resource footprint is so low that it doesn't make financial sense to spin up a separate compute node for it. It can actually share a node.
00:24:25
Speaker
That's bin packing. Bin selection would be for workloads like GPU, machine learning, these kinds of workloads, or your lift-and-shift of your giant JVM application that was running in a VM, and you're calling it a microservice or whatever. Your 20-gigabyte microservice.
00:24:47
Speaker
So for those things, bin selection, one-to-one, is the right choice. And Nodeless is smart enough to actually figure out whether your app should be bin-packed or bin-selected. Yeah. Got it. Got it. Yeah. I could imagine, I just think back to, like, my Jenkins days of just, like, you have this sort of pet pool of servers that seemingly everything goes wrong on when you're doing QA. And you're right. Like, it probably only adds a matter of minutes, right? To, like, a full
00:25:15
Speaker
pipeline run or something like that, I could imagine. Yeah, that's super nice. So yeah, it does then boil down to, you have to think about what the application is and then does it fit nodeless and for, I imagine a lot of the times you can figure out sort of like you said, a hybrid model where if you do need those few around,
00:25:37
Speaker
You can. Yeah, yeah. It also really shines in special-case scenarios. Like, for example, if you have Mac iOS build and test, right? You don't have Kubernetes running on Mac right now, but Nodeless will automatically detect that you need a Mac compute shape for your iOS build.
00:25:55
Speaker
and it will spin up the Mac metal instances on your cloud provider, or your ARM shapes or GPU shapes. So you don't have to special-case, like, have separate clusters or a non-Kubernetes environment for your Mac iOS build-and-test teams. So all of that goes out of the window, and it actually lets you consume Kubernetes as this standard black-box interface. Got it.
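The bin-pack vs. bin-select decision described above can be sketched as follows. The thresholds, shape names, and prices here are invented for illustration; this is not Luna's actual logic:

```python
# Hedged sketch of the bin-pack vs. bin-select choice: tiny pods share
# existing capacity; big or special pods get a dedicated, right-sized node.

INSTANCE_SHAPES = [  # (name, cpus, mem_gib, hourly_usd) - all illustrative
    ("small", 2, 4, 0.05),
    ("large", 8, 32, 0.40),
    ("gpu", 16, 64, 2.50),
]

def place(pod_cpu, pod_mem_gib, needs_gpu=False):
    """Bin-select a dedicated node for big/special pods; bin-pack the rest."""
    if needs_gpu or pod_mem_gib >= 16:   # e.g. a 20 GiB JVM "microservice"
        # Bin selection: one-to-one, cheapest shape that fits the pod alone.
        for name, cpu, mem, price in sorted(INSTANCE_SHAPES, key=lambda s: s[3]):
            if cpu >= pod_cpu and mem >= pod_mem_gib and (not needs_gpu or name == "gpu"):
                return ("bin-select", name)
        return ("unschedulable", None)
    # Bin packing: footprint is too small to justify a dedicated node.
    return ("bin-pack", "shared-pool")

print(place(0.5, 1))   # ('bin-pack', 'shared-pool')
print(place(4, 20))    # ('bin-select', 'large')
```

A real provisioner would weigh actual prices and live capacity, but the shape of the decision, share a node or right-size a dedicated one, is the same.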
00:26:19
Speaker
That's a good point to make. I mean, we're on a Kubernetes podcast, but it goes beyond that, right? You can obviously tailor abstracted views of different types of compute, not just Kubernetes. Who knows? People do other things than Kubernetes?
00:26:34
Speaker
Well, let's talk a little bit about what you've been up to at Elotl and how you're sort of building solutions for this. You have a couple of different products over there, Luna and Nova. Let's talk about Luna first. So what is Luna and how does it kind of help accomplish these goals?
00:26:53
Speaker
Yeah, Luna is a just-in-time compute provisioner for any Kubernetes cluster on the major cloud providers. So it's currently used on AWS, GCP, Azure, and OCI, the major cloud providers. And it gives the standard out-of-the-box just-in-time compute for any application running on your standard EKS cluster, GKE cluster, AKS cluster, and OKE cluster.
00:27:21
Speaker
And it can provision on-demand, spot, Fargate, ARM, Mac, any compute shape needed based on the application's resource footprint. So it will auto-detect what the app needs and provision the right compute shape for it. And it will make the smart choice between bin selection versus bin packing as well. Got it. So if someone were to use Luna, do they basically
00:27:47
Speaker
define what those different pools look like for them and how does the application choose which pool? That's the beauty of Luna is you don't have to define anything. You drop Luna on your EKS cluster, your GKE cluster, and you continue shipping your applications through your GitOps pipeline or by hand or whatever CLI you use.
00:28:11
Speaker
on your compute cluster, and Luna will auto-create the node pools if needed. The only thing you can do is blacklist certain node shapes if you don't want them used. For example, machine learning folks are really picky about, hey, I know that this algorithm is not performing well on this GPU shape, so I don't want you to pick this GPU shape.
00:28:39
Speaker
So you can say that I don't, Luna don't consider these shapes, but, and you can also say consider these shapes from these instance families. So there's a lot of like, you know, knobs available for you to, to select or unselect what compute shapes Luna should use. But if you don't, if you don't tweak the knobs, Luna will consider all compute shapes available to you because it's running inside your cloud account on your Kubernetes cluster. Okay.
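As a purely hypothetical illustration of the include/exclude knobs described above (these keys are invented for this sketch and are not Luna's actual configuration schema), such settings might look something like:

```yaml
# Hypothetical sketch only - not Luna's real schema.
computeShapes:
  exclude:
    - g4dn.xlarge      # e.g. an ML team knows this GPU shape underperforms
  includeFamilies:     # restrict choices to these instance families
    - m6i
    - c7g
# With no knobs set, every shape visible in the cloud account is considered.
```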
00:29:08
Speaker
And as an end user, you choose which clouds you want to use, I imagine, or does Luna decide that? Okay. So Luna works, Luna is deployed on a cluster, so its worldview is a single cluster. So you drop Luna on EKS cluster, you drop Luna on GKE cluster 27, etc.
00:29:26
Speaker
Okay, so Madhuri, I have worked with AWS Karpenter before and how it helps you select spot instances so you can save cost. And I know Google Cloud also has a solution called Autopilot that kind of does similar things, right? So how does Luna compare to those cloud-based or cloud-specific solutions?
00:29:44
Speaker
Yeah, okay, first of all, we came to market first. And secondly, the cloud provider solutions have this, like, you know, if I were working on Karpenter at AWS, I would make Karpenter a first-class citizen for EKS clusters.
00:30:09
Speaker
We're actually working with a prospect where they went with Karpenter on EKS, and then they had to start using GCP in addition to AWS, and a Karpenter port for GCP wasn't available. So they're like,
00:30:25
Speaker
we are not interested in writing a Karpenter port for GCP. So then they switched to Luna, because Luna is the exact same functionality on any cloud provider. So you don't have to, you're basically future-proofing your multi-cloud strategy. And by multi-cloud, I don't mean stretched clusters, but if you want your footprint across various siloed deployments across various cloud providers.
00:30:49
Speaker
I think it definitely helps, right? Like it reduces the toil that platform teams have to go through when they want to switch between different clouds. And multicloud is a reality, like not just based on the discussions that we have been having with our previous guests, but even in the ecosystem, right? And when you're talking to people, like everybody's thinking about multicloud, if not already implementing it today. So that helps.
00:31:10
Speaker
Yeah, so they're great products for sure, and it's kind of validation for us that, hey, this is something that is needed in the market. Because the first time I talked about Luna and nodeless, it was at PromCon, I think four years ago, and the audience was really frustrated. They were like, we have Cluster Autoscaler, why would we need something like

Introducing Nova for Kubernetes Clusters

00:31:32
Speaker
this?
00:31:32
Speaker
And it took like three or four years of people experiencing the pain points of hand-managing your node pools to realize, okay, you need to prevent wasted capacity, wasted spend. Right. Right. Otherwise you have to kind of really apply a lot of different tools to accomplish anything similar. And that's just for that one cluster. Right. Yeah. Got it.
00:31:54
Speaker
So, we've talked a lot about state. You have worked directly with Flocker in the past. And I think the question I have is, you know, are the applications using Luna stateful, or how many of them are stateful, and how does that work? Does it basically just, you know, consider the cluster's nodes and go on its merry way and provision volumes as it normally would? Or, yeah, tell us some more about that.
00:32:21
Speaker
Yeah, yeah. So when Luna provisions the just-in-time compute, the compute node is a first-class kubelet worker node. So your CSI will work, your CNI will work; all of your ecosystem components work as is. It's just that the node wasn't provisioned two months ago, waiting to run an application. It came up just in time. So your selection of your CSI driver and how your state works will be as expected.
00:32:50
Speaker
Got it. So in that case, is it limited to the cloud provider's first-class CSI drivers, where you can kind of put a checkmark and say, I want to use this? Or, you know, can they also kind of look at other ecosystem tools as well and say, well, I really like this solution, can you add this to my nodeless pool when it comes up?
00:33:11
Speaker
Yeah, yeah, that's a really good question. So for the template AMI that is used by Luna to provision just-in-time compute, there is a default template, but you can override parts of it. So some of our prospects, for example, have a custom AMI that they baked with a custom CNI. So you just say, hey, Luna, use this AMI instead of the default EKS AMI.
00:33:39
Speaker
And the cloud-init will pull in the right overrides instead of the defaults that come in the default cloud-init. So it comes with the cloud provider default, but you can supply your own. Okay, no, that makes sense. Sorry, guys. Luna provides that template, but they can basically extend it. Right. Okay. Got it.
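The override model described here can be sketched in a few lines: start from a default node template (default AMI plus default cloud-init) and let the user replace only the parts they care about, such as a custom AMI baked with a custom CNI. The field names and values below are illustrative, not Luna's real template schema.

```python
# Hypothetical default node template; a real one would carry much more
# (instance profile, security groups, bootstrap arguments, etc.).
DEFAULT_TEMPLATE = {
    "ami": "ami-default-eks",
    "cloud_init": "default-bootstrap.yaml",
    "labels": {"provisioner": "jit"},
}

def render_template(overrides=None):
    """Merge user overrides on top of the default node template."""
    template = dict(DEFAULT_TEMPLATE)
    template.update(overrides or {})
    return template

# User supplies only a custom AMI; everything else keeps the defaults.
tpl = render_template({"ami": "ami-custom-cni"})
print(tpl["ami"], tpl["cloud_init"])  # -> ami-custom-cni default-bootstrap.yaml
```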
00:34:01
Speaker
Okay. And Ryan started talking about state, right? We have been talking about just-in-time provisioning and scaling up or scaling out. What about scale-down? Like, how does Luna help me when my instances are going away? Talking about state and stateful and stateless applications, right, how does it help me make it non-disruptive or keep things persistent?
00:34:22
Speaker
Yeah, yeah. So when your application is terminated either by the user or let's say it's your machine learning application and the job has completed, once the app is terminated, the underlying compute is automatically terminated by Luna after a certain grace period. The only exception is because Luna is aware of the pricing structure of the compute,
00:34:49
Speaker
Luna is smart about keeping the compute around for longer if needed. For example, Mac1 metal instances: the pricing structure is you're charged for 24 hours, even if you use it for five minutes and terminate it. So Luna is aware of the nitty-gritty details of the pricing structure, so it'll keep the node around for 24 hours in case another pod comes up.
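The pricing-aware scale-down logic described above can be sketched roughly as follows: terminate an idle node once its grace period expires, unless its shape has a minimum billing window you've already paid for (like the 24-hour minimum on mac1.metal), in which case keep it around for reuse. The thresholds and function name here are assumptions for illustration, not Luna's actual implementation.

```python
# Minimum billed hours per shape; shapes not listed are billed per-second.
MIN_BILLED_HOURS = {"mac1.metal": 24}

def should_terminate(shape, idle_hours, hours_since_launch, grace_hours=0.1):
    """Terminate an idle node, unless we are still inside a billing
    window that has already been paid for."""
    if idle_hours < grace_hours:
        return False  # still within the grace period
    min_hours = MIN_BILLED_HOURS.get(shape, 0)
    if hours_since_launch < min_hours:
        return False  # already paid through min_hours; keep for reuse
    return True

# A regular instance is released as soon as it goes idle past the grace period,
# but an idle mac1.metal launched 3 hours ago is kept (24 hours already paid).
print(should_terminate("m5.large", idle_hours=0.5, hours_since_launch=3))    # -> True
print(should_terminate("mac1.metal", idle_hours=0.5, hours_since_launch=3))  # -> False
```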
00:35:14
Speaker
So it's smart in that sense, that it's aware of the underlying cost characteristics and your workload characteristics, so it'll decide whether to terminate immediately or keep it around. Okay. Oh, that makes sense. And, like, talking about the developer lifecycle, right? I know Ryan brought up his Jenkins experience and, like, running QA pipelines.
00:35:37
Speaker
But today, we talk a lot about GitOps frameworks and continuous delivery, and they have a push model and a pull model. That's just applications being deployed. Does Luna integrate with tools like Argo CD or Flux and provision things, or does it just wait for the Kubernetes API server to tell it, okay, I have more workloads coming in, please scale out my cluster? How does it work?
00:36:00
Speaker
Yeah, it works as the second option that you presented. It responds to the Kubernetes API server; basically, the scheduler putting a pod in pending state, it responds to that. So it doesn't have a need to integrate with anything else. So whatever GitOps pipelines you have in your environment, they should work out of the box.
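This reactive model is why no Argo CD or Flux integration is needed: the only signal is pods the scheduler has left in Pending state with no node assigned. A toy sketch, with pods as plain dicts (a real implementation would use a Kubernetes client library watch):

```python
def pods_needing_compute(pods):
    """Pick out pending, unscheduled pods - the only signal the autoscaler
    needs, regardless of whether Argo CD, Flux, or kubectl created them."""
    return [p["name"] for p in pods
            if p["phase"] == "Pending" and p.get("node") is None]

pods = [
    {"name": "web-1", "phase": "Running", "node": "n1"},
    {"name": "train-job", "phase": "Pending", "node": None},
]
print(pods_needing_compute(pods))  # -> ['train-job']
```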
00:36:25
Speaker
Okay. That helps, right? Like I don't have to worry about individual integrations into different tools, and I can move between tools if I wanted to. So that's awesome. Thank you. Yeah, sure. Yeah. Got it. Got it. Cool. So we probably could have had several podcasts for both Luna and Nova, it sounds like, but I'm going to switch gears and talk about Nova a little bit. I guess the first thing is, how does Nova differ from Luna, and generally, what is it?
00:36:51
Speaker
Yeah, so when we started off with nodeless, the big vision was to commoditize compute for Kubernetes across regions and across multiple cloud providers. So we wanted to solve the problem in two steps. The first step is solve it for a single cluster, commoditize compute for a single cluster, and the second level is commoditize clusters for a fleet of clusters.
00:37:13
Speaker
So if you have two, three, five or 1600 clusters, you shouldn't be treating each cluster as a pet because what's happening is when people are migrating to Kubernetes fleets, they are starting off with these fleet of homogeneous clusters with your Kubernetes API server version X.
00:37:33
Speaker
and your CSI version Y and your CNI version Z, et cetera. But pretty soon, as soon as you hand over these clusters to individual business units, someone will install some security patch updates and your clusters will diverge and your clusters have become pets. So all of a sudden, you're not sure if you should terminate a cluster because there are no workloads running or if it is like a special snowflake cluster, right?
00:37:57
Speaker
So Nova basically creates this supercluster from a federation of your workload clusters, and it commoditizes clusters. So your supercluster is simply a Kubernetes API server. So instead of all of your Kubernetes clients, whether it's your CI/CD clients or your kubectl clients, talking to individual pet clusters, they talk to this single server,
00:38:24
Speaker
and the DevOps person inserts policies as to where these apps should land. So these policies give you a lot of power and a rich language in expressing which underlying clusters are the right target clusters for the apps that are coming into the super cluster.
00:38:46
Speaker
Talk a little bit more about those policies. What types of knobs and whistles are available? Yeah, for sure. An example would be an availability-based scheduling policy. If you're running a machine learning workload, and your machine learning workload's manifest says, I want NVIDIA GPU shape X.
00:39:06
Speaker
And let's say you have 10 clusters in region X on GCP and 20 clusters in region Y on GCP. And let's not even go to multi-cloud, right? Multi-region itself. And we are all familiar that not every GPU shape is available in every region on every cloud provider.
00:39:27
Speaker
So what's happening today is people are earmarking: hey, this cluster 27 in region Y has this GPU shape available. And this other cluster in this other region has this other snowflake GPU shape available. By the way, this GPU shape availability fluctuates. It's not a
00:39:50
Speaker
thing that is set in stone, right? So when you have this Nova supercluster, Nova would create a single supercluster out of your 30 clusters. And your machine learning application is scheduled to Nova API server endpoint. And the policy is schedule my app to a cluster that has the available resources for running the application.
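The availability-based policy just described boils down to routing a workload to whichever cluster in the fleet currently advertises the resource it needs. A hedged sketch, with made-up cluster names and GPU shapes (not Nova's actual policy language):

```python
def pick_cluster(clusters, wanted_shape):
    """Return the first cluster advertising the requested GPU shape,
    or None if no cluster in the fleet can satisfy it."""
    for name, shapes in clusters.items():
        if wanted_shape in shapes:
            return name
    return None

# Fleet state as the supercluster might see it: cluster -> available shapes.
fleet = {
    "gcp-region-x-01": {"nvidia-t4"},
    "gcp-region-y-27": {"nvidia-t4", "nvidia-a100"},
}
print(pick_cluster(fleet, "nvidia-a100"))  # -> gcp-region-y-27
```

Because availability fluctuates, a real scheduler would re-evaluate this mapping continuously rather than earmarking clusters by hand.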
00:40:14
Speaker
So Nova knows that this cluster 27 in region Y has the GPU shape available that the app needs, and it will automatically schedule the app to that cluster, if that makes sense. Yeah, it absolutely does. I can tell you, I was working on sort of a problem with geospatial picture modeling, and
00:40:36
Speaker
Yeah, spinning up GPUs. I couldn't tell you how many times Azure would be like, nope, not available, right? Just keep trying, or come back later that night and hopefully you get some, or keep them up and pay for them the whole time. So I could see the value there. Now, do Nova and Luna work together, or are they distinct things?
00:40:55
Speaker
Yeah, Nova and Luna are designed to work independently or together. Luna's value proposition is primarily on public cloud, right? Because just-in-time compute doesn't matter as much on-prem. Whereas with Nova, you can federate clusters on-prem or in the public cloud.
00:41:19
Speaker
So Nova is simply shipping the workload to the right workload cluster that it selects, right? That workload cluster could be running Luna, it could be running Cluster Autoscaler, it could be running Karpenter, it could be running Autopilot, it could be running nothing.
00:41:36
Speaker
So you can combine the two to have true just-in-time all the way. And by the way, Nova can terminate clusters as well. So it does what Luna does, but for clusters instead of nodes. If a cluster is not running any workloads, it will transition it to standby state and then terminate the cluster. So you don't have this graveyard of clusters that you don't know what they were used for and why they need to be running all the time. Yeah.
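The fleet-level garbage collection described here can be sketched as a tiny state machine: an idle cluster first moves to standby, and only after a longer idle window is it terminated. The state names and thresholds are illustrative assumptions, not Nova's actual behavior.

```python
def next_state(state, idle_minutes, standby_after=30, terminate_after=120):
    """Advance a cluster's lifecycle state based on how long it has been
    idle: active -> standby -> terminated."""
    if idle_minutes >= terminate_after:
        return "terminated"
    if idle_minutes >= standby_after:
        return "standby"
    return state

print(next_state("active", idle_minutes=45))    # -> standby
print(next_state("standby", idle_minutes=180))  # -> terminated
```

The standby stage gives workloads a chance to come back before the cluster (and its control plane cost) is reclaimed for good.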
00:42:03
Speaker
I feel like we hear both stories where organizations either, I hear them falling into two categories. One is they have a bunch of shared clusters and application pipelines go to those bigger shared clusters or each individual application or, you know,
00:42:21
Speaker
team gets their own cluster, right? And it sounds like Nova could actually be useful in both cases. Right. Yeah. Yeah. And it actually makes a lot more sense when you think about running your workflows through GitOps pipelines, because with GitOps pipelines, you don't want to enumerate all of your 1600 pet clusters in your Argo pipeline or your Flux pipeline. Yeah, that's fair.
00:42:45
Speaker
So with Nova, you send your application through your GitOps pipeline to the Nova API server endpoint, and Nova will dispatch it. And it also enables you to maintain this homogeneous fleet, right? So you can perform things like cluster upgrades super easily. And you can maintain your LMA stack consistency, your logging, monitoring, and analytics stack components,
00:43:11
Speaker
and make sure that you have consistent LMA stack versions on all of your clusters. Red clusters get the red version and blue clusters get the blue version, because Nova's scheduling policies enable you to spread, schedule, replicate, things like that.
00:43:29
Speaker
Okay. So Madhuri, you said Nova can definitely help you delete clusters. So I'm assuming it can also, like, deploy new clusters as and when needed. If I need a different type of GPU shape for my deployment, it will create a new cluster in a specific region where it's available and give me access to it, right?
00:43:48
Speaker
Okay, and then when Nova is managing these clusters or deploying these clusters, does it spin up individual control planes? The reason I ask is Red Hat is working on an interesting thing called hosted control planes, where the control plane nodes are actually running as pods on the management cluster or the supervisor cluster, so you're not spending dedicated resources on control plane nodes. Does Nova work with that, or does Nova have a similar architecture?
00:44:16
Speaker
So Nova deploys a Nova agent on your workload cluster control plane. So your workload cluster control plane could be a first class control plane like an EKS, or it could be running as a virtual control plane in the scenario that you talked about. It doesn't matter. So Nova simply has this agent running on each workload cluster that it uses to schedule your workloads.
00:44:42
Speaker
Gotcha. And then you said Luna is cluster-specific, right? Like it performs things inside the cluster, so it's definitely not multicloud on its own. Is Nova multicloud? Like, can I have a master cluster (is master cluster the right term? That's my first question)? But then, if the master cluster is running in GCP or GKE,
00:45:00
Speaker
can it spin up these workload clusters in EKS as well? Yes. Yeah. The primary use case we're seeing with Nova is multi-region fleets and multi-cloud fleets. That's awesome. The cloud bursting use case is also something that would be super valuable. You can have your primary cluster be on-prem, and if you want to burst to your cloud provider for excess capacity needs, you can do that, and it can bring your footprint back to on-prem.
00:45:29
Speaker
Okay. And I think that opens up another interesting use case in disaster recovery, right? Like, if I don't want to pay for that pilot cluster or that secondary cluster all the time, I can just have Nova provision it on demand, or just in time, and have a DR site that's ready to go.
00:45:45
Speaker
Yeah, yeah. HA and DR are actually really compelling use cases. When we built Nova out, our vision was, when people would come to us and say, hey, we have five clusters, we'd say you're too small for a fleet manager, so you're better off managing your five clusters as pets. But then we started hearing about the HA/DR needs of stateful applications, where you have
00:46:08
Speaker
a three-way active-active-active deployment or an active-standby deployment, and failure detection and failure recovery are needed even for two clusters. Because Nova has this God's-eye view of what's happening with your clusters as well as your applications, right?
00:46:26
Speaker
it can spin up your DR site just in time, or it can spin up an HA site: if your region A goes down on cloud provider A, then it can spin up a region B cluster just in time and have the second active replica deployed on that newly provisioned cluster.
00:46:47
Speaker
Okay, that makes sense. I think the only thing I'm trying to think is how would data get copied

Community Engagement and Future Directions

00:46:52
Speaker
over? But again, we can dive into it later. Yeah, so for our first version: we are compute folks, and we want to work with the state folks and the networking folks in the ecosystem. So right now, Nova does whatever your underlying cloud provider does, whatever knobs it gives you.
00:47:12
Speaker
If your cloud provider lets you share volumes between regions, then the volume is available to recover. Or if your DB is doing in-DB replication, then the app is taking care of it. But looking forward, we want to work with the storage providers and the networking providers in the ecosystem to build out the storage and networking model as well. Because, like I said, Nova has this God's-eye view, so it knows when
00:47:40
Speaker
Yeah, when things go wrong. So it can initiate the network and storage failovers. But we as a company are not interested in tackling those problems by ourselves. Yeah, because then you open yourself up to a ton of very specific use cases where
00:47:57
Speaker
you'd have to be aware of more about the application itself. Whereas now, it sounds like you can do the orchestration piece, and it's up to whatever organization or team to make those movements, whether that's application-to-application data.
00:48:12
Speaker
Right. And what we've seen with prospects so far is people have these very opinionated preferences for storage and networking. So we don't want to, like, you know, impose things on them. We want to work with whatever they thought was the best choice for them, right?
00:48:29
Speaker
Got it. Well, if we can talk about a few use cases, we don't have to name names, obviously, but I'd love to hear about some of the top use cases that come to mind in terms of how people are using Nova. Yeah, for sure. So availability-based scheduling for machine learning workloads, that is a pretty compelling one, where you want to run a machine learning job
00:48:54
Speaker
that is either doing training or inference, and the shapes are bespoke on certain cloud providers. And because machine learning shapes are also expensive, you do not want them to be always on. So that's another reason
00:49:10
Speaker
why it's a really compelling use case. HA and DR of databases: we're working with Percona and a couple of other folks, and we're going to be sharing a lot of that content at Data on Kubernetes Day at KubeCon.
00:49:27
Speaker
Yeah, so that's a very compelling use case: even with two or three clusters, automating HA/DR for those workloads is compelling. Cluster upgrades is another compelling use case, where you want to perform cluster upgrades. And if you guys remember this whole maintenance mode for virtual machines,
00:49:48
Speaker
you want to drain your workloads. Because the clusters are commoditized, by changing the scheduling policy, Nova will automatically reschedule your workloads. You can say, I want my workload to run on blue clusters instead of red clusters. So your red clusters will be drained of all the workloads so you can perform upgrades. So fleet management is another compelling use case.
00:50:15
Speaker
Got it, got it. And I imagine there's a way where you could do that.
00:50:19
Speaker
and have your application kind of running on both blue and red and send traffic to one another. Yeah, that's another scheduling policy, which is spread scheduling. You can say, I want 80% of my deployment's replicas to run on red clusters and 20% to run on blue clusters, and slowly transition over. And one of the Argo talks we had at Rejekts this summer was a demo of that, how you can slowly transition your traffic over.
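The spread-scheduling policy just described can be sketched as a proportional split of a deployment's replicas across cluster groups. The rounding strategy below (leftovers go to the heaviest group first) is one simple choice for illustration, not necessarily how Nova resolves remainders.

```python
def spread(replicas, weights):
    """Distribute replicas across cluster groups proportionally to
    integer weights, giving leftovers to the heaviest groups first."""
    total = sum(weights.values())
    counts = {g: replicas * w // total for g, w in weights.items()}
    leftover = replicas - sum(counts.values())
    for g in sorted(weights, key=weights.get, reverse=True):
        if leftover == 0:
            break
        counts[g] += 1
        leftover -= 1
    return counts

# An 80/20 red/blue split during a gradual cluster transition.
print(spread(10, {"red": 80, "blue": 20}))  # -> {'red': 8, 'blue': 2}
```

Shifting the weights over time (80/20, then 50/50, then 0/100) is what lets you drain traffic from red clusters to blue ones gradually.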
00:50:49
Speaker
We'll have to grab any links to those and include them in the show notes here, because I feel like I want to watch them too. Okay, I think that's all I have for that one. Bhavin, you had one more here. Yeah, I just wanted to clarify things. I think we covered that Luna is cloud-only, but Nova can work with on-prem deployments as well. Is that true? And then, does it support all Kubernetes distributions?
00:51:14
Speaker
Yes, so you can deploy the Nova agent on any workload cluster that the Nova control plane can ingest as a workload cluster. Having said that, we have only tested it on EKS, GKE, AKS, OKE, and for on-prem we have tested it on MKE, Mirantis.
00:51:39
Speaker
So as opportunities and prospects come about, we'll test more. Technically it should work on other distros, but we can't claim that until it's tested. Yeah, of course. Yeah. I think, thank you for drawing that line, because we see vendors in the ecosystem like, yeah, we work with everything, but only when a customer actually tests it, it's like, ah, yeah, we need to change or fix something. No, I appreciate that.
00:52:08
Speaker
Great. Well, I do want to give you a chance to talk about the community aspects of everything that you do. But before that, we are going to do our little ChatGPT section here. So I asked ChatGPT, what are the coolest things about nodeless computing? And so you can give me your answer, but I'll do a synopsis of what it also has here. Okay. Am I going first, or ChatGPT? Yeah, you can go first.
00:52:35
Speaker
The coolest thing about nodeless is eliminating the ongoing maintenance headache of compute management for Kubernetes, across multiple cloud providers and across regions, and preventing wasted spend.
00:52:50
Speaker
Got it, got it. Yeah, so when I asked ChatGPT, it was interesting. And this is where I kind of started off in the beginning of this, where it says, nodeless computing, also known as serverless computing. That's its first line. So I wanted to make sure we drew that distinction, and we did. So that's super helpful. It talks about how it's cool and innovative, and kind of lists out a whole bunch of different things that it associates with nodeless, from wherever it's pulling out this information.
00:53:18
Speaker
And it talks about auto-scaling, nodeless computing being able to handle scaling of the resources. We definitely talked about that. It talks about cost efficiency, where with nodeless computing you only pay for the actual resources and the processing time, and those kinds of things, which, I think, it's pulling from a lot of the serverless stuff. Yeah. It even talks about event-driven. And so I think, you know, information-wise, it's definitely pulling some interesting things there. You can focus on code, not infrastructure. I actually like that one,
00:53:47
Speaker
where developers can concentrate. We talk about this in various aspects of the Kubernetes community: with the more abstractions you get, can you actually focus on the things that matter? High availability, fault tolerance; it talks about easy integration. I would draw a line from that to the supercluster, where you have one thing to think about.
00:54:10
Speaker
Operational overhead, you also mentioned. So yeah, this looks like the discussion that we just had, right? Like, if we generated a transcript, this is it, with a few other ones. Shorter time to market, which I think was an interesting point there,
00:54:24
Speaker
and community and ecosystem was its last one. It's saying it's a thriving developer community and a rich ecosystem of third-party tools and libraries. I think that's a perfect segue, thanks, ChatGPT, to talk about community a little bit more. Tell us about what you're doing in the community. You mentioned DoK, Data on Kubernetes Day, which is a day-zero event
00:54:50
Speaker
at KubeCon, if folks didn't know. Talk about Rejekts, which is also before KubeCon, right? And so, yeah, just give us some more about that. Yeah, so we kicked off a multi-cluster meetup group; the EU version, in Amsterdam, was kicked off two weeks ago,
00:55:09
Speaker
in collaboration with a European cloud provider called Leafcloud. Multi-cluster is super interesting in the European market, especially because they don't have these top three vendors that are eating up the market. They have a lot of small cloud providers that are backed by their governments.
00:55:25
Speaker
So DR and HA become things that people need to plan for from day zero. So that was really interesting. We talked to a lot of people that were using Kubernetes as the de facto deployment platform and are thinking through their multi-region and multi-cloud strategies from day one. That was interesting. We are launching the multi-cluster meetup group
00:55:50
Speaker
the U.S. version on the 20th of this month. If you're in the San Francisco Bay Area, please join the meetup group. And we have some cool talks. We have a principal engineer from Intuit, who runs one of the oldest and largest deployments of Kubernetes in production, talking about multi-cluster. And we have folks from Cloudera talking about data, actually, building their platform for data on Kubernetes.
00:56:15
Speaker
And besides that, we have the multi-cluster podcast. So if you are a power user or practitioner, please let me know and come on the podcast. And we used to do a lot of work in the virtual kubelet community; virtual kubelet sits in the Luna space, the single-cluster space. Unfortunately, the project couldn't take off to its full potential because of Kubernetes conformance issues.
00:56:43
Speaker
It's a community that's really close to our heart because we've been involved in it for a long time. And I think it's still a CNCF project. Got it. Makes sense. And will you be at KubeCon Chicago then? Yes, of course. Yeah. So seek Madhuri out, come talk to us, ask to be on her podcast. And yeah, we'll put all the links that were mentioned in this episode in the show notes: any of the talks, we'll try to get anything that's available, like the Slack community, whatever; we'll make that available.
00:57:11
Speaker
But I think that's going to end it. I feel like we could talk another hour and a half, two hours about both of these things. It's a very interesting topic, I think a very powerful topic. But I want to thank you for coming on the show today. Yeah, it was really fun chatting with you, Ryan. And thanks so much for the insightful questions.
00:57:27
Speaker
Yeah. Thank you for being an expert. All right, Bhavin, that was a fun conversation. I know the nodeless term is much more cleared up for me, you know, coming into it. I saw a ton of articles, like I said in the interview, about sort of how it is serverless, and I was like, yeah, well, that can't be right. So there is sort of a way of explaining how
00:57:51
Speaker
the application breakdown uses serverless, but when you want to just have the node side of things, the infrastructure side of things, it's nodeless, and you can kind of stack those on top of each other, which totally makes sense. I like that layered viewpoint; at least in my brain, it makes a lot more sense for me. What did you think? I think my interpretation was, like, nodeless is
00:58:13
Speaker
basically just-in-time computing. You don't want to pay for resources that you're not using. This is a framework and an automation toolset, with Luna, that can help you provision compute nodes for your Kubernetes clusters as and when your applications need it.
00:58:29
Speaker
It definitely helps save on cost. I really liked the point where she said Luna helps prevent wasted spend, instead of going back and trying to fix things after the fact. So prevention is better than cure. Let me help you reduce costs right now, instead of six months down the line when you have already spent millions of dollars on AWS. Let's just start by designing a good solution for your applications. I liked that part a lot.
00:58:58
Speaker
Yeah. Yeah. The whole concept is really cool. I liked the term supercluster, or as I called it, master cluster. Right. Yeah. I got confused, like, did she say supercluster, some higher-level entity cluster? I've heard supercluster in other contexts, maybe OpenStack, or maybe even incorrectly; I forget where. But, you know, allowing sort of that consistency across a lot of clusters, right? So I think treating a cluster itself as a commodity is something a lot of people probably aren't used to doing,
00:59:26
Speaker
even if they're giving out a ton of them. But I think, tying back to everything we hear about security these days, it just improves that whole security posture when you have, I think you can use the term, God's-eye view, right? Yeah. Which can be really powerful in those scenarios, to have those viewpoints. So really cool stuff, really fun stuff. I know I can learn more about it.
00:59:50
Speaker
Yeah, like the ability to provision clusters on demand is really cool. I know you brought up like people started by deploying like really
00:59:59
Speaker
huge clusters and doing it in a multi-tenant way, but now slowly they're moving to smaller clusters per individual developer or per application team. I think using a solution like Nova definitely helps customers adopt that mindset. You can have policies already defined, and then whenever a developer needs a cluster for any task, they can just spin up a cluster. The super cluster spins it up with the right policies, the right security guardrails, and they can test everything out.
01:00:28
Speaker
delete the cluster again; just-in-time, like, on-demand clusters. So that definitely helps, I think. Great episode. Yeah, exactly. I think a lot of detail, and the fact that you can do sort of a hybrid model, right? If you're not, like, totally sold on it. Because we talked a lot about the sort of warm-up times in our serverless episodes, and I was like, well, that's got to be the same concern, but worse, in this scenario. But yeah, it sounds like there's a way to run it either way.
01:00:52
Speaker
Yeah, agreed. I think we could have Madhuri back on the show and talk for a whole other couple hours if we really wanted to about this kind of stuff. A reminder for anyone who wants to sync up with us or Madhuri: she will be at KubeCon. If you're going to be at KubeCon, go check that out. Or the various meetups that we talked about; we will link them in the show notes if you are in one of those areas and are interested.
01:01:14
Speaker
Well, I just have one more thing to share with our listeners, right? I know we have been doing this podcast for a while. We did start a YouTube channel, and most of our listenership still comes from the audio format. But if you are a listener, go and hit subscribe and like on the YouTube channel, right? Like, give us
01:01:31
Speaker
give us a boost with those YouTube algorithms, so YouTube suggests the podcast to other users and people that are interested in Kubernetes. That will help us out a lot. So just one action item or call to action from my end this time. Absolutely. Absolutely. Come join our Slack channel, give us some feedback on the episode, or if you're at KubeCon, come talk to us. If you have a topic that'd be great for the show, we'd love to talk to you. We're going to be doing, hopefully, a live thing like we did last year at Detroit, which was a lot of fun, and put out some episodes that are live. So.
01:02:01
Speaker
Cool. Well, I think that brings us to the end of today's episode. I'm Ryan. I'm Bhavin. Thanks for joining another episode of Kubernetes Bytes. Thank you for listening to the Kubernetes Bytes podcast.