
Running Kubernetes at the Edge using K3s

S4 E3 · Kubernetes Bytes

In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Jason Dobies, Director of Edge Engineering at SUSE, to talk about all things K3s. They discuss why Kubernetes is well suited for Edge deployments, why K3s was built, and how it helps users architect their edge solutions. The discussion covers topics like security, storage, and high availability at the Edge.

Check out our website at https://kubernetesbytes.com/  

Episode Sponsor: Elotl  

  • https://elotl.co/luna
  • https://www.elotl.co/luna-free-trial 

 Timestamps: 

  • 01:20 Cloud Native News 
  • 06:01 Interview with Jason 
  • 50:42 Key takeaways  

Cloud Native News:

  • https://vmblog.com/archive/2024/01/29/dynatrace-to-acquire-runecast-to-enhance-cloud-native-security-and-compliance.aspx
  • https://chronosphere.io/news/chronosphere-acquires-calyptia/  

Show links: 

  • https://k3s.io/ 
  • https://www.linkedin.com/in/jdob/
Transcript

Introduction to Kubernetes Bytes Podcast

00:00:03
Speaker
You are listening to Kubernetes Bytes, a podcast bringing you the latest from the world of cloud native data management. My name is Ryan Walner and I'm joined by Bhavin Shah coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.
00:00:30
Speaker
Good morning, good afternoon, and good evening wherever you are. We are coming to you from Boston, Massachusetts. Today is February 7, 2024. Hope everyone is doing well and staying safe. Let's dive into it.

Monthly Reflections and Personal Insights

00:00:41
Speaker
I can't believe we're already in February; January just flew by. I had a couple of work trips and one small PTO. I've been trying to maintain this cadence. Hopefully you guys like the episodes that we have been bringing you this year.
00:00:56
Speaker
And I'm not making all that great progress on my new year resolutions, but at least I'm tracking them this year. So hopefully you are doing better at those than I am at mine. Before we dive into the topic for today, we have a great interview lined up around one of the most requested topics from our listeners.

Industry Acquisitions: Dynatrace and Chronosphere

00:01:15
Speaker
Before we do that, let's talk about a couple of things that are happening in the cloud native ecosystem.
00:01:22
Speaker
In terms of news, we have two acquisitions. The first one being Dynatrace acquires a startup called Runecast. Runecast was a vendor that I came across when I think I went to VMworld or VMware Explore back in 2018 or 2019.
00:01:44
Speaker
They were a security firm that was helping provide a security posture management solution to VMware customers. I think that Dynatrace acquiring them definitely helps them expand their contextual security protection and analytics platform and helps them bring those security posture management capabilities to hybrid and multi-cloud environments.
00:02:09
Speaker
Personally, I also feel this gives Dynatrace a foot in the door with existing VMware customers, especially customers that are looking to modernize their infrastructure and application stacks as we are in the first quarter of calendar 2024. But yeah, that was an acquisition where the price wasn't disclosed. I thought we'd discuss it since Dynatrace is a vendor that many people use in the cloud native ecosystem.
00:02:37
Speaker
Next up, we have another acquisition: Chronosphere acquires Calyptia. I hope I'm pronouncing this right, guys. Calyptia was, I think, the original creator of the Fluent ecosystem of Fluentd and Fluent Bit. All of those open source tools that you see in the CNCF landscape, those were built by Calyptia. Again,
00:02:59
Speaker
I only found out about them because of this acquisition news, but again, it seems like an important vendor in our ecosystem, right? Because when we were preparing for our Certified Kubernetes Administrator or Application Developer courses, or whenever we talk about logging in general when it comes to these cloud native applications, we think about Fluentd or Fluent Bit running on our hosts as a DaemonSet, right? So
00:03:21
Speaker
This acquisition will allow Chronosphere to add the observability pipeline product that was built on top of Fluent Bit into the Chronosphere platform, which enables routing, transformation, and optimization of log data at scale. We'll link to the press release in the show notes so you can read more about how these two companies are planning on working together going forward. But another exit for a startup in the cloud native ecosystem.
00:03:51
Speaker
Personally, I have a feeling that 2024 is going to be the year of mergers and acquisitions, if not IPOs.
00:04:00
Speaker
We have been seeing a lot of mergers and acquisitions to start the year, but I'm hoping that the IPOs start rolling in the second half, so the third or fourth quarter of calendar year 2024. But here's to hoping, right?

Interview with SUSE's Jason on Edge Computing

00:04:14
Speaker
That's it for news today. I couldn't find anything else that's happening from any of the vendors in the product ecosystem, so we'll just keep it short for the news today.
00:04:22
Speaker
Diving into the topic for today, we are going to be talking with Jason from SUSE. He's the Director of Edge Engineering. Again, a great participant in the cloud-native ecosystem or cloud-native community. I know he has done stints at Red Hat and Google Cloud and has been a vocal advocate for Kubernetes and OpenShift and all of the technologies that we love today.
00:04:47
Speaker
So, we'll be chatting with him about K3s, what it is, how to deploy it, how it works, how the security and storage work and all the good things. So, without further ado, let's bring Jason on the pod. This episode is brought to you by our friends from Elotl. Elotl Luna is an intelligent Kubernetes cluster autoscaler that provisions just-in-time, right-sized, and cost-effective compute for your Kubernetes apps.
00:05:13
Speaker
The compute is scaled both up and down as your workloads' demands change, thereby reducing operational complexity and preventing wasted spend. Luna is ideally suited for dynamic and bursty workloads such as dev/test workloads, machine learning jobs, stream processing workloads, as well as workloads that need special resources such as GPUs or ARM-based instances.
00:05:41
Speaker
Luna is generally available on Amazon EKS, Google Cloud GKE, Azure AKS, and Oracle OKE. Learn more about Luna at elotl.co/luna and download their free trial at elotl.co/luna-free-trial.
00:06:01
Speaker
Hey Jason, welcome to Kubernetes Bites. Thank you for joining us for this episode to talk more about K3S.

Benefits and Strategies for Kubernetes at the Edge

00:06:08
Speaker
Why don't you introduce yourself and tell us more about what you do at your day job. Sure. And thank you for having me. I'm excited. It's been too long since I've got to sit here and nerd chat with somebody. And I'm really excited for this.
00:06:20
Speaker
So my name is Jason Dobies. I am a director of engineering at SUSE. SUSE has been known for a long time for our Linux distribution; a couple years ago we purchased Rancher and became a major player in the Kubernetes space. And then very recently, over the last year, we have built up an Edge department.
00:06:41
Speaker
I'm one of the two directors of engineering in that Edge department. We're looking to flesh out our portfolio and try to release something in the next couple of months. In addition to that, I'm an adjunct professor, which is a really cool sounding way of saying part-time. I teach software engineering in the spring and then senior projects in the fall. And I mention that now because there's a handful of times I may say something like my students, and without that intro, it gets very awkward picturing me at work referring to them as my students.
00:07:11
Speaker
Okay, I keep forgetting about the professor thing. So I'm glad that you brought that up. So some background for this episode, right? Ryan and I did a Kubernetes at the Edge 101 episode where we spoke about some of the challenges that exist at the edge and talked about K3s and MicroK8s and some of those solutions that are out there. And then we actually got feedback like, oh, can you do a deep dive on K3s? And when I saw that you now lead engineering at SUSE around Kubernetes at the edge, I was like, okay, perfect guy to have on the pod. Yeah, that worked out well.
00:07:40
Speaker
So let me start there. Like, I want to get your perspective: why do you think Kubernetes in general is a better solution for these edge deployments? Go ahead. Yeah, so better is an interesting term there because, I don't know, it's
00:07:57
Speaker
interesting depending on what you compare it to. We're in a really fun space right now because there's really no incumbent. We're not looking to compete with something that's been around for decades. And, you know, SQL has been around for decades by now. We're not like that. We're in an area where the space is growing, but the solutions just aren't there yet. So when you say better, it's an interesting question because there's no real incumbent that we're like, hey, we're going to take this on, we're going to replace it.
00:08:27
Speaker
But why is it good? There's a variety of reasons here. One of the things is that we have a lot of customers and a lot of people moving to Kubernetes in general. For reasons I'm sure you have covered a thousand times now on this podcast, so I won't dig too deep, but I will mention that being able to put it at the edge
00:08:47
Speaker
keeps the same APIs, same deployment structure, keeps all of your knowledge that you're used to, and lets you apply that now to the edge. So I'm certainly not about to say all of the problems are solved. And if they were, I wouldn't have a job and we wouldn't be having this conversation. But it does reduce quite a bit, where your learning curve is not quite so huge, because you understand how to interact, you understand this declarative model and how to say this is
00:09:16
Speaker
what I want. And you, you of course being the admin who is comfortable with Kubernetes, which in and of itself is different from your traditional admins who like to know every little bit and piece of where everything is running; that doesn't scale to the edge. So being able to rely on Kubernetes and say, look, just get this out there, keep it running and tell me if something's wrong, is a really, really nice incentive there.
00:09:43
Speaker
On top of that, you get all of the other projects and the ecosystem around Kubernetes that in varying degrees are applicable to the Edge. Some of them are more heavyweight, but at the end of the day, a lot of the work you've done to certify for security and the hardening in your data center now basically gets transferred to the Edge with Kubernetes out there. You can fit it into a lot of your existing workflows.
00:10:10
Speaker
At the risk of trivializing it too much, you basically get a similar API that you're used to working with inside a data center, but now it's going to bridge to all of these edge systems and all of these edge environments where you traditionally can't interact with them in the way you would with a data center.
00:10:27
Speaker
No, I think I like that answer, right? Like the consistent API server experience because in the past I used to work at Lenovo and we had like an edge solution which was more hardware based. We had like a ruggedized server that we sold to customers who wanted to deploy like a two node thing, but then eventually talking to different customers and even
00:10:44
Speaker
at the same customer, different sites became kind of that snowflake environment, right? Like they didn't know what was actually being deployed or how it was being managed. Some were running virtual machines on top of like a Windows Server instance, some were running like bare metal applications. So I think Kubernetes definitely brings that consistent experience for sure. Yeah, yeah. And you just completely verified it. So let's just keep moving.
00:11:08
Speaker
Okay, so the next question I wanted to ask was: Kubernetes is great, right? Like, as you said, we have covered it 1000 times already. But why did we need a different distribution? Why couldn't we just take Kubernetes and run it at the edge? Why K3s? Yeah, so I love this as a question, because so much of this speaks to my experience with K3s. So I've been working with Kubernetes probably six or seven years now. I've played with different distributions and models.
00:11:35
Speaker
And a lot of them are very heavyweight and fat. So I was doing developer advocacy, trying to convince developers like, yeah, this is why you want to use Kubernetes, but you need six nodes in AWS and you can practically hear the bill ticking up as you're running it. And that's a very hard sell to your typical developer who, and I know I'm not talking about edge yet, but I'll get there, I promise. But that's a very hard sell to your
00:12:03
Speaker
traditional developer who's like, I just want to run it on my laptop. They always say, like, so I can do it on a plane. I sleep on planes, but that's beside the point. With the edge, let's be fair, I thought of that one. I actually just popped my stress ball on top of everything else. Now it's covered in sand.
00:12:20
Speaker
So what do we need? We needed something lightweight. So it started from this developer setup. The first time I played with K3s, by comparison, it was incredible. I ran basically a single command and then it just looked at me in the shell and I'm like, okay. And I kubectl get nodes-ed, and it ran. I was like, oh my God, this is insane.
00:12:43
Speaker
So now let's take that from a development environment and everyone's like, this is awesome. I can finally do stuff on my laptop or in my home basement.
00:12:52
Speaker
I'm sorry, my home lab in my basement. But now that's very similar in a lot of ways to your Edge environments. You have these lower resources. You don't want to be spending massive amounts of compute power for a coffee shop down the road or something like that. So you want something very, very lightweight, incredibly easy to install. And that's admittedly a loaded statement. I mean, depending on how much tweaking you want to do, you can always overcomplicate it.
00:13:21
Speaker
Really, that lightweight aspect of it was absolutely compelling from the start from a developer standpoint. And then as edge kind of grew, we're like, this is kind of perfect, actually. This is exactly what we need out there.
00:13:34
Speaker
OK, and that makes sense, right? And my next question focuses around some of those aspects. We have a technical audience for this podcast, so don't hold back. But the question is, how is K3S, from a deployment perspective, from an architecture perspective, different from a vanilla open source Kubernetes or any of the other Kubernetes distributions that are out there?
00:13:56
Speaker
Yeah, and I'm glad you mentioned this is a technical audience because I can just dive in. The easiest one is: this is a single binary, about 100 megs. And that in and of itself is really wild because it is able to embed a lot of the core infrastructure you expect to see with Kubernetes inside of this binary.
00:14:13
Speaker
So I alluded to it earlier, but the install at its simplest, and again this is slightly different for production, is effectively: you download a shell script, you run it, and ta-da, you have K3s. So now we're looking at this single binary with
00:14:29
Speaker
everything inside of it. In a default deployment, you'll actually be running SQLite instead of etcd. Obviously, there's scaling issues there. It's the trade-off, right? Yes, you don't scale like you would, but at the same time, you're not using the resources you would either. And back to my home lab or Raspberry Pi or my laptop: I don't want to be running a full etcd server, especially if I don't need to.
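For listeners who want the concrete shape of that trade-off, here is a sketch based on K3s's documented configuration (not something walked through in the episode); the default needs no configuration at all, and the file path and key come from the K3s docs:

```yaml
# /etc/rancher/k3s/config.yaml -- illustrative sketch
# Default single-node install: no datastore settings needed; K3s keeps
# its state in an embedded SQLite file instead of running etcd.

# To opt into embedded etcd, e.g. when you plan to add more servers,
# start the first server with:
cluster-init: true
```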
00:14:54
Speaker
So you have all of this as this packaged binary and you can still run it as a systemd service. At the end of the day, though, it is still Kubernetes. Same ports are open, same APIs, same data types, same manifests. So let me be very clear that it's
00:15:13
Speaker
different from K8s only at that architecture standpoint. Actually, let me get back to that for a second. You can still absolutely deploy it multi-node in an HA server/agent environment. Your vanilla install, the one I would typically run on my laptop or in a CI/CD system, is going to be a single node, but there's absolutely the option to set up your HA clusters and everything you would expect out of Kubernetes. It's just smaller and very lightweight.
00:15:43
Speaker
It's also scalable as you would expect it to be; we do have customers running over a thousand nodes in a single edge cluster on top of K3s. Yes, and I'm sure you got the reaction where people look at it and say, no way, a single binary? Like, this is cute.
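As a rough sketch of that server/agent HA layout (assumed from K3s's documented flags; the hostnames and token below are placeholders, not from the episode):

```yaml
# First server: /etc/rancher/k3s/config.yaml (starts embedded etcd)
cluster-init: true
token: example-shared-secret   # placeholder; generate your own

# Additional servers: join the existing cluster instead
# server: https://server-1.example.com:6443
# token: example-shared-secret

# Agents (workload-only nodes): same file, pointing at a server
# server: https://server-1.example.com:6443
# token: example-shared-secret
```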
00:16:05
Speaker
Exactly. I'm sure that was part of my thought initially, like, okay, this is cute, like, give me something real. And then you start to look around, like, people are using this, and they're using it very, very well. Interesting. Okay, one of the questions that I had for today was: how small does the setup need to be? Like, if it's a single node, non-HA thing, can it run on just one node? Or does it still need three nodes for some sort of HA? But if there are people running 1000 nodes, that covers the other side of it for sure.
00:16:33
Speaker
That's exactly right. And you know, when you're talking about the edge, HA is not necessarily table stakes like it is in many environments. We actually have been researching two-node HA, which is a bit of an oxymoron in and of itself. But looking at these smaller footprints that don't want a massive HA overhead because they don't need it.
00:16:54
Speaker
I refer to this coffee shop type of model where the world continues to go on. That's probably a bad example. If the coffee shop goes down and people can't get caffeine, that is riot worthy, but come with me on this. I'm glad you appreciate that. Everyone's like, what are our mission critical applications? Healthcare? I'm like, it's the coffee, people.
00:17:15
Speaker
But yeah, they don't necessarily need full HA at the edge. You have other options for your uptime. You have situations where the uptime isn't quite as critical. And this is, I'm gonna pause on this because I'm sure the questions will come back to it, but Edge has such a variety of use cases that HA is not the be all and end all of everything that people need to deploy.
00:17:41
Speaker
Yeah, it definitely has a different meaning when it comes to these edge locations, right? Last year, we had somebody from Chick-fil-A on the podcast and they run a three-node Intel NUC based K3s cluster at every one of those Chick-fil-A locations. And they were not just thinking about
00:18:00
Speaker
HA in terms of the Kubernetes or K3s clusters, but they were also thinking about HA in terms of network connectivity, because they still had a core environment sitting outside the edge locations. And they're like, if my physical network that I'm paying money for goes out, they had an LTE fallback. So HA definitely has different meanings, different implementations when we're talking about these edge deployments. Yeah. I love how you put that: different meanings, different implementations.
00:18:27
Speaker
It's an interesting twist for people who have been preaching HA for years now to start thinking, okay, it's not a data center where we can very easily section things off and have redundancies and redundancies in our network and power and so forth. It's funny, it's almost, when you think back to early computing in the 70s and 80s when
00:18:50
Speaker
space was at a premium, memory was at a premium. And then we all got really complacent when things blew up and all of my code is fat and ugly and just takes up as much space as you give it. And all of a sudden now we're like, oh crap, we got to go back to these smaller types of setups. And we actually have to start thinking about assumptions we may have made during this boom of extra hardware and say, all right, you know, does this actually apply here?
00:19:15
Speaker
Yeah, no, I completely agree. I think I've anecdotally heard stories that senior developers or staff developers at organizations just took resources for granted. Like, oh, it's on AWS. Like I don't need to build my application efficiently. I can just put more resources behind it. And now it's coming back to them and then they now have to like design the application almost from scratch and make sure that they can keep up without adding a lot of cost underneath it.
00:19:43
Speaker
Oh, that's exactly right. For years, and you know, this speaks volumes to how lazy of a coder I am, but for years I haven't had the concern about memory footprints, because, eh, just throw a couple more gigs in there, you're fine. All of a sudden, when you're talking about these NUCs and these Raspberry Pis, you're like, oh, everything feels very, very small around here.
00:20:03
Speaker
Okay, so let's talk about what a vanilla deployment for K3s looks like. So you said SQLite as that key-value data store. What else? Does it have a kubelet? Does it have a control plane component? Does it run as a pod? How do you deploy containers or pods on it? Sure.

Exploring K3S Architecture and Security Measures

00:20:20
Speaker
So for a single node, it acts
00:20:23
Speaker
basically just like that, a single node. So your control plane is intermixed with your user workloads. And again, and I feel like I keep saying this, but in these environments, sometimes that's okay. If you're not too particularly worried about security or your uptime on your control plane, those can traditionally be fine. Talking in terms of something like storage, we've seen options where people will deploy in a single node and just simply use local storage.
00:20:52
Speaker
Which, you know, obviously has its limitations. If that machine goes down or if that hard drive falls over, then you're in a little bit of trouble. But it's a trade-off you can make versus sending Chick-fil-A a server rack and having a bunch of Chick-fil-A employees be like, what is this thing? And why is it 50 degrees in this one tiny room in the entire restaurant? Can I borrow my airframe though? Exactly.
00:21:16
Speaker
I still remember early in my career, the first time I walked into a server room: why is it so cold in here? And that was the lesson of, hey, heat is a concern. So yeah, that's totally an option. You have your CNI interfaces, you can plug in your own CSI backend, and you can build it up as necessary.
00:21:39
Speaker
But yeah, by default, it will be single node. You deploy your workloads directly into it and largely just use namespaces to keep the control plane stuff separate from what you want to be touching. Gotcha. And so we spoke about the single node version, right? But in your conversations inside the community or with customers, right?
00:22:00
Speaker
Do users usually do it in a single node fashion, or do they have the two node, the three node deployments to have some sort of resiliency at the edge? Yeah, so the podcast audience can't see me smiling at that, but I was smiling because we are working on a product called Edge Image Builder, which is meant to build a single image with everything you would need to run a cluster at the edge. So your K3s install, your operating system, the configuration on it and so forth.
00:22:28
Speaker
And it was considerably simpler to add in single-node K3s. I'm like, can we just ship this? And they really stared at me. I'm like, okay, fine, we'll add in HA. So yes, there is absolutely a need for adding it. The configuration, the setup is actually really, really slick. It's largely just saying this one's a server, this one's an agent.
00:22:50
Speaker
Again, not to trivialize it or downplay its capabilities or the amount of work that went into making it that simple, but it really is very slick when you look at it from that perspective. And yeah, there's absolutely people using it in HA, even if my dev brain just thinks it's running on my laptop. I don't particularly care; I'm gonna keep it up for 10 seconds and then I'm gonna, you know, rebuild it because I screwed something up.
00:23:12
Speaker
Gotcha. So are there any prereqs that listeners should know about? Do I need four CPUs, or do I need eight gigs of RAM? From a resource perspective, are there minimum requirements that they should keep in mind?
00:23:25
Speaker
There's probably listed somewhere some minimum and recommended. What I can say is it's nowhere near that beefy. There are absolutely Raspberry Pi instructions; the only thing with Raspberry Pi is you have to explicitly enable cgroups because they're not on by default. But being able to run it on a Raspberry Pi
00:23:47
Speaker
kind of indirectly answers that question, because that is, and this sounds obnoxious, a very low bar in a very intentional way. I'm not knocking Raspberry Pis here, but that is a very low bar. And if we can say, yes, you are capable of running on that, it shows you just how small that footprint actually is.
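For reference, the Raspberry Pi tweak Jason alludes to is a boot parameter change; per the K3s docs for Raspberry Pi OS, you append these flags to the existing line in the boot command line file:

```
# /boot/cmdline.txt -- append to the end of the existing single line
cgroup_memory=1 cgroup_enable=memory
```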
00:24:06
Speaker
Okay, gotcha. So I know we already spoke a bit about how storage is handled, right, with maybe using local disks. But I'm sure, like you said, okay, we can package up any CSI based storage provider if we want to. But do you see people
00:24:22
Speaker
storing data at these edge locations for a longer duration of time, or are they just using it as scratch space, if I can't come up with a better word, just keeping it there for a couple of hours and then pushing everything back to the data center or the cloud environment?
00:24:36
Speaker
Yeah, holy cow. What a great question. And the answer, as with everything with edge, is it really depends on the use case and the customer. Again, as a teacher, that's such a cop-out answer: it depends. Can you not ask me that again? I do have a real answer though. So there are absolutely use cases where they do not want to be sending a lot of data across the wire. Either it's
00:25:00
Speaker
going as far as a satellite connection, or it's sending an absolute ton of data. So you've got to figure, we haven't talked about the edge use cases yet, but when you factor in something like IoT and you are measuring readings every second and you're getting all of this data, that across the wire is extraordinarily painful.
00:25:21
Speaker
So having that cached and dealt with at the edge side and then summaries sent across the wire or some sort of calculation sent across the wire reduces that bandwidth between your edge sites and your data center.
00:25:37
Speaker
Again, we were joking a couple minutes ago about we've gotten so used to big resources. We've gotten so used to good network connections. I stream video all the time and I don't think twice about it, but that's not always the case. If we can offload some of that to the edge sites, have them deal with a lot of the data, and then send a summary, that is a very viable option.
00:26:00
Speaker
Another concern is data privacy, where when you start getting into healthcare situations, they don't want to be sending a ton of customer information that violates any number of four letter acronyms out there that say you're not allowed to send data across. So can we do the calculations on the edge side and then send across
00:26:22
Speaker
the net results or some kind of summary, some kind of status. And all of that is outside of any kind of air gap situation, where you're talking about government, defense, things like that, where they straight up don't have the ability to send it back or don't want to for whatever reason. So the answer is yes to all of them. The important takeaway here is
00:26:46
Speaker
Again, that mentality shift of resources suddenly becoming finite, again, in a very weird way at the edge. And it's a callback to 20, 30 years ago that you have to consider that network pipe. And yeah, maybe it doesn't make sense to shove everything across the wire. Maybe the edge does hold on to it, with its own security policies, obviously, but then it's able to do its calculations and send across a smaller report or an
00:27:15
Speaker
interval-based report or an emergency report of, yeah, not going to say anything unless something is wrong, in which case, then it's going to start yelling back to the data center.
00:27:24
Speaker
Gotcha. Interesting. You've brought up data privacy, which I think it's a perfect segue for the next question that's focused around security. How do I make sure that I don't expose those API server endpoints from all of these edge locations? That's just being one of those concerns. How do I secure my K3S clusters? Because ideally, I might have tens or hundreds or if not thousands of these different edge locations that I have to manage remotely. How do I make sure I'm securing each one of those?
00:27:52
Speaker
Yeah, and I love the fact that you went to that scale. I think that's the scale we need to be thinking at with Edge: the tens of thousands. You know, K3s has that baked into it. It is hardened. We're working toward FIPS compliance. It is secured just by default. And then if you look at the entire suite or the entire solution, you put that on an operating system that has also been tailor-made to be secure in these situations.
00:28:19
Speaker
Something like SLE Micro, which has an immutable file system and is built to be lightweight and also secure and to try to prevent some of these situations. You start to realize the entire solution from the ground up really has to be architected with that in mind, particularly as laws get more and more strict about this stuff. And let me be very clear, rightfully so strict. That's certainly not a knock on them.
00:28:45
Speaker
But as that starts to happen, you realize, hey, it's a little more than just can we send this over SSL and be done with it. So on top of that, you have other abilities, something like NeuVector, which comes in and is a security platform doing runtime enforcement, doing your container image scanning and so on and so forth. You have your network policies that you can
00:29:13
Speaker
implement just like you would on a normal Kubernetes cluster. So at that rate, you're looking at, and this goes all the way back to the very beginning, right? Like using a Kubernetes based distro gives us that option to use this wide ecosystem of products. Or you look at a solution like we're pitching with our edge product of, hey, here are all these products working together. And
00:29:37
Speaker
answering these types of particular questions, because you're absolutely right for asking them. These are very, very valid questions. We keep joking about the resource differences, but it's also about where the data lives. Suddenly, that's a bigger deal. Chick-fil-A is a great example. Something tells me that's not quite as secure as your traditional data center. I'm going to say that's a pretty safe guess.
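As one concrete example of those network policies (a generic Kubernetes manifest sketch, not something prescribed in the episode; the namespace name is hypothetical), a default-deny ingress policy is a common baseline to layer allow rules on top of:

```yaml
# Deny all ingress to every pod in the (hypothetical) "storefront"
# namespace until more specific allow policies are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: storefront
spec:
  podSelector: {}      # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress          # no ingress rules listed, so nothing is allowed in
```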
00:30:03
Speaker
Yeah, you can't have bodyguards or restrict people from physically accessing your Chick-fil-A location. That's right. Now when you've got to get the employee there to go bounce the server or something like that, it doesn't really work that way. Yeah. Okay. So when you talk to a lot of people, right, these are definitely great things that people should keep in mind, but are there security best practices that you tell them, like, okay, these are the common issues that I have seen,
00:30:28
Speaker
please fix this and then think about everything else? Like, are there any low-hanging fruits you want to share? Low-hanging fruits, let's say, don't make your root password "password". Yeah, just write that down if you want. We'll send out the list. No, to be perfectly honest, this is slightly outside of my area of expertise, so I don't want to drive people in the wrong direction. So I'll simply reiterate: the tools are out there. You look at the CNCF landscape in general and
00:30:58
Speaker
there are a number of players in this space all addressing these. So if nothing else, knowing you have to ask those questions is the really important part, because the answers are out there. That part of the solution is there, where multiple people play and we understand it. And I guess I can provide a summary: you want your containers to be scanned themselves. That can go all the way back to your SBOMs and your
00:31:26
Speaker
the other term related to SBOMs, the whole accountability of all your artifacts going into your builds. Your network security, there are platforms and plugins for that. Your operating system is still potentially a vulnerable vector if you were to go and pick some generic open source Linux distribution that is not meant to be hardened by default, or they never intended it that way. So... Like, don't assume that it will be, like, yeah.
00:31:52
Speaker
Yeah, you know, I'm sure there's people out there who are like, oh, well, Linux, it's not, I feel bad saying this, I probably shouldn't, but like, oh, it's not Windows, Linux is clearly secure. Yeah, well, there's another level to that. So realizing the different touch points that you have to be cognizant of, choosing a secure version of Kubernetes that has actually had this hardening on top of it, all of that factors into it. So in terms of low-hanging fruit,
00:32:20
Speaker
the biggest thing to realize is just that there are a lot of areas to consider. And there are solutions out there. And there are companies like SUSE who are putting these solutions together. And I'm sure many, many of the listeners here have seen the CNCF landscape page with, last I checked, what, 30,000 icons on it. It actually crashed my machine last time I opened it.
00:32:44
Speaker
Having someone to guide you through that and say, we have tested this particular combination and we stand behind it, that is absolutely crucial, because no one has the time to go through that landscape image, much less test them out and come up with their own decision.
00:32:59
Speaker
I know. And that's the thing, right? The CNCF is open, full of possibilities, but do you really want to spend the time figuring out how these things interoperate with each other and which is the best one in each layer of the stack? So yeah, I like that you said there's a vendor out there who will support all of these things for you.
00:33:18
Speaker
Yeah, we've done some of the research. We've done the integrations. And, you know, I love using that landscape diagram. It's a pro in so many ways: it shows the living ecosystem, shows the investment. And at the same time, if it scares you, that's a good thing, because it kind of should. It doesn't mean it's going to be easy just because of choice.
00:33:39
Speaker
Oh, yeah, I think the only fair use I see for that CNCF landscape is either to scare people toward a vendor solution or to put it on that huge screen that they do at KubeCon. That's when those logos are actually visible, Jason. That's right. That's the only time you can see one without zooming in like 1000%. And you're right, in my developer advocacy days I absolutely used that as a scary thing. I'm like, you think this is good? It is, but
00:34:06
Speaker
have fun. That's my answer. When everyone's like, you work at an open source company, how do you make money? Take a look at that picture, take a couple minutes, and you let me know if you want to pay or not.
00:34:17
Speaker
Okay, so the next question is more around manageability and day-2 operations, right? It can be an open source solution or it can be a solution from SUSE.

Managing and Scaling K3S Clusters Remotely

00:34:28
Speaker
Like, is there a way to manage all these remote K3S clusters from a single pane of glass where I can apply the same policies, I can monitor everything together? Like, is there a solution out there? Yeah.
00:34:40
Speaker
I'm laughing because this is so much of the bane of my existence, figuring out the single pane of glass. Let me talk about why this is so funny to me, because we mentioned a couple minutes ago thinking at a scale of 10,000 edge nodes. Visualizing that and finding a way to show this information in a
00:34:59
Speaker
consumable, useful way is a very non-trivial problem. I mentioned at the outset that a lot of this is blazing new territory, and scale is absolutely one of them.
00:35:11
Speaker
We are looking at these massive, massive tens-of-thousands-of-machine deployments. A single pane of glass there, I mean, just between the two of us, trying to picture that, much less implement it, is a little bit fuzzy. At SUSE, we are incredibly lucky to have the Rancher project. Rancher is
00:35:30
Speaker
fantastic at this. And it is one of the reasons why our Edge department has seen the success it has, because we have this kind of strong integration and the strong backing of SUSE Manager and Rancher and this ability to manage
00:35:47
Speaker
large numbers of machines at scale. And what a couple of my co-workers on the Edge team are working on is, okay, how do we visualize that? And that is still a very much evolving answer. But the question is extremely valid because, you know, I tell my students this all the time. The first thing I do at the beginning of class is say, you guys have a laptop, maybe a gaming PC, maybe your mom and dad's PC you help them admin. You know, that gets an update, we reboot it,
00:36:16
Speaker
go get coffee and come back, and you're done. That's not an option anymore when you look at this scale. You don't just press a button and walk away and say, I'm sure it's going to be fine. So using something like Rancher, and using the extension mechanism in there to add on these extra features, and
00:36:37
Speaker
using the edge work that we are doing to be able to visualize and kind of federate that entire approach is absolutely crucial. Because, again, sure, open source, you could potentially put that all together on your own, but why do that when you have someone focusing on it, and you can leverage that and then use it to do your 10,000 deployments? Okay. No, I think that's a valid answer, right? But, okay.
00:37:06
Speaker
Sorry, I'm going off script here. I want to make sure that I have this question.
00:37:12
Speaker
How do I, can you share some of the customer stories? How are they actually using this in the real world? We have some examples already on our existing episodes, but can you share a few examples from customers that you work with that are using these technologies? You don't have to name names if you don't want to, but I just want to see or hear more stories about
00:37:36
Speaker
this thing in real life. Yeah, I'm trying to filter out in my head so I don't get in trouble. This is where, me being a tech nerd, I'm very, very scared, because you just said customers, and I'm like, I can't talk about that. I think it comes down to the considerations you want to keep in mind in terms of
00:37:57
Speaker
things like rolling deployments and things in that vein. And I know I'm saying things that you guys have come across before, but at the end of the day, each customer solution has its own kind of unique spin on things, because everyone's got their own unique set of problems.
00:38:14
Speaker
I've alluded to a lot of the different areas of edge, ranging from healthcare to military to cars to IoT. All of those are going to end up doing their own thing.
00:38:31
Speaker
Some are managed from more of a central location, where you do have this Rancher and you're pushing things out. There are plenty of other options. Especially once you get into air gap, that entire answer becomes different, because now you are walking around with a USB key. The important part
00:38:49
Speaker
really comes down to the customer understanding their particular needs. And at that point, that's where I hesitate to say anything more beyond kind of generics, because I don't want to tip my hand, like, oh, so-and-so, XYZ.
00:39:06
Speaker
I don't think we have that wide of a listener base where we have malicious actors listening to this like, oh, let me find out which organizations are using the stupidest solution. But at the same time, I am also not clever enough to hide them correctly. So some government with a red, white, and blue flag is doing that. I'm just kidding. I'm not saying anything about the US government.
00:39:28
Speaker
That is funny, dude. Okay, no, that works, right? I think we all know, even there are examples on CNCF's website and K3S's official site that list some of those customers out there. So there are people using this in the real world for sure. There are, and I would even say, to continue to plug SUSE, we have those customer stories out there as well, written by people with lawyers looking over their shoulders to say that they're saying the right thing. Yeah, safer to read those.
00:39:55
Speaker
Okay, let's do a pivot. I think this discussion has been great, but I wanted to talk about AI and that's one of the new questions that we have added in 2024.

AI's Role in Education

00:40:08
Speaker
ask our guests, how are you thinking about AI? How do you see AI being used in a good way? You can also share a bad way if you want to, but share some of the use cases. That way our listeners aren't just focused on ChatGPT as the definition for AI, and they come across different ideas and different thoughts on how we can explore more of these options.
00:40:31
Speaker
Yeah, and I'm going to answer this from more of the professor hat. So I've been teaching effectively for 15 years. I took a little bit of time off when my kids were born, but only in the past two years has the university had to sit there and say, okay, guys, we have to pay attention to this AI now.
00:40:51
Speaker
So it's been kind of fascinating to me, because I am in my mid-40s and my students are in their early 20s, late teens. So there is enough of a generational gap there where it's almost
00:41:07
Speaker
ingrained with them already. Much like my kids from age six understood really how to use an iPhone and navigate that, as compared to my generation, which didn't struggle much, but you see where I'm going with this: this was foreign technology for a while. I have noticed these younger kids, these college kids, just diving into it. I had one specifically ask me at the start of the semester two weeks ago, can we use AI in this class? I'm like, I don't actually know what that means, but I'm probably going to say no, because I don't trust it.
00:41:36
Speaker
But at the same time, I had a student once, actually he's a coworker now, I asked him, I don't even know if I asked him specifically, but I was thinking out loud in Slack, and he pasted me this command. And I was like, holy, that does exactly what I want it to do. And he's like, yeah, I just asked ChatGPT that.
00:41:52
Speaker
That's kind of wild. It has affected how I craft my class, because I imagine probably a good 95% of the people listening to this have solved Towers of Hanoi in code, in every language imaginable. Probably someone did it in Minecraft. So you suddenly can't ask generic questions. You have to get really tricky with the programming assignments and make them intentionally slightly vague and something no one's ever seen before.
00:42:22
Speaker
But there have been times when I've absolutely let them embrace it when it makes sense. So one of the cooler senior projects I had one of my students do, he used KubeVirt to spin up virtual machines dynamically. The idea was it was a 3D space where you are doing an escape room and you go up to a computer and you turn it on. This is again like a 3D JavaScript space, and the computer itself is backed by a KubeVirt
00:42:52
Speaker
VM, thank you. Running on top of, I believe, K3S actually; it was definitely the SUSE suite. And you interacted with that and solved puzzles in there and effectively, air quotes, hacked it to figure out the solution to escape the room. Now, as part of that, some of the assets he used in there, he just straight up used AI to generate.
00:43:11
Speaker
And I would never have known, for two reasons. One, he told me; he was very clear about it. But number two, the only weird tell was that the word basket was in the title for one of them and it had two Ks. But it was in this image that was generated, and the image was so nice. It was an entire Node-based site, very simplistic, but I had no idea if he hadn't told me.
00:43:35
Speaker
So I guess this is a slightly bad take on it, in the sense that this is becoming more and more difficult from the university perspective. Like, how do I get my students to actually write their papers versus getting AI to do it? And
00:43:52
Speaker
it's not quite as simple as you would think, because I've had some very poor writers, and AI sounds a bit better. You're almost like, you don't want to use AI next time, please? I don't want to grade this yet. No, I agree. I didn't think about this before. But when you brought up the other side, like as a professor, how do you enforce that students are not using AI? I don't think there are any tools, even from OpenAI, that flag, oh, this content was generated from AI, or anything.
00:44:20
Speaker
Like, I know when I went to school, there were some libraries that professors used to make sure that a paper was not plagiarized and not just picked up from the web. I don't think that sort of thing exists for AI-generated content yet. So it is difficult. No, that is exactly right. So we have the system that handles basically student interactions with grading. It will automatically check papers for exactly what you mentioned, doing whatever it does in the back end. None of it is equipped for AI yet.
00:44:49
Speaker
Taking this all back to edge, looking at the possibilities, and this goes a little bit back to my idea of, we have all this data sitting on the edge and we don't necessarily want to send it back. If you're in that kind of limited communication spectrum of edge, think, not even necessarily air gap, but think military, where you don't have an Ethernet plug to plug in and you're very bandwidth-constrained because of satellite, or because you're on a submarine or something.
00:45:17
Speaker
Being able to do much more of that processing on the edge side and having AI assist with that is completely fascinating, because now we have our ability to administer these systems remotely and do everything we need to do in terms of upgrades and uptime, but we let them start to handle more and more of it at the edge and dynamically adapt
00:45:42
Speaker
without the need to call back into the data center and say, here's my three terabytes of data, figure out the best way of doing this. Those models can start to learn on the edge systems. And I think that's really going to get fascinating and continue to push us away from the model we've gotten into in the last 20 years, where everything sits in an awesome-looking data center with all these blingy lights. And I say that, but every data center I've ever seen has had wires hanging from the ceiling and an actual box fan set up somewhere.
00:46:12
Speaker
I just had to confess last week to my teammates that I was a shitty wiring guy whenever I worked in a data center. They were talking about how pristine their cabling looks, how they have zip ties everywhere, and I was like, nope, you shouldn't see how I work in a data center.
00:46:31
Speaker
Exactly. I tell my students that too. It looks so cool in movies, and then you get there in the real world and there's a sticky note that says, do not press this button, the entire company will end.
00:46:44
Speaker
And I know that because I'm the one looking at the button going, God, I really want to press it and see what happens. Yeah. See what happens. Right. No, I think that was a great response to our question about how to think about AI. That definitely opened up some possibilities, or some paths at least. I want to, as you said, bring it back to the K3S discussion. How can people get started?

Getting Started with K3S

00:47:04
Speaker
How can they contribute to the community? How can they learn more or experiment or get their hands dirty with, with K3S? Any, any recommendations?
00:47:11
Speaker
Yeah, and I love that phrasing, hands dirty, because like I said at the outset, this is one of my favorite things about K3S: if you are brand new to Kubernetes and starting to figure it out, K3S is such a great avenue for that because it needs so few resources. So k3s.io, that is going to be, you know what, let me double-check that and make sure I didn't say that wrong.
00:47:35
Speaker
K3s.io. It's one of those things that I've typed once and autocompleted the rest of it ever since. I am correct, k3s.io. That would have been embarrassing. It's our landing page for it. That is going to have your traditional why-use-K3S material and so forth. But you will notice in the top right of that page there's basically a curl-bash command that is literally all you need to do. Copy and paste that, and it's running on your machine.
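For reference, the quick-start command Jason is describing is, at the time of writing, the one shown on k3s.io (check the site itself for the current form):

```shell
# Fetch and run the K3s install script; this starts a single-node
# server and enables the k3s service on boot.
curl -sfL https://get.k3s.io | sh -

# Sanity check: the embedded kubectl should show the node.
# The kubeconfig is written to /etc/rancher/k3s/k3s.yaml.
sudo k3s kubectl get nodes
```

For the admins who, as he jokes later, don't want to pipe curl into a shell, the install script can also be downloaded and inspected first.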
00:48:00
Speaker
From there, you can find links to the community. It is an open source project that has actually been donated to the CNCF, so you have all of those resources available, as you would with every other Kubernetes project. This isn't a SUSE in-house type of thing, like, no, stay away. It is very much open. We are very much an open source company.
00:48:22
Speaker
And it is so lightweight. The hardware it runs on, I should also be clear, is x86 and ARM64. So you've got your nice shiny new Mac M1 that doesn't run 40% of the things you're used to because they're not x86? K3S will in fact run, though. And if it sounds like I have some PTSD from that, it's completely true, because it's been
00:48:45
Speaker
an interesting couple of years working on them. But all of that is supported by K3S. So K3S.io is a great starting point. From there, you will find everything from GitHub to community links and plenty of people to interact with. And then in terms of getting your hands dirty, I genuinely believe that is the best way. I've had students use it for their senior projects and
00:49:11
Speaker
Wow, this is very difficult to say without it sounding obnoxious, but they are not the most technically savvy at that age. So being able to say, yes, they were able to install K3S, is a lot bigger deal than it probably sounds like. No, dude, trust me. When I was at school doing my master's, somebody just gave me an ESXi ISO and they're like,
00:49:32
Speaker
yeah, go and install vSphere and get a VMware environment up and running. I had no clue. I was like, what do I do? You gave me a USB and pointed me to a rack full of servers. What's next? Agreed. This is a learning experience, and a curl command is definitely way easier than installing ESXi. I will definitely take that. Yeah. Once you get over that, it's funny, because for the younger,
00:49:56
Speaker
less experienced engineers, that's absolutely easiest. And then you have your occasional admin who's like, are you really expecting me to run a curl bash on my machine? You're like, that's a very fair assessment; the documentation is going to give you some more information.
00:50:09
Speaker
Awesome. This has been an awesome conversation. I'll make sure I'll include all of those links, some of the sessions that I found useful while doing some research. I'll make sure that I link to your LinkedIn page. If you have any other ways people can reach out to you, let me know, and we'll put that in the show notes.
00:50:26
Speaker
Yeah, LinkedIn is best for now. Happy to engage anyone who wants to talk about it or just generally talk nerdy stuff. This was a blast, thank you for having me. Like I said, it's been a long time since I've got a chance to nerd out and just laugh about this kind of stuff and this was really cool. Yeah, thanks, thanks so much.
00:50:42
Speaker
That was a great episode. I love how we focused not just on K3S, but also on how his students at Villanova University are using artificial intelligence, how the professors are just trying to keep up, and then advising students on when and when not to use GenAI technologies. But going back to key takeaways specifically focused on K3S, I just want to reiterate the point that
00:51:06
Speaker
Jason brought up: the security hardening work that has already been done, the workflows that have already been built, the CI/CD automation, the ability to interact with the Kubernetes API, the familiarity that all of us have with Kubernetes inside our data center or cloud environments, all of that can now be extended to the edge, just because we have a very small footprint, like a 100-meg footprint solution like K3S, available in the ecosystem. So K3S can be used
00:51:34
Speaker
not just as part of our CI/CD pipelines as that lightweight Kubernetes-compatible distribution, but also at these edge locations. And one thing that caught my eye, that I really liked, was how Jason wants us to think about high availability in a different way when it comes to these environments, right? Okay.
00:51:51
Speaker
If you are an admin that is already responsible for architecting these edge solutions, this might not be new to you, but it was definitely new to me that HA is not really a strict thing that people have to follow. The definition of high availability and resiliency at the edge can definitely change and can be different from organization to organization, from edge location to edge location. From things like just
00:52:16
Speaker
the ability to run a single-node server, but then making sure that it always sends the summary of all the data that it has accumulated and analyzed back to the data center. That's okay. But there are situations where you can scale up solutions like K3S to a three-node deployment, with maybe etcd, if not a SQLite instance, run your CSI plugins, and have persistent data at those edge locations. That is possible as well. So I think this discussion evolved more into the art of the possible.
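As a rough sketch of that three-node option (the server address and token below are placeholders; consult the K3S high-availability docs for specifics), the embedded-etcd setup looks something like:

```shell
# Node 1: start the first server with embedded etcd instead of SQLite.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Nodes 2 and 3: join as additional servers, using the join token
# found on node 1 at /var/lib/rancher/k3s/server/node-token.
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://<node-1-ip>:6443 \
  --token <node-token>
```

Single-node K3S defaults to SQLite as its datastore; the `--cluster-init` flag switches it to embedded etcd so additional server nodes can join for quorum.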
00:52:45
Speaker
What can we do with K3S, or what else can we do with K3S? So I think I would like to just keep that in mind when thinking about these edge solutions. Again, thank you for giving us feedback so we can always improve our content. Let me know if there are any other episode suggestions that you might have. You can reach out to me directly through LinkedIn, Slack, Twitter, any of those social media channels. If not, maybe in person at some conference, right?
00:53:16
Speaker
But I think that brings us to the end of today's episode. Before we sign off, I want to reiterate: please give us five-star ratings on the podcast app that you use to listen to our podcast, or hit subscribe, like, and share the episode on YouTube if you watch us there. This brings us to the end of another episode. I'm Bhavin, and thank you for joining another episode of Kubernetes Bites. Thank you for listening to the Kubernetes Bites podcast.