
Kubernetes at the Edge

S3 E19 · Kubernetes Bytes

In this episode of the Kubernetes Bytes podcast, Bhavin and Ryan discuss edge computing and how Kubernetes and other platforms and distributions can help build better solutions for the edge.

Join the Kubernetes Bytes slack using: https://bit.ly/k8sbytes

Check out the KubernetesBytes website: https://www.kubernetesbytes.com/

Ads:

  • Ready to shop better hydration, use "kubernetesbytes" to save 20% off anything you order.
  • Try Nom Nom today, go to https://trynom.com/kubernetesbytes and get 50% off your first order plus free shipping.

Timestamps:

  • 00:00 Introduction 
  • 05:30 Cloud Native News 
  • 15:50 Kubernetes at the Edge   

Cloud Native News: 

  1. https://www.cncf.io/announcements/2023/10/11/cloud-native-computing-foundation-announces-cilium-graduation/
  2. https://www.pulumi.com/blog/series-c/
  3. https://www.calcalistech.com/ctechnews/article/ryrfsmtla
  4. https://cloudnativenow.com/features/loft-labs-bolsters-virtual-kubernetes-cluster-management-at-scale/  

Show Notes:  

  1. https://www.cncf.io/blog/2022/08/18/kubernetes-on-the-edge-getting-started-with-kubeedge-and-kubernetes-for-edge-computing/
  2. https://youtu.be/rj4FFf1gZ0g?si=j74AHT_09l5wwIMi
  3. https://thenewstack.io/a-new-kubernetes-edge-architecture/
  4. https://www.packtpub.com/product/edge-computing-systems-with-kubernetes/9781800568594
  5. https://www.redhat.com/en/topics/linux/ARM-vs-x86
  6. https://www.xenonstack.com/blog/edge-data-management
  7. https://www.cncf.io/blog/2022/08/18/kubernetes-on-the-edge-getting-started-with-kubeedge-and-kubernetes-for-edge-computing/
  8. https://docs.k3s.io/architecture
  9. https://microk8s.io/docs/getting-started
  10. https://kubeedge.io/docs/category/architecture
Transcript

Podcast Introduction

00:00:03
Speaker
You are listening to Kubernetes Bytes, a podcast bringing you the latest from the world of cloud native data management.

Cloud Native News & Interviews

00:00:09
Speaker
My name is Ryan Walner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.

Friday the 13th Superstitions

00:00:29
Speaker
Good morning, good afternoon, and good evening wherever you are. And we're coming to you from Boston, Massachusetts. Today is October 13th, 2023. I hope everyone is doing well and staying safe. Let's dive into it. Friday, October the 13th, that is. Hopefully your day is going pretty well. How you doing, Bhavin?
00:00:49
Speaker
I'm doing good, man. Like, yeah, Friday the 13th. Yeah, I'm not as superstitious. So I'm doing good. I'm usually not either. But the way my day has been going so far. You have something to blame for, like, come on, it was because of October the 13th. Like, between my basement half flooding and things just going out.

Weekend Plans & New England

00:01:12
Speaker
Not fun.
00:01:14
Speaker
Hopefully you're doing some fun stuff this weekend. I don't know. Maybe I don't really know rain. Yeah. Agreed. Like even the temperature, even though the temperature is like, eh, not as much fun to be outside. Yeah. This is like, uh, epic sweater weather. Yeah. I'm just debating whether I should make the trip to like white mountains for the fall thing, but then just looking at,
00:01:43
Speaker
things on Instagram like, man, it's so crowded and I don't really, yeah. So I might not end up doing that, but I'm still trying to figure out if I should wake up super early, drive there, reach there at like 7 a.m., look at the colors in the perfect golden light, and then come back before the crowds show up. Yeah. So if I, I've seen pictures of, are you talking about the Kancamagus Highway? Yeah. Yeah. And it's just like bumper to bumper.
00:02:10
Speaker
I don't know. I feel like it would take all the fun out of it. Yeah. So that's why I don't have any concrete plans. I'm just trying to convince myself that it's okay if you don't go. You're still in New England. They won't take you out. Just look out the window. You're kind of in the city, so you got to get out

KubeCon & DevOps Days Boston

00:02:28
Speaker
of here. So come visit me and then you can see some trees. Perfect.
00:02:33
Speaker
But yeah, no, no other plans, right? Just full steam ahead for KubeCon as, you know, both of us will be there representing our employers, but also just trying to catch up with people in the community. So it should be a fun time. But yeah, three weeks, man. Whoo, cutting it close. Yeah, no, real close. And we're going to be at DevOps Days Boston. I know. Tuesday. We'll be doing the show sort of pseudo-live in the afternoon at DevOps Days Boston, which would be
00:02:59
Speaker
a lot of fun. So I don't know if anybody listening is going to be there, but come stop by. We'll have a little bit of swag, stickers and stuff like that for you. And you could come talk to us about what you're up to and why you're there. That'd be fun. I know. It sounds like a really cool event, right? Like I haven't been to any previous DevOps Days Boston. I moved here during COVID, and I think this is the first one. Oh, that's right. Yeah. I've been to them
00:03:22
Speaker
pre-COVID, like a long time ago, I think it's been going on since 2015, at least, because that's, I want to say the timeframe I was there. Okay.
00:03:31
Speaker
Yeah, and they have been posting on their LinkedIn channel about the different speakers and the different talks. I'm excited for those. I added a few people from the speaker list to my LinkedIn network, and those are some awesome people to follow. So I'm excited, dude. Hopefully, we get to have conversations with them. Again, this is mostly ad hoc. We'll see how it goes, but yeah, I'm excited for the event in general. Yeah, exactly. Paul Bruce does an awesome job with the organization,
00:03:56
Speaker
and organizers in general, thanks to all of them for putting it together.

Kubernetes at the Edge Intro

00:04:00
Speaker
And we're excited to be there. Cool. So we have a fun topic. It's just you and I today. We're going to jib a little bit about Kubernetes and specifically about sort of what its role is at the edge. But before we dive into that topic, we've got a bit of news. So I think it's all your news articles. So let's dive. Yeah, we'll be right back after the short break.
00:04:22
Speaker
As long time listeners of the Kubernetes Bytes podcast know, I like to visit different national parks and go on day hikes. As part of these hikes, it's always necessary to hydrate during and after it's done.
00:04:36
Speaker
This is where our next sponsor comes in, LiquidIV. I've been using LiquidIV since last year on all of my national park trips because it's really easy to carry and I don't have to worry about buying and carrying Gatorade bottles with me. A single stick of LiquidIV in 16 ounces of water hydrates two times faster than water and has more electrolytes than ever.

Cloud Native Community Updates

00:05:00
Speaker
The best part is I can choose my own flavor. Personally, I like passion fruit, but they have 12 different options available. If you want to change the way you hydrate when you're outside, you can get 20% off when you go to liquidiv.com and use code KubernetesBytes at checkout. That's 20% off anything you order when you shop better hydration today using promo code KubernetesBytes at liquidiv.com. And we are back.
00:05:29
Speaker
Let's do some cloud native news. So first and foremost, Cilium, the CNI plugin that everybody loves, officially graduates as a CNCF project. So it moves into that graduated status and joins the likes of Flux, Argo, and Kubernetes itself. Again, that just means it has a lot of community support, and the amount of money that Isovalent is putting in behind it says a lot as well.
00:05:52
Speaker
For people who don't know what Cilium is, it's a CNI plugin that provides Layer 3 and Layer 4 connectivity between container workloads. It has since expanded. I think it added network policy support, some sort of meshing of multiple Kubernetes networks together, and allowing for ingress and egress gateways and so on and so forth. Obviously, Cilium,
00:06:17
Speaker
I think it's a default offering if you select it when you deploy like an AKS cluster. But this just shows to me like, OK, this is a stable thing. Yeah, they have some observability stuff in there as well, some security stuff. And I'm reading here that it's the second most active CNCF project in terms of commits and all that stuff. Oh, wow. OK. So I'm assuming the first is Kubernetes itself. I don't know. That's a good question. OK, awesome. And then a couple of funding rounds, right?
00:06:45
Speaker
Just a note: people raising money is always a good sign. We want these Kubernetes startups, or startups in the Kubernetes ecosystem, to be around, to start new ventures that solve challenges, and to continue delivering value to users and customers in the community. Now, none of the startups are sharing what their total valuation is, because some of these might be flat rounds or down rounds. This is just new money that's making sure that
00:07:13
Speaker
people's laptops and the technology continue to be there. So, first vendor on the list: Pulumi. Everybody who's using anything or doing anything in the infrastructure as code space knows who Pulumi is. They raised a Series C of $41 million.
00:07:31
Speaker
And we had Scott Lowe from Pulumi last year at KubeCon on our podcast to talk about everything that they're doing. But an interesting stat that they shared in their announcement or press release was they have 2,000 customers that have chosen their Pulumi Cloud offering for their infrastructure as code. So that's always a good number. In addition to the 150K end users that are using your open source offerings, having more than 2,000 paying customers is definitely a good sign.
00:07:59
Speaker
And with Terraform, the way things have evolved in the community, and with OpenTofu, I think Pulumi has the opportunity to become the go-to infrastructure as code solution in this ecosystem. So congrats to everybody at Pulumi. We're glad that you have some new money to spend. Yeah. VC money is not an easy thing to come by in today's ecosystem, especially in sort of the infrastructure as code space where there's, you know, a lot going on. So yeah, absolutely.
00:08:29
Speaker
And to your VC money comment, right? I think it's really difficult for people to do these Series B, C, D, E, F rounds. I think if you are in that early seed, maybe pre-seed, maybe Series A stage, I think you still have a good chance of raising money because the valuations are more in your favor. For people who have raised money during the pandemic, when the valuations were crazy high, yeah.
00:08:53
Speaker
Yeah, the VCs don't want to invest more money at that high valuation, so you will see compromises being done when these next rounds show up. But talking about seed funding rounds, perfect opportunity to call out a new startup called PerfectScale. Did you do that on purpose? Yeah, I did.
00:09:13
Speaker
I'm so proud of myself at this moment, right? Like, come on, yes. Like, this is what episode 50 something does for you. You know, you build sentences on the go. But yeah, Perfect Scale raised like a $7.1 million seed funding round. They did have some pre-seed funding. So basically their total capital raised is around $10 million. They want to, I love their mission, right? Their mission is to level the playing field for optimizing Kubernetes costs and performance.
00:09:43
Speaker
So basically turning every organization into an organization that has those elite DevOps and R&D teams to manage their Kubernetes clusters for them. So power to PerfectScale. From the website, it looks like they have great dashboards where they can help you monitor and improve your application performance and resilience, plus some Kubecost flavor, so they show you wasted cloud spend, or the amount of money that you could be saving by changing some things.
00:10:13
Speaker
Adding automation to reduce repetitive operations tasks.

Edge Computing Essentials

00:10:17
Speaker
So again, seed round, right? So they're still brand new, their technology stack might not be fully evolved, but it's another startup to watch out for. Yeah, I feel like in the last year, the space around optimization and automation, or even when you combine those things with sort of
00:10:37
Speaker
AI and ML models. Those types of startups I think are probably at the top of the list when it comes to the ones getting funding and or the ones breaking into the market, I'd say. I'd love to break that down a little bit more, but just by
00:10:54
Speaker
looking back at kind of all the conversations we've had, I would put money on it. I think I don't have the stat in front of me, but I read some article from TechCrunch, which was like, if you have anything to do with AI, like true AI, like not just adding the word AI in your description or for what it does, you have like, I think 40% or 60% more chance of raising venture funding right now. So that's obviously a good sign. And as you said, anything to do in that ecosystem. Yeah, you're good. So don't worry about not being able to
00:11:23
Speaker
grow your company or build your company from scratch. Exciting stuff. And then finally, Loft Labs. I don't know if people know Loft Labs, but they have an exciting solution called vCluster that has been around in the open source ecosystem since, I think, 2021, so around for two years. According to their website, it has been super popular, with over 40 million virtual Kubernetes clusters already deployed using the vCluster project. So vCluster allows you to run, have like
00:11:50
Speaker
a multi-tenant large Kubernetes cluster, and then you can spin up these vClusters, or virtual clusters, in individual namespaces of that multi-tenant cluster. And each of these virtual clusters is fully CNCF conformant and has access to the same APIs. So if you wanted to run a multi-tenant stack where each user got their own Kubernetes cluster, or at least they felt like they got their own Kubernetes cluster,
00:12:14
Speaker
vCluster is the solution for you. The new announcement talks about vCluster.pro, which is the commercial edition now for the vCluster software. It adds additional admission security controls, provides

Kubernetes at the Edge Discussion

00:12:27
Speaker
an isolated control plane, and has a new CoreDNS pod, which basically
00:12:33
Speaker
combines the CoreDNS, API server, and syncer components into a single pod, and enables some networking between the virtual cluster and the host cluster, or between different virtual clusters that are running on the same host cluster. So again, vCluster is interesting, and this is just a commercial solution for it. Gotcha. Yeah, I think the, um,
00:12:52
Speaker
The whole concept of abstractions, right? This is a recurring theme, an abstraction on top of Kubernetes. Not surprising here, although I did read through this a little bit and it does say sort of like, today it's most widely used pre-production, meaning reducing the overall physical Kubernetes servers that you're paying for. Again, really tied to efficiency and cost and those kinds of things, but I think the pro thing is more aimed at production anyway.
00:13:18
Speaker
Yeah, and the pre-production thing makes total sense. You don't want all developers to continuously spin up Kubernetes clusters and have to pay cloud costs on your cloud bill or on-prem you need to scale and have that infrastructure because each Kubernetes cluster
00:13:33
Speaker
needs a control plane, a highly available control plane, even maybe three nodes for the testing that you want to do. So yeah, this definitely helps solve that pre-production use case where it accelerates the time with which developers can write code, test it out, and then eventually push things into a proper Kubernetes cluster that's running in production. Yeah, and I feel like this is a topic we've kind of talked about a little bit in terms of the complexity of managing a lot of, like a fleet of Kubernetes clusters, you know, with our conversation with Madhuri, as a model.
00:14:03
Speaker
Then being able to manage them all the same way from a single endpoint. There are solutions out there for this kind of thing, but it's clearly a high-priority problem that needs a solution. Yeah, very interesting. I hadn't heard about it until this. Awesome. That's it for the news section for me, Ryan. That's it for the news section for me too, Bhavin. We'll be right back after this short break.
00:14:25
Speaker
If you've ever had a puppy and raised it to become a big dog, you know that changing food and finding the right food is hard to get right. Ultimately, you want them to feel good and act happy and be okay with what they're eating. They're part of your family, after all. I have an eight-year-old golden retriever named Roscoe, and he's always had a sensitive stomach, so finding the right food is kind of a pain. That's where Nom Nom comes in.
00:14:49
Speaker
Nom Nom's food is full of fresh protein that your dog loves, and the vitamins and nutrients they need to thrive. You can actually see proteins and vegetables like beef, chicken, pork, peas, carrots, kale, and more in the ingredients.
00:15:05
Speaker
So here's how it works. You tell them about your puppy, the age, breed, weight, allergies, protein preferences, chicken, pork, beef, and they'll tailor a specific amount of individually packaged Nom Nom meals and send them straight to you. If you're ready to make the switch to fresh, order Nom Nom today: go to https://trynom.com/kubernetesbytes
00:15:29
Speaker
and get your 50% off of your first order, plus free shipping. Plus, Nom Nom comes with a money-back guarantee. If your dog's tail isn't wagging within 30 days, Nom Nom will refund your first order. No fillers, no nonsense, just Nom Nom.
00:15:47
Speaker
And we're back. Let's go. It's been a little bit of a short week this week. It's Bhavin and I again. We don't have a guest today. We are going to be recording some stuff again next week at DevOps Days Boston, so we're kind of driving into next week. But today we did want to have sort of one of our high-level one-on-one episodes about Kubernetes at the edge. We did a few interviews with
00:16:14
Speaker
some members of the community both from MLB and last episode with Chick-fil-A that really focus on these types of solutions. So we wanted to kind of take a step back, even though having those real use cases are super valuable to hear the insights from those individuals.
00:16:34
Speaker
It's, we wanted to kind of step back and say, you know, what problem is being solved? What does it mean? What are the projects out there that you could do this kind of thing? So I don't know. Where do we start? Let's start with, let's start with just defining a little bit about what is edge computing. Let's go. Yeah, edge computing, right? Like, I love that we have started using these terms. And again, edge computing is not like a brand new term that even Kubernetes came up with.
00:17:00
Speaker
It has been around, but I think for me, the crux of the solution or crux of this term is it's just enabling you to run or process data closer to where it's generated rather than shipping everything back to

ARM Processors & Kubernetes

00:17:15
Speaker
like a central data center location or the central cloud location, which might result in additional latency. So this is like going to the source.
00:17:22
Speaker
I know I'm going to butcher this up, but it's like taking the horse to the well to drink water instead of bringing the well to the horse. I hope I said it right, but it's about managing and manipulating your data and processing your data closer to where it's generated. And as Ryan already said, those examples like the Major League Baseball stadiums and the Chick-fil-A locations, or any other retail locations that you might have.
00:17:48
Speaker
And one more thing before I hand it over to you, Ryan, for your perspective. This is also because even today, even with the amount of bandwidth that we have available, I know I have a gig internet at home, so obviously commercial. You do? I do, dude.
00:18:06
Speaker
Even with all of these things, the cheapest way to transfer data is actually through UPS or FedEx. That's another reason why you see those big-ass 18-wheeler trucks from AWS helping you move your data from your on-prem data centers to their data centers. This is like, okay, why do I need to push everything? I can just process my data, analyze it, and maybe send a summary back to my core data center. This is us talking about everything
00:18:36
Speaker
that edge is, or what falls under that bucket of processing data locally. Yeah, absolutely. I mean, for me, I think edge is really defined by sort of the way we as users or humans interact with and expect systems to work, right? In terms of, when we're on our phones, we expect whatever we're interacting with to be sort of speedy. We want fast response times for whatever application we're working with.
00:19:04
Speaker
But from the organizational side, we know, like you said, moving lots and lots of data that might be collected at the edge, whether that's in a manufacturing facility, at a farm, or in a hospital. Those various use cases, I'm sure we'll talk about them a little bit more.
00:19:24
Speaker
There isn't the time needed to wait on sending everything back to a core data center and coming back. So really, I think edge is really fueled by performance and cost, in my eyes, in terms of looking at the amount of data and the user experience, or I shouldn't say solely the user experience, really the experience of whatever that use case may be. It could be another system interaction.
00:19:53
Speaker
No, and like, I like that you bring up performance. Because thinking about performance, right, as you said, you don't want to wait for

Comparing Kubernetes Projects

00:20:01
Speaker
the response time of sending all of that data, getting it processed at your data center, and then shipping it back. Like, imagine self-driving cars, right? They have to process data and video in
00:20:13
Speaker
real time, in some milliseconds, maybe even a fraction of a millisecond, if they're waiting to analyze like a red light or a pedestrian. If they're sending all of those video files, those are huge files, right? Because I'm sure it's in a good resolution. I'm sure it's not 4K, but it's not like
00:20:31
Speaker
some old format, which I can't remember right now, but they have to do that like near real time analysis, right? So having edge solutions definitely help with that. Yeah, I mean, self driving cars is definitely like
00:20:45
Speaker
a sort of a cornerstone use case for Edge. I mean, a cell phone technically is an Edge device. A car is obviously a big one. There's so many out there we can do. But I do want to mention the couple of episodes that we did talk about. In MLB's case, those are the stats being shown to you on screen when a home run is hit or
00:21:10
Speaker
how fast a fastball is going or those kind of things happen in real time as you're watching them. I think the expectation for you to find out how fast a pitch was the next inning, which is like at least 20 minutes later, wouldn't be a great experience. So again, the demand for
00:21:34
Speaker
processing the data closest to where it's generated, right in the stadium we're talking about, I think, was the example. And these aren't necessarily what everybody thinks about when they think edge. There's a lot of different sort of
00:21:49
Speaker
perspectives on edge, right? And I think there's this good paper that I was reading on the Packt hub, Packt Pub, I should say. But they kind of talked about the different breakdowns, right? There's sort of the cloud layer, which is like the core that we kind of think about every time we talk about cloud: AWS, GCP, Azure.
00:22:09
Speaker
Then there's sort of the near edge, which is like the cell towers, LTE networks, those kinds of things. Then you have sort of the far edge where you'll find a lot of like the infrastructure based, you know, even Kubernetes clusters we'll talk about today. And then I think a lot of people when they think about edge, they're thinking this paper defines it as tiny edge, which I kind of like that term because
00:22:30
Speaker
it really puts in perspective that things are tiny, right? Like physically at the edge, these are sensors on farm equipment, or an EKG, or whatever it may be, that are collecting little tiny bits of data on little small computing devices. I think that's the most common.
00:22:50
Speaker
Maybe I'm wrong, but that's the most common: people think about the IoT devices when they think edge, they associate those things. But there are, you know, so many levels of what someone might mean when they're talking about edge, right? And there are even terms that have been used before this. I mean, with our conversation with Chick-fil-A, we talked about fog computing, yeah, right? I
00:23:13
Speaker
Um, which I wasn't necessarily super familiar with, but it's more like, think about a cloud that kind of encapsulates a lot of those different edge layers. Um, so yeah, for the purposes of those use cases, I think they vary, the MLB one and the Chick-fil-A one, because Chick-fil-A, I guess you'd classify as retail. Right. Yeah, I guess that would be, and really that's about sort of the experience of the customer.
00:23:42
Speaker
Mostly, right. And I think we talked about how the stuff that the business cares about is what they're sending back to the core data center anyway, right? Some of it's just processed and used right away at the retail location. Same thing with MLB, right? The stuff that's surfaced right away.

Data Management at the Edge

00:24:02
Speaker
And then it's kind of tracked back. So we'll talk about the different types of data and data management later. I agree, right? Like the different protocols comment that you made earlier, solutions or protocols like MQTT being relevant to those tiny devices or tiny edge solutions. And this was something that Brian brought up last week as well.
00:24:23
Speaker
There are air fryers or just regular fryers at Chick-fil-A locations that will send data to the edge location, or their edge deployment, in MQTT format. But I'm clearly not an MQTT expert, so for this discussion, for me at least, it's mostly like, okay, how does Kubernetes
00:24:43
Speaker
and its variants, like the different solutions or projects that we'll talk about today, fit into those use cases. So I'm more worried about the servers or the NUCs that we have at these edge locations, and how we orchestrate them and run our applications at that location, than I personally am about how we manipulate those tiny devices, because they are definitely important. In all these edge use cases, they are the ones that are collecting and generating a lot of data. I remember
00:25:12
Speaker
When you used to work at Portworx, you did a cool demo around a temperature sensor inside your office space, how it fluctuates, and sending all of that. That's where the data is being generated. The solution is like, okay, let's figure out how we process that data, or how we make sense of that data, closer to that edge rather than sending all of that information back.
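A minimal Python sketch of the pattern being described here: process readings locally at the edge and ship only a small summary upstream. It assumes a local Mosquitto-style MQTT broker on the edge node and the paho-mqtt and requests libraries; the topic name and the upstream URL are hypothetical, purely for illustration.

import statistics

import paho.mqtt.client as mqtt
import requests

readings = []

def on_message(client, userdata, msg):
    # Each MQTT message carries one sensor reading as a plain number.
    readings.append(float(msg.payload))
    if len(readings) >= 60:  # roll up every 60 samples
        summary = {
            "sensor": msg.topic,
            "mean": statistics.mean(readings),
            "max": max(readings),
        }
        # Only this small summary leaves the edge location, not the raw stream.
        requests.post("https://core.example.com/ingest", json=summary, timeout=5)
        readings.clear()

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)            # broker running locally at the edge
client.subscribe("store/fryer/temperature")  # hypothetical topic
client.loop_forever()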
00:25:36
Speaker
Yeah, that was a fun little use case, right? It was like, the whole goal was to predict the temperature of my office, right? Eventually to run predictions on it, which was really not so useful because, you know, with a smart thermostat, you can pretty much guess

Edge Computing Use Cases

00:25:53
Speaker
the temperature based on that. But it was interesting because you could see different times of the day or year, that kind of thing. And that's a good use case because, you know, for this episode, we're going to talk about
00:26:06
Speaker
the Kubernetes components and where and how they run. We're not necessarily talking about every single edge use case. And we talked about tiny devices. It's not always a tiny device. It could be a full rack nearer to whatever application it's serving. It doesn't have to be those kind of things. So we're going to talk a little bit about
00:26:26
Speaker
sort of what are the projects out there and in the Kubernetes space specifically, you know, the edge is such a giant topic, but we'll focus in on them. Before we do that, I do want to talk about some of the challenges.
00:26:41
Speaker
Okay, so I think my challenge is, like, I know we have a bunch in this list, but for me, it is around physical constraints. Edge locations are not your typical sites. Okay, tangent. So when I was doing my master's at NC State, right, they had a data center too. And that was the first time I stepped foot in a data center.
00:27:00
Speaker
It was so fancy, dude. It was like they had the hot aisle, cold aisle. Everything was properly ventilated. They had to maintain temperatures. Everything was so neatly wired up. I'm pretty sure it was like an EBC data center. I've personally worked in a proper data center as well, and it's not as neat. It's not as great.
00:27:17
Speaker
But at edge locations, you can't expect to meet all of those same requirements. You won't even have a proper rack. You might just have a Home Depot table or an IKEA table where you are just putting down your server and putting a switch on top of it and making sure that it works. Yeah, I think Brian said there are NUCs in Chick-fil-A, in the back office, or at least at one time. I don't know what it looks like today.
00:27:43
Speaker
Yeah. And that's a valid concern or challenge to solve for. So in my past, I've worked at Lenovo, and Lenovo did come up with a really
00:27:53
Speaker
awesome, small x86-based server. I forget what the model number was, but it was a completely ruggedized system. It was dust resistant, moisture resistant, and still had an Intel CPU in it. So you can actually run proper workloads on it. So I think it's when

Exploring Edge Technology

00:28:11
Speaker
you are thinking about the edge, don't just think about Chick-fil-As; maybe even think about
00:28:16
Speaker
where you might have a deployment in the middle of the ocean. So you don't even know whether you'll have proper electricity coming in. Obviously there will be, but you have to plan for all of those things. So I think that's a big challenge to solve for. Like, how do we handle those physical elements when we are thinking just about technology?
00:28:33
Speaker
Yeah, or you know what the use case that comes up that always makes me giggle a little bit is our friend and colleague Tim. Oh yeah. When he sort of wrote a paper on how he designed this bear-proof box because of the location he lives at on the top of a mountain because he's a mountain man in many regards. But basically to help kind of maintain his network and connectivity for when
00:28:56
Speaker
things are off-grid and those kinds of things. But the whole idea was it was kind of baked into this bear-proof box because it was outside and those kinds of things. So edge can really serve so many use cases. Challenges-wise, the other thing is edge comes with the complexity of the fact that there's just more locations. When you're moving things, parts of the application
00:29:22
Speaker
closer to the data and where it needs to be processed, you therefore have more locations you're putting this in, right? Even in Brian's use case, 3,000 stores, or MLB was hundreds of stadiums. And you're not going to have personnel at those locations all the time.
00:29:41
Speaker
You need to manage these things remotely. You may send field engineers, but that's a whole other aspect of it. So that definitely adds to the complexity for the challenges on Edge as well. No, I agree. It has to be about...
00:30:00
Speaker
unattended installs and unattended management, figuring out how much you can do remotely. I remember working with a grocery chain as a customer at a previous job, where,
00:30:14
Speaker
to run all of their checkout systems, they installed like a half a rack unit, and that was their edge solution. That's what the scale of edge looked like. And it was important because they couldn't send all of that data from each point of sale system and the sales checkout counters back to their core data center, which might be in a completely different region.
00:30:36
Speaker
Because that would just lead to bad customer experience, huge lines at the checkout counters. They eliminated all of that by processing everything locally. They didn't have an IT admin on staff to manage all of that equipment. It was all about how can people from their main office manipulate or manage all of these half rack or server instances.
00:31:00
Speaker
How can they figure out if there is any application failure, if there is any physical drive failure, node failure? How do you plan for these things? I think those are the challenges that you have to solve for when you're thinking about edge locations. Cool. Any other challenges before we go to the next thing? No, I think that's it. Those are enough challenges.
00:31:20
Speaker
There's many more I'm sure we didn't mention, but that's totally fine. Let's switch gears to talk about sort of, can Kubernetes help at the edge? Let's talk about some of our opinions in this matter.
00:31:34
Speaker
Yeah, so I think Kubernetes can definitely help. Like, okay, if you expected a different answer listening to the Kubernetes Bytes podcast, come on, guys. We have knocked Kubernetes before. Yeah, but I'm not going to say Kubernetes is useless. No, Kubernetes can definitely help
00:31:50
Speaker
you with these edge deployments, right? And for me, just as prep for this podcast episode, I was listening to one of the sessions or webinars that one of our friends, Michael Cade, did. It was a panel discussion where they were talking about a similar topic. And the thing that popped out to me was,
00:32:11
Speaker
Yes, Kubernetes is great, but it's not just Kubernetes, right? It is the standardization that it drives at all of these thousands or hundreds of thousands of edge locations that an organization might have. So it's more about the API server and how it allows you to...
00:32:28
Speaker
use declarative state for management, or allows you to have that consistent API set to interact with, than having actual Kubernetes at these edge locations. So it's about the standardization. Imagine having 10,000 retail locations like Target, right? And even if you are able to manage them perfectly, if you have, let's say, 10% of those locations having snowflake deployments, it becomes a real pain in the ass
00:32:54
Speaker
to manage all of that. It becomes an operational nightmare. So yeah, I think standardization is the key for me. That's where Kubernetes can really help you with these applications.
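A rough sketch of that standardization idea, using the official Kubernetes Python client: the same declarative Deployment object pushed to many edge clusters from one central place. The kubeconfig context names and the image are placeholders.

from kubernetes import client, config

EDGE_CONTEXTS = ["store-0001", "store-0002", "store-0003"]  # placeholder kubeconfig contexts

def pos_deployment():
    container = client.V1Container(name="pos-app", image="registry.example.com/pos:1.2.3")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pos-app"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "pos-app"}),
        template=template,
    )
    return client.V1Deployment(metadata=client.V1ObjectMeta(name="pos-app"), spec=spec)

for ctx in EDGE_CONTEXTS:
    # Same API and the same declarative object at every location; that is the
    # standardization argument in a nutshell.
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=pos_deployment())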
00:33:03
Speaker
Yeah, and we've obviously talked about how Kubernetes isn't always just the thing you throw at a problem because it's there. You definitely have to take into account, do those APIs, do the orchestration components, does the sort of distributed system help you really accomplish what you're getting at? And you mentioned standardization, and I always come back to
00:33:27
Speaker
Sort of containerization as just a topic in general, right? Because Kubernetes takes advantage of containerization, right? We're managing many containers
00:33:41
Speaker
and orchestrating many containers. And really the benefits there are the benefits that we're talking about when it comes to smaller run times, the general standardization of how applications are built and distributed, all these things we talked about in early days of sort of Docker and containerization, you definitely get those from managing many edge locations. I think we can't overlook those. And just because we're talking about Kubernetes.
00:34:12
Speaker
No, agreed. The ability to package everything up in that container, in that, I don't know, manageable unit for your application definitely helps. Agreed. Yeah, and I'm curious about some other topics that I didn't look into for this episode, but we talked about Cloud Native Wasm and what its role will be. Again, really focusing in on performance and form factor, smaller applications.
00:34:39
Speaker
I could see it playing a pretty big role in Edge as well. I know we've talked about its use case in Edge a little bit, although it's definitely a newer approach to application development. But I'm curious how that goes across. And maybe we'll sync back up with Nigel again to talk to him about the Edge comparison. Gotcha.
00:35:02
Speaker
And Ryan, I know I spoke about the server form factors, but I know you wanted to talk about x86 versus ARM. How does that play into edge locations? Yeah, so if you're looking into edge deployments or just edge architecture in general, no doubt about it, you'll come across the usefulness of ARM processors compared to x86. And I'm not an ARM expert here, but basically- You have two of those. Come on.
00:35:37
Speaker
But really ARM is sort of a solution that allows for
00:35:43
Speaker
less energy consumption and generally less power to be used, which means that it also can run without fans a lot of the time. So a lot of x86 servers have the built-in fans and those kinds of things to keep it cool. You talked about hot and cold aisles and data centers. You don't always have hot and cold aisles in a back office.
00:36:06
Speaker
in the retail location, so ARM definitely allows you to run things, albeit it has a lot of times less processing power, but it also means it doesn't get as hot, so you don't have to worry about that as much in terms of that brings the form factor down. That being said, some of the biggest supercomputers are built from ARM. It's not that it can't compete with x86, but it is used a lot more.
00:36:33
Speaker
in sort of the edge use cases. And it's unavoidable, I think, when it comes to looking at form factor and how you'll design your compute when thinking about edge devices. Because you'll be able to kind of think about how small those sensors and those kind of things are.
00:36:50
Speaker
I agree. It's not that ARM can't be run in data centers. It's the ability for it to go down in form factor or support that no fan use case. I think that's where it shines specifically for these Edge deployments. The big difference that I got out of looking at x86 versus ARM
00:37:11
Speaker
x86 has all these graphics cards and memory and storage and the CPU as independent modules, basically. Whereas with ARM, you don't have all of those as separate components; the CPU, GPU, and memory controllers sit on the same physical substrate. They often talk about these processors as a system on a chip, or SoC. Again, that brings the form factor down. It also means they're designed by more manufacturers and they're designed more
00:37:40
Speaker
purpose-built for the use case and solution, right? You talked about self-driving cars and things like that. Those are all going to be very purpose-built systems. And I don't know the breakdown of how many of them use ARM or not, but it just kind of leads to the point that ARM is going to be here to stay when it comes to edge. Yeah, makes sense.
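A small sketch tying this back to Kubernetes, using the official Python client: list which CPU architecture each node reports via the standard kubernetes.io/arch label, which matters when mixing ARM and x86 boxes at edge sites and picking multi-arch images. It runs against whatever cluster your kubeconfig points at.

from kubernetes import client, config

config.load_kube_config()  # current kubeconfig context
for node in client.CoreV1Api().list_node().items:
    labels = node.metadata.labels or {}
    arch = labels.get("kubernetes.io/arch", "unknown")
    print(f"{node.metadata.name}: {arch}")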
00:38:01
Speaker
Cool. So let's turn to some of the, let's just list a couple of the options in the Kubernetes ecosystem when it comes to sort of architectures or I guess I'll call them more platforms or distributions. So we have K3S.
00:38:22
Speaker
which is, well, we'll get into the details. But K3S is out there, which is kind of a smaller form factor of Kubernetes. We'll go into the details about that. MicroK8s, again, also sort of a Kubernetes distribution that's a smaller form factor built for these types of use cases. And we'll also talk about KubeEdge, which is sort of a bigger concept of how you manage both sort of the core and the edge components.
00:38:52
Speaker
And then some other ones in the CNCF landscape: SuperEdge, Akri, I might not even be saying that right, A-K-R-I, and OpenYurt, I think, are the ones I have on here that we'll kind of dig into a little bit. So let's start with K3S. Let's dive in a little bit.
00:39:08
Speaker
Yeah, so K3S, right? If you do a quick Google search for Kubernetes at the edge or Kubernetes small form factor, the first result, I'm sure, that shows up is K3S. So with K3S, there is a fun story, which I learned this week as I was doing research: it was Darren Shepherd who created
00:39:24
Speaker
a specific distribution at Rancher. And the whole reason for that was he was writing code, and every time he wanted to test something, he had to deploy a full-fledged Kubernetes cluster, which took time and a lot of resources. And he's like, nope, there has to be an easier way. And I'm just like, wow, when I have a problem, I just wait it out; I don't go and create a distribution. So Rancher made
00:39:49
Speaker
this K3S distribution by removing over 3 million lines of code from the Kubernetes source code. He trimmed out all the non-CSI storage providers, any alpha features, any legacy components that weren't necessary to implement the Kubernetes API functionality fully. Everything was gone. Then he created his small-footprint offering, or distribution, that he could use for his testing. And it has just taken off from that point.
00:40:18
Speaker
Yeah, I think the binary is less than 70 megabytes, which is pretty tiny when you compare it to a lot of other things. I think the website also says less than 100, but I think it depends on the version and things like that. But again, K3S is also supported on ARM64 and ARMv7.
00:40:39
Speaker
So again, sort of a small distro, but also optimized for those specific architectures. Doesn't mean you can't run it on a big old x86 server in cloud. You can absolutely do that, which is kind of maybe a pro for the flexibility of K3S. Definitely.
00:40:57
Speaker
And even though it removed all of those things that I just listed before, it is completely bare bones, but it still has all the necessary components that you would need to run a Kubernetes cluster. Right. So you have two main components, right? It's the server and the agent in K3S.
00:41:13
Speaker
Yeah, and it still has containerd support, an ingress controller, and it has a CNI that you can use. So it has those things available inside the 70 megs of binary that you can use to deploy. The only thing is it doesn't rely on etcd for that backing data store by default; it can use SQLite. And I know recently
00:41:34
Speaker
Rancher also introduced Dqlite, or Raft, not KRaft, I don't want to confuse the Kafka folks, so Raft, to be used as the backing data store. Obviously, you can take this and customize it; if you want to install a specific plugin for a specific storage solution that you're using at the edge, you can definitely do that. But yeah, bare bones, I think that's the keyword for me.
00:41:57
Speaker
Yeah. And I think, you know, some of the pros I had on my list for K3S were that it can run in a single-master sort of deployment, right? For those really resource-constrained cases, you have just a single master and an agent
00:42:11
Speaker
at a minimum, and it uses the built-in data store instead of etcd for that. But it does have the capability to run HA, meaning it can run three masters. And then I think it also plugs into other data stores, I think it was MySQL and some other options. But it gives you the ability to recover from certain failures of the master nodes and those kinds of things. So it is flexible in the sense that it kind of can run how you want it to, depending on the use case you have
00:42:41
Speaker
for it. No, great. That's it for me for K3S; that's the information that I have. We can definitely do a full-on episode and talk about all the different architectural components, but yeah, this is that one-inch-deep, mile-wide view. Yeah, absolutely. The only thing I'll add there is, you know, whenever you have a technology that's backed by a bigger company like SUSE Rancher, I think that gives you a little bit of support, right, for using
00:43:08
Speaker
the technology. So that, combined with sort of the range of operating systems it can run on, I think has a lot going for it. It's probably the most popular, or one of the most popular, out there. I think the only thing I had on my cons list, really, that came to mind is that it's just the distro, so it's not really going to solve
00:43:28
Speaker
other edge problems like how do you sync data back to the core, how do you manage multiple K3S clusters, those kind of things. It is just kind of focused not on the full platform, but just the distro itself.
00:43:42
Speaker
Cool. We'll put a link in to sort of more information about K3S and some of the architecture and stuff like that. Well, let's move on to MicroK8s. Yeah. MicroK8s. I know, as you said, right? Like it's important that there is somebody backing the project, even though these are part of the ecosystem. MicroK8s is
00:44:00
Speaker
backed by Canonical, so you have a huge brand name behind it. I think the thing that differentiates this from K3S, for me, is the ability to run across Linux, Windows, macOS systems. The default way to run this backed by Canonical is on Ubuntu, but you can still run it on your MacBook Pro, you can still run it on your Windows machine or any other Linux distribution, and you can get access to Kubernetes compatible API set that you can work with
00:44:30
Speaker
I think it mostly runs as a virtualized Ubuntu flavor though. So it's not like it's natively running on windows or anything like that as far as I remember. So it does definitely lean towards the Ubuntu flavor.
00:44:45
Speaker
And the Ubuntu experience, meaning installation's easiest with snapd, sort of their package manager, and those kinds of things. One of the things I really like about MicroK8s is it starts as that really small form factor. I think they talk about it needing, at minimum, 500-something megs of memory to run by default, which is pretty small.
00:45:06
Speaker
It's not like the smallest thing ever, but it's pretty small. But they have this idea of add-ons, where if you need a DNS component or a CNI component or a storage component, you can just add on that piece depending on how you're using it. So it can definitely grow bigger than just that embedded IoT use case as well. And I think just a couple of things to add.
00:45:34
Speaker
With K3S, if you are running it on a three-node cluster, you have to say, okay, please deploy it in a highly available fashion. For MicroK8s, one thing that they have changed is if you add three nodes, it automatically knows that it has to do high availability, or it has to figure out high availability. So it's by default versus manually enabling it. And then because it's using the Snap ecosystem through Canonical, right?
00:45:57
Speaker
It has automatic updates, so the package itself gets upgraded if you set the right policy, so you don't have to worry about figuring out the right packages and maybe planning for upgrading your systems at all of these edge locations. If you set it correctly, it will automatically update the binaries. Yup. Again, I think MicroK8s works with both ARM and Intel
00:46:20
Speaker
as well. So definitely kind of tailored towards the edge use case. One of the things I did like that I pointed out in my notes here is that they provide a CIS hardening add-on. So if you're very security conscious at the edge, which I imagine most are, you can kind of enable this CIS hardening add-on, which basically will
00:46:42
Speaker
kind of probe the environment and use those CIS standards to give you that capability of having that assurance that you're kind of meeting those requirements as well.
00:46:56
Speaker
Okay. Let's move on to the next one. Sure. All right. So the next one we have on our list is KubeEdge. KubeEdge. So KubeEdge, again, I'm just talking about, like, okay, all of these are smaller solutions, right? So the thing that I really liked when looking into KubeEdge is the high performance. They did some comparisons, some IO performance and some application performance, and out of the three solutions that we have discussed so far, KubeEdge performs the best, at least for the blog that they did, right?
00:47:26
Speaker
You would have trusted that one. With performance testing, Ryan, I know you and I both have experience with this: okay, you have to set the right expectations, set the right parameters, and you can typically find a scenario where your solution stands out on top. But again, they do say low resource footprint requirements, but with higher performance, to allow you to run a Kubernetes solution at your edge locations.
00:47:53
Speaker
Yeah, and just because we were talking about MicroK8s using 500-something megabytes of memory to run, by comparison, the edge sort of runtime for KubeEdge uses 70 megabytes, or requires 70 megabytes to run, which is a fraction, really, when you're thinking about this. And I think the big difference that I see in KubeEdge is that
00:48:20
Speaker
it's not just the management, sorry, the control plane and the runtime. It's a whole slew of components. If you dive into KubeEdge, right, there are cloud components, which they talk about: CloudHub, EdgeController, DeviceController, which
00:48:36
Speaker
run more in your cloud core, so the core data center that's kind of connecting to your edge endpoints. And then you have a whole bunch of edge components, which are EdgeHub and EdgeD, which is really the agent I was talking about, using that much memory to run those applications on those devices.
00:48:53
Speaker
MetaManager, EventBus. We talked about MQTT before; EventBus allows clients to connect through MQTT, which is more used in the pub/sub IoT world, so really allowing you to have the flexibility of using existing devices that might use those protocols. ServiceBus, DeviceTwin, being able to visualize all the devices in your edge network that it may know about.
00:49:22
Speaker
So DeviceTwin is just kind of a representation of what's on your edge network, even if the devices aren't connected all the time. And Mappers, which kind of allow you to tie in other common IoT protocols like Modbus, OPC UA, and Bluetooth, which we all know. So, lots of components. If you're just kind of coming from the, hey, I'm looking into K3S for edge, and then you come to KubeEdge, it's a little overwhelming.
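A hedged sketch of what reading that DeviceTwin view could look like from the cloud side, using the generic CustomObjectsApi from the Kubernetes Python client. It assumes KubeEdge's Device CRDs are installed under the devices.kubeedge.io group; the API version and namespace below are assumptions and vary by KubeEdge release and setup.

from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the cloud core cluster
custom = client.CustomObjectsApi()

devices = custom.list_namespaced_custom_object(
    group="devices.kubeedge.io",
    version="v1beta1",     # assumption; check the version your release installs
    namespace="default",
    plural="devices",
)

for dev in devices.get("items", []):
    name = dev["metadata"]["name"]
    twins = dev.get("status", {}).get("twins", [])  # reported device twin state
    print(name, twins)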
00:49:47
Speaker
Yeah, it's like, wow, just the architecture diagram that we'll link, like, okay, so many boxes.
00:49:54
Speaker
in a good way, right? It makes it really flexible. It also ties in that major component of, oh, how do you sync certain information from your edge to your core? Which the previous two we talked about don't really tackle that, right? Whereas Edge Hub and Control and Cloud Hub are able to talk to each other to kind of get a viewpoint and sync certain information from your edge devices all the way up to your
00:50:18
Speaker
cloud core. So there's a whole lot going on there. I think it's probably a great place to start for,
00:50:25
Speaker
um, you know, the full solution, when it comes to thinking about what an edge architecture looks like and having the necessary projects and those kinds of things to get started. That being said, it is fairly new. How old is this project? I forget. I was looking at their documentation. The only reason I'm saying this is because even some of the documentation is like, coming soon, right? Yeah. Again, it's an incubating project, right? So it's not like
00:50:50
Speaker
close to graduated, but it's not even like a sandbox project from CNCF. So it's still in the incubating phase, early phases. So if you want to change directions, you can definitely do that right now. Yeah, exactly. So lots of really good stuff to integrate in the Kube Edge project, I think. That's definitely one of the benefits I had on here is that it's more than just a distribution. It handles all those things and supports the various protocols and those kind of things.
00:51:18
Speaker
That being said, I've never used it myself, but we'll put some links to the documentation and things like that in there as well. Okay. No, I think we spoke a lot, right? These are the three main projects. I know there are solutions from Red Hat, like how they do three-node clusters, including control plane and worker nodes. They do something based on MicroShift, which is a project I think we should check out, because they just spoke about it
00:51:44
Speaker
in the product update webinar yesterday. So that's definitely coming to GA status soon. And the ability to run remote worker nodes. So your control plane can still run at your core data center. You just have remote worker nodes. So Red Hat has some interesting solutions. If you are in that OpenShift ecosystem, you can definitely check those out.
00:52:03
Speaker
Cool. So, you know, we mentioned some other ones. We won't dive into all of them in this episode. Agreed. OpenYurt, SuperEdge. We'll put links to those in there, all the other sort of CNCF projects
00:52:15
Speaker
that focus on sort of enabling sort of edge use cases and things like that. But let's go into a little bit about the sort of, let's talk about data because that's always one that I think comes up a lot is when people talk about moving their applications to the edge, they often say, well, their data is going to the edge. That's not true, right?
00:52:36
Speaker
I mean, even in the most basic forms, we're talking about, yes, there's a lot of data at the edge. But again, the point is to process that data as quickly as possible and then reduce that to the value. And that value is the thing that's fed back into other systems and or the thing that might be synced back up or analytics or statistics or those kind of things.
00:53:03
Speaker
Yes, you do have the opportunity to run data stores, right? We talked, like I said, with Brian at Chick-fil-A. They had mentioned they were using Postgres at the edge. And MongoDB. Yeah. And MongoDB. And the point there was that, yes, it's a database. Yes, you can use it just as any other Postgres.
00:53:24
Speaker
The expectation around persistence isn't really beyond minutes, I think, because you can use it that way, but the goal isn't to have long-term storage of the data there. Yeah, I agree. You don't have to store everything for eternity at those edge locations, but you need
00:53:47
Speaker
to connect everything, analyze it, ship the relevant information back to your central cloud locations, and then keep doing this in a loop, I guess. I agree. Persistence is a thing that people need to solve for, but long-term persistence, maybe not.
00:54:05
Speaker
Maybe not. Before we wrap up, let's get into a couple of the use cases. Why don't you kick us off? I know we spoke about retail, definitely a big use case, oil rigs, cruise ships, all of these are definitely the
00:54:20
Speaker
I don't want to use the term. I remember I heard this sometime from a colleague, heavy edge. These are definitely scenarios where you have that half rack, full rack solutions, which you can use to run a lot of applications and a lot of compute power.
00:54:35
Speaker
But then the self-driving cars, the retail locations, the grocery chains, all of these things, I think that definitely helps. I know you wanted to talk about like a couple more around use cases as well. Yeah, I think the ones that always get me, I don't know why they get me excited, it's like the farming equipment.
00:54:54
Speaker
the state-of-the-art farming sensor edge type use cases where they're picking up how much rain has been happening or the nutrients or some crazy, crazy types of data that's being processed and therefore can feed back directly into how
00:55:12
Speaker
that parcel of land or the crop is being maintained. And those use cases are really exciting because they tie this physical world with the technological world. And again, those sensors are really providing feedback that needs to be directly fed back into some of these
00:55:34
Speaker
some of these pieces of equipment that can adjust on the fly, right? Yeah. Like, based on the moisture level in the soil, turn on the sprinklers. Or, I know you like to talk about drones, right? So fly a drone over your farm and then figure out, okay, what needs care and what needs more care, and you can make decisions based on all of that data that you're generating. Yeah. Like the thermal and multispectral cameras that can feed data on exactly the health of the crop, if it's diseased, those kinds of things.
00:56:02
Speaker
Really crazy things that we don't know nearly enough about, but those use cases are pretty good. There's a whole domain, like ag tech or agricultural tech. I'm a big fan of TechCrunch and all of their articles. They have a whole section around ag tech and what companies are doing there. Yeah. Then you can think about finance and banking.
00:56:25
Speaker
From your phone, which is technically an edge device for banking, to ATMs, and even, probably something everybody does, streaming from their Apple TV or Roku or something like that. Those are devices that need sort of that speedy response. We don't want to wait forever to start watching our shows. Anyway, lots of use cases. Let's end with where to get started.
00:56:54
Speaker
Yeah, I would just say start experimenting with the projects that we discussed on this episode, right? If you find anything that's interesting and you want us to do a detailed episode, we can definitely do that. We can bring in experts from the community; maybe we can get somebody in to talk about K3S. So we can definitely do that. But yeah, we'll have a ton of links in the show notes where you can find more information about all these projects, maybe some use cases that you can use to, like,
00:57:20
Speaker
I don't know, start thinking about things. Yeah, I mean, one place I think I would suggest you get started is go grab a Raspberry Pi and go on Amazon and find some cool sensors that plug right into that Raspberry Pi.
00:57:35
Speaker
that can do some basic measurements or calculations, and you can start to write applications that process the data locally. You can also spin something up on AWS that you can send some data back to, to visualize or even do processing, like the example we talked about before. I think that's a fun way to tinker if you're into that kind of thing. Definitely go check that out, and obviously the projects we listed are all a good place to start.
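A toy Python sketch of that tinkering setup: read a stubbed sensor on a Raspberry Pi, do a little local processing, and post the result to a hypothetical endpoint you stood up in the cloud. The URL and the sensor stub are made up; swap in a real sensor library for your hardware.

import random
import time

import requests

UPSTREAM = "https://edge-demo.example.com/readings"   # placeholder endpoint

def read_sensor():
    # Stub: replace with your real sensor driver (e.g. a DHT22 temperature read).
    return 20.0 + random.random() * 5

while True:
    samples = [read_sensor() for _ in range(10)]       # collect a small local batch
    avg = sum(samples) / len(samples)
    try:
        # Only the processed value goes upstream, mirroring the edge pattern above.
        requests.post(UPSTREAM, json={"avg_temp_c": round(avg, 2)}, timeout=5)
    except requests.RequestException:
        pass                                           # edge links drop; just try again next loop
    time.sleep(60)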
00:58:04
Speaker
Awesome. That's it. Let's wrap this up. All right. So again, we'll be at DevOps Days Boston next week, so if you are there, come say hi. Also, a shout out to joining our Slack; there's an easier way to get there now. You can head over to kubernetesbytes.com.
00:58:20
Speaker
We finally bit the bullet and created a proper website. I know. Let's go. You can listen to all of our episodes there. You can watch all of our episodes there. You don't have to go find them wherever you may find them, or YouTube. You can just go to KubernetesBytes.com, find out where to get our Slack, and watch our videos and episodes there. Go check that out.
00:58:40
Speaker
Yeah, I think that's about it. And if you're at KubeCon, we'll also be there, so come check us out. And I think that brings us to the end of today's episode. I'm Ryan. I'm Bhavin. And thanks for joining another episode of Kubernetes Bytes. Thank you for listening to the Kubernetes Bytes podcast.