Introduction and Podcast Overview
00:00:03
Speaker
You are listening to Kubernetes Bites, a podcast bringing you the latest from the world of cloud native data management. My name is Ryan Walner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.
00:00:29
Speaker
Good morning, good afternoon, and good evening wherever you are. We're coming to you from Boston, Massachusetts.
Season Two Reflection
00:00:35
Speaker
Today is December 16th, 2022. I hope everyone is doing well and staying safe. Let's dive into it. Bhavin, how have you been? This is the last episode of season two. I don't want to stop doing this, right? Come on.
00:00:58
Speaker
This is just the last episode of 2022. This has been a great season, right? I think we clearly exceeded all the different goals that we had set for ourselves. We covered so many different topics. So I'm just super pumped. I hope our audience, which has grown over the year, also enjoys all the content that we put out. And I know we have like a summary thing at the end of this podcast planned as well. But yeah, I've been doing great. I think finally the winter is here.
00:01:26
Speaker
We got our first snow. I was glad that I didn't have to shovel it, because it didn't accumulate that much. I was being lazy. We don't want to hear you talking about snow, because we know where you're going next.
00:01:39
Speaker
Yeah, it's funny: beaches of Puerto Rico. I booked this trip, I think, a month, month and a half back. And now I look at the weather next week and it's supposed to rain. So I was like, hopefully that goes away. I'm keeping my fingers and toes and all my extremities crossed. I want to actually go out and do things, not stay inside a hotel or stay inside a bar. So, yeah, fingers crossed. I'm sure you'll enjoy it.
Personal Experiences and Episode Topic Introduction
00:02:08
Speaker
That's for certain.
00:02:10
Speaker
Thank you. How about you? How have you been? Oh, these last two weeks, I was telling you earlier, my whole family has been sick constantly since we got back from re:Invent. I'm not blaming re:Invent at all. I came back to sickness, and as I was telling you earlier, we're just, like,
00:02:30
Speaker
run down and continuously sick. And I'm hoping this just means that because we're tackling the sickness before the actual Christmas holiday, we'll be good to go during and after. Oh yeah, get it out of the way, right? And then you can party. Exactly.
00:02:45
Speaker
So that's the hope, and I know we're looking forward to the time off and getting into 2023. Can't believe it's already here. I know. Yeah, this year just went by quickly. Yeah, exactly. So if you're wondering, today's episode is going to be about service mesh. We're going to do another intro, 101-style episode with just Bhavin and I. We don't have a guest today.
00:03:08
Speaker
We wanted to make our last episode of the season a nice and cozy one, just the two of us here.
Kubernetes Ecosystem and Service Mesh Introduction
00:03:16
Speaker
But before we get into what Service Mesh is and a high-level overview, we're going to cover a tiny bit of news, it looks like.
00:03:23
Speaker
Yep. So I know we are closing 2022 out, and we do have a couple of funding rounds. Much needed. I think after the first quarter of this year, we barely heard about any acquisitions or any funding rounds. So this is good news. People in the Kubernetes ecosystem are still raising money. This means that
00:03:44
Speaker
people are not losing their jobs, and they can survive the next year or the next couple of years, until they need that next funding round or until the market settles down and they can go IPO. But Trilio, one of the vendors in the Kubernetes data protection ecosystem, raised $17 million.
00:04:01
Speaker
And the way they have structured this: they raised a Series B in December of 2020, and they announced that this $17 million is just an extension of that Series B round. So I'm assuming they got a flat round. They made a few changes in their exec team, so the CEO now becomes executive chairman, and they appointed a new CEO to lead the team with the new funding. So hopefully they can do better next year. And then the second vendor that I wanted to talk about is in the Kubernetes security ecosystem.
00:04:29
Speaker
Again, we know that security is kind of the hottest thing in the ecosystem right now. Everybody wants security. So Snyk, one of the vendors there, raised a whopping $196.5 million. This is not their valuation; this is the actual amount of money they raised,
00:04:48
Speaker
at a $7.4 billion valuation. So this is a Series G for them. They did raise a Series F in 2021, where they raised $530 million at an $8.5 billion valuation. Just looking at these numbers for Series F and G, this looks like a down round, but again, this is more than enough money to help them get through the next couple of years and make sure that they can survive as a company.
00:05:15
Speaker
Congratulations to everyone at Snyk. Let's keep doing this thing. Absolutely. It makes me wonder, how many letters can you go in series funding rounds? What's the record? What's the record? I don't know. But as long as VCs are willing to give you money for a part of your company, you can keep going to Z. What happens after Z, though? Maybe the Greek alphabet: alpha, beta.
00:05:39
Speaker
I don't want to do the AA, AB, AC thing. If anybody knows, if anybody's been a part of that or knows the answer to this, send us a message. I'm actually very curious what happens after Z, if you can have that many funding rounds.
00:05:54
Speaker
Good, good. Yeah, everyone over at Snyk, congratulations. I only have one update for news; again, it's a slow news week here. But if anyone follows the persistence and storage space in Kubernetes, they probably know the name Chris Evans from Blocks and Files. He writes a number of great articles over there, and he just put out another article, an update to one he's done in the past, comparing
00:06:23
Speaker
and contrasting cloud native storage offerings and their performance, and giving the exact environments that he's running these tests on and what that looks like. So if you want to go check out this article and compare some of the more mainstream and popular cloud native storage companies out there and how they stack up against each other, obviously take it with a grain of salt. Everything is going to be
00:06:49
Speaker
you know, ideal in that scenario. But, you know, I think it's still overall a really interesting way to look at the whole ecosystem and market. Yeah. With that, Bhavin, let's jump into our topic today, which is: what is service mesh? Or,
00:07:07
Speaker
another way to say this, Service Mesh 101. I know this is a newer topic to myself. I mean, I've been aware of service meshes for a long time in this ecosystem, but never really tried to dive in headfirst. And I know we like trying to do these new types of 101 episodes. So why don't you kick us off with a high-level overview of your research around service mesh?
Service Mesh Functionalities and Benefits
00:07:30
Speaker
Yep, for sure. So one thing I realized as you were building up to that question is I don't have a dictionary definition or a 101 definition for service mesh, but I have what it means, things like that. So service mesh, if I just put it plainly, helps you add functionality to your applications running on Kubernetes that you don't have to code as part of your application. What does that mean? There are four
00:07:56
Speaker
major buckets that service mesh provides functionality in and around. There are implementation models that we'll look at, but let's talk about these four buckets first. First is security. What does this mean? If you want the different microservices in your application to use mTLS for inter-service communication, instead of coding all of this functionality into your application code, you can
00:08:22
Speaker
just rely on a service mesh like Istio to provide that functionality. And this can be done through, let's start the discussion by talking about, the sidecar-based model, right? So you don't have to code anything inside your application code, but you can have a sidecar container inside your Kubernetes pod that provides this functionality. So any traffic that goes out from your microservice to another,
00:08:44
Speaker
all of that traffic is encrypted using mTLS by the sidecar container and not by your application code. So this removes the overhead that developers would otherwise carry to build this functionality into their application. The second bucket is connection policies, or connection functionality.
00:09:02
Speaker
Let's say you are doing an A/B test or a blue-green deployment: you want to test out different versions of your application, and you want to enforce some sort of traffic splitting, retry policies, or circuit breaking for your applications. That's where service mesh can help you as well. Using a service mesh implementation, let's say you have a blue and a green version of your application running: you can ask the service mesh to enforce that 90% of the traffic continues to
00:09:31
Speaker
go to the blue deployment, the older deployment, while 10% of the traffic goes to the green deployment, so you can try out the newer features and see how users interact with them. So that's another functionality that service mesh brings to the table. Then you have observability, where you can use the service mesh dashboard to actually see how your application is communicating between all of its different components. So which microservice is talking to
00:10:00
Speaker
which microservice: you can figure all of that out and get a neat diagram. And you can use this not just to see how traffic is flowing, but also to identify where the bottlenecks are in your application. So if a specific part of your application is not doing well, maybe you need to improve or enhance that code to make sure it's not the bottleneck; service mesh can help you identify that with not just traffic flow, but also logs and metrics.
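The 90/10 blue-green split described a moment ago maps directly onto Istio's VirtualService resource. Here is a minimal sketch, expressed as a Python dict so the weights can be checked; the host and subset names are made up for illustration:

```python
# Sketch of an Istio VirtualService splitting traffic 90/10 between a
# "blue" (current) and "green" (new) subset of a checkout service.
# Host and subset names are hypothetical.

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "checkout-split"},
    "spec": {
        "hosts": ["checkout"],
        "http": [
            {
                "route": [
                    {"destination": {"host": "checkout", "subset": "blue"},
                     "weight": 90},
                    {"destination": {"host": "checkout", "subset": "green"},
                     "weight": 10},
                ]
            }
        ],
    },
}

weights = [r["weight"] for r in virtual_service["spec"]["http"][0]["route"]]
print(weights)       # [90, 10]
# Weights are proportional; totaling 100 keeps them readable as percentages.
print(sum(weights))  # 100
```

Shifting the canary forward is then just editing the two weight numbers, with no application redeploy.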
00:10:25
Speaker
Finally, you can use a service mesh deployment to enforce control. Let's say you have a three-tiered application in a microservices deployment: you have your front-end component, a middleware, and a backend database. This is not exactly what a microservices application looks like, but it's what I could come up with right now.
00:10:46
Speaker
Let's say your UI shouldn't be talking directly to your database; it should just be talking to your middleware layer. This is how you can enforce that: you can enforce which components can talk to which other components inside your distributed application. So those are the four different buckets of functionality that service mesh brings to your application, or brings to your Kubernetes cluster.
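The "UI must not talk to the database" rule above is exactly the kind of thing an Istio AuthorizationPolicy expresses. A hedged sketch follows; the namespace, labels, and service-account names are hypothetical:

```python
# Sketch of an Istio AuthorizationPolicy enforcing that only the
# middleware tier may talk to the database pods. Namespace, labels, and
# service-account names are illustrative.

db_policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "db-allow-middleware", "namespace": "shop"},
    "spec": {
        "selector": {"matchLabels": {"app": "database"}},
        "action": "ALLOW",
        "rules": [
            {"from": [{"source": {
                "principals": ["cluster.local/ns/shop/sa/middleware"]}}]}
        ],
    },
}

# With an ALLOW policy in place, requests matching no rule are denied,
# so the UI (a different workload identity) can no longer reach the
# database directly.
allowed = db_policy["spec"]["rules"][0]["from"][0]["source"]["principals"]
print(allowed)
```

The workload identity in `principals` comes from the mTLS certificates the mesh already issues, which is why the security and control buckets reinforce each other.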
00:11:07
Speaker
Yeah, absolutely. And you brought up that you don't have an actual definition. If you look up "what is service mesh", like if you literally do this in Google right now, you'll find a lot of the same language, namely that it's a separate infrastructure layer; you'll pretty much always see those terms, at least in what I've seen. And it really is a separate infrastructure layer for how applications share data with one another, right?
00:11:33
Speaker
At its core, that's really what it's doing, but it gives you all these other functionalities around control and security. But the key there is, and I think this is a good segue into what I wanted to talk about, really: why does service mesh exist? You get all this functionality in a separate layer, but what did it look like before? Why does service mesh exist?
00:11:58
Speaker
And, you know, we have to talk a little bit about the challenges that we've seen in microservices and that architecture compared to the monolithic, traditional architecture,
00:12:10
Speaker
which is, you know, a monolith is everything kind of built into one big code base, typically running on a single server or a very large server, sometimes with HA, but a lot of the components of that application will be
Microservices vs Monolithic Architecture
00:12:28
Speaker
all together. And the communication between, say, an orders and
00:12:35
Speaker
a checkout and a UI component are all within that code base. So they don't have to communicate over the network in a monolith; it's all within a giant, probably Java, code base or something like that. No hard feelings on Java or anything, but it was designed fundamentally differently. And as we started to break that monolith up and have small teams working on small services,
00:13:02
Speaker
that checkout service is now its own little microservice and that shopping front end is its own little microservice. They depend on each other now, and they run completely separately in containers, as we've seen microservices architectures come to fruition with things like Kubernetes.
00:13:21
Speaker
And those communications between those services are now more complex and a little more fragile unless we have something like a service mesh. And I say this because there's real value in adopting a service mesh once you get to
00:13:42
Speaker
an actual microservices deployment. Meaning that if you're running a monolith, or even a monolith in a container, which happens, right? There's less value in a service mesh. Really, I think you have to dive deep into that microservice architecture, and that's where you get the "for free" components that no longer have to exist in the application logic. Right? So the monolith traditionally would do all the
00:14:11
Speaker
retry logic: if it had to talk to another service, it would implement in code how many times to retry and what backoffs to use. It would implement some telemetry, sent somewhere. It would configure certificates and TLS, and all that stuff. You can think of all those components that used to be in a larger code base now separated from a microservice. You'll see this when looking up service mesh:
00:14:40
Speaker
one of the value propositions is that application teams can just focus on business logic and not really worry about all this other communication logic and security; leave that to the service mesh, right? It kind of exists right next to the application as a component. So that, I think, is important to understand: why do we even care about implementing a service mesh? It really comes back to microservices.
00:15:10
Speaker
Yeah, and to extend your point, it definitely removes the overhead from a development perspective, since you don't have to code all of this additional logic as part of your business logic, but it also removes runtime overhead. Microservices are supposed to be small and compact and do one thing really well. If you keep adding the same logic over and over again,
00:15:33
Speaker
you are adding a lot of overhead. So I think the model that started by just having a sidecar container inside your application pod really worked for service meshes.
Sidecar Model in Service Mesh
00:15:43
Speaker
So I think next, let's talk about what the sidecar model is, right? Like how it was implemented and what benefits it provided. Yeah, yeah, absolutely. And before we dive into that, what a sidecar is, I want to step back and talk about something that
00:16:01
Speaker
helped me fully understand this a little better, back when I didn't know a term like sidecar that well, right? Obviously, like,
00:16:09
Speaker
being in this industry, sidecar is something we talk about a lot, but if you're not familiar with it, it might be a little confusing. So in terms of components, a service mesh is really a set of proxies. And if you're familiar with proxies in a traditional sense, you might set up or use a proxy in your corporate day job that will intercept traffic and do something with it: secure it, that kind of thing.
00:16:37
Speaker
Right? Ultimately, a service mesh is a whole concept around how to provide proxying services for your individual microservices. And those proxies often sit in sidecars. It's not the only way to implement it, but I did want to bring that up before you dive into sidecars. But now you can go into sidecars. Gotcha. Thank you.
00:17:05
Speaker
Thank you for that background. Sidecars, right? Again, this was a way for Kubernetes to ensure that you can have non-application code as part of your Kubernetes pod. So I know when we started with Docker, containers were the smallest addressable unit, but in Kubernetes that becomes a pod, and a pod can have one or more containers. So this enabled functionality like init containers, which help you do certain things before your application container actually starts. And then sidecar containers can provide additional
00:17:32
Speaker
capability like the mTLS functionality, the security capabilities, things like that. So that's where the sidecar model comes in. You are running it as part of your application pod. So this makes a few things easier. As we said with service meshes, the developer doesn't have to worry about writing all of this logic. All you need to do is add that container into the Kubernetes pod for each of your application components, then deploy your application, and everything just works.
00:17:56
Speaker
So that's definitely a great benefit, right? It simplifies communication without adding a lot of development overhead. Yeah, and ultimately you're doing a very similar deployment as you always would with the typical YAML you're using; you're just adding a little bit more YAML, and another container exists within that pod. And that makes up the data plane component of the service mesh.
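To make the "a little bit more YAML" point concrete, here is a rough sketch of what automatic sidecar injection effectively does to a pod spec. The `istio-proxy` container name matches what Istio's injection webhook adds; the application details are made up, and the function below is only an illustrative stand-in for the mesh's mutating admission webhook:

```python
# Sketch: the pod starts with only the application container; the
# mesh's admission webhook adds a proxy container (istio-proxy in
# Istio's case) that transparently handles mTLS for all traffic.

app_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "checkout", "labels": {"app": "checkout"}},
    "spec": {
        "containers": [
            {"name": "checkout", "image": "example/checkout:1.0"},
        ]
    },
}

def inject_sidecar(pod):
    """Illustrative stand-in for the mesh's mutating admission webhook."""
    injected = {**pod}
    injected["spec"] = {
        "containers": pod["spec"]["containers"]
        + [{"name": "istio-proxy", "image": "istio/proxyv2"}]
    }
    return injected

meshed_pod = inject_sidecar(app_pod)
names = [c["name"] for c in meshed_pod["spec"]["containers"]]
print(names)  # the app container plus the proxy sidecar
```

The application team's YAML stays focused on the app; the extra container shows up at admission time, which is why this is described as a separate infrastructure layer.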
00:18:23
Speaker
Yep, and this helps with both layer 4 and layer 7 functionality. So layer 4 can just be the mTLS functionality that we just spoke about, but all the intelligent traffic routing and the control aspects can be handled at layer 7 as well. So you bring in both of these functionalities by just injecting these sidecars into your applications.
00:18:48
Speaker
If you don't have any other challenges or benefits to talk about, I really want to talk about challenges next.
00:18:54
Speaker
Yeah, I think we should absolutely talk about challenges. Let's go into it. Okay, so even though sidecars seem really easy, one of the challenges that we, or the ecosystem, have seen is the over-provisioning of resources. Since this is a sidecar container, at the end of the day you have to specify requests and limits as part of your Kubernetes pod to make sure it has enough resources and the sidecar itself doesn't become the bottleneck of your application.
00:19:21
Speaker
So even if your application is not utilizing all of its resources, you always have to give your sidecar containers enough resources to operate at full load. So this resulted in a lot of over-provisioned resources, a lot of waste in the CPU and memory that your application demands from your Kubernetes clusters. Also, each
00:19:48
Speaker
application component needed a sidecar container. Basically, if you didn't add it before day zero and you're adding it at day two, this means your application needs a restart; it needs to come back online for this communication to work. When you're doing this at scale, or maybe across different applications, this did result in applications being restarted, which is definitely not something you want.
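A quick back-of-the-envelope calculation shows why per-pod sidecar requests add up. The resource figures below are illustrative, not measured from any real mesh:

```python
# Back-of-the-envelope cost of the sidecar model: every application pod
# carries a proxy sized for peak load, whether or not the app is busy.
# The figures are illustrative assumptions, not measurements.

pods = 200                     # application pods in the cluster
sidecar_cpu_millicores = 100   # CPU request reserved per sidecar
sidecar_memory_mib = 128       # memory request reserved per sidecar

total_cpu_cores = pods * sidecar_cpu_millicores / 1000
total_memory_gib = pods * sidecar_memory_mib / 1024

print(f"{total_cpu_cores} CPU cores reserved just for proxies")
print(f"{total_memory_gib} GiB of memory reserved just for proxies")
```

Even with modest per-sidecar requests, a mid-sized cluster ends up reserving whole nodes' worth of capacity for proxies alone, which is the waste being described here.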
00:20:16
Speaker
I think more challenges include things like: a sidecar upgrade requires restarting the application, just like adding it after day zero does, and any new functionality that gets introduced in your sidecars needs a restart of your application itself. And then I think one main pain point was around jobs. Kubernetes has a construct called a Job, which does one thing and then dies off.
00:20:42
Speaker
But even for that job you needed a sidecar container, and the sidecar container had a longer life than the job, which just did its one thing and exited. So you had a lot of these sidecar containers lingering around in your Kubernetes cluster.
00:20:55
Speaker
Yeah, some overhead for sure there. And I think Kubernetes does a lot to enable service mesh, but there are also some things that are a little bit of an anti-pattern, or a friction point, in developing it with the sidecar model. Not to mention, to adopt a service mesh you're basically adding a network hop into your application: typically your application would communicate directly with whatever other service
00:21:25
Speaker
in a lot of cases, but now traffic goes into the proxy and then out of the proxy. That's something that comes up often, but it's one of those things you can't really get away from if you're adding an infrastructure layer to do this kind of work. And if it were baked into your application, you would still be
00:21:43
Speaker
performing that logic; it would just live inside the app instead of sitting next to it. The other thing I want to bring up is the overall complexity, right? Especially from a troubleshooting and maintenance perspective: if you're the person looking after these clusters with meshes, you now have
00:22:01
Speaker
literally double the amount of containers to potentially monitor and troubleshoot if something goes wrong.
Challenges and Advancements in Service Mesh
00:22:09
Speaker
So definitely some downsides, but the sidecar model is a proven model. It is the way service mesh was really battle-tested, so to speak. And I know
00:22:22
Speaker
even with some of the advances in service mesh that we're going to talk about next, sidecars still exist; they're not going anywhere anytime soon. But yeah, maybe that's a good segue into: what do service mesh advances look like today?
00:22:38
Speaker
Yeah, and to your point, sidecars are still around and are not going anywhere in the near future. Even if you look at the day-zero event called ServiceMeshCon at KubeCon Detroit: obviously I didn't attend it in person, but to get ready for this episode I did go and watch maybe 70% of the content on that channel. So we'll have a link for it in our show notes as well, so you can go and watch those videos.
00:23:04
Speaker
There are going to be pros and cons for any model. But think of organizations that were using this sidecar-based model only for basic layer 4 functionality like mTLS, just mTLS: they were adding a lot of overhead that was not needed. All the layer 7 functionality was not even needed for their application; they just needed mTLS, and they still had to have one sidecar container per application pod.
00:23:27
Speaker
That's where something like ambient mesh comes into the picture. So ambient mesh basically removes this dependency of having one sidecar container per application pod and moves it to one proxy per Kubernetes worker node.
00:23:44
Speaker
Let's say you were running 20 different application pods on one single Kubernetes node: instead of having 20 different proxies, you have one proxy, maybe a bigger instance of that proxy, running on that node, which handles things like layer 4 mTLS encryption for communication between different application components. So if
00:24:03
Speaker
your application has to communicate between nodes or between services, traffic goes to this proxy running on the node, and then on to the different node, or to a different microservice running on the same node. So it removes that overhead.
00:24:16
Speaker
And I will clarify: ambient mesh is a name we're familiar with in the Kubernetes ecosystem, tied to a very popular service mesh in the CNCF called Istio. But really, an ambient mesh is also called a shared agent model, meaning that you have, like Bhavin said, a single proxy agent on the node, shared among the applications running on that node.
00:24:41
Speaker
Yep. And it definitely helps resolve some of the challenges that we just discussed with the sidecar model. It reduces costs: instead of having a proxy per container or per pod, you have a proxy per node. It becomes a multi-tenant proxy as well. And the way
00:24:57
Speaker
ambient mesh works is that this proxy just does L4. If all you need is mTLS functionality, having this one proxy per node handles it for you. It also decouples the proxy from the application, which simplifies operations: you don't have to go and add this container inside your YAML file for your deployments, and if the proxy needs an update, you don't have to restart your
00:25:18
Speaker
applications; all of those benefits of simplifying ongoing operations come into the picture. And then it improves performance: since it just does L4, if you just need mTLS, you have a faster way of communicating between the different components in your application. So those are some of the benefits that ambient mesh brings to the table as well.
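The cost argument here mostly comes down to how proxy count scales in the two models. A trivial comparison, under assumed cluster numbers:

```python
# Proxy count scaling: the sidecar model needs one proxy per pod,
# while an ambient / shared-agent model needs one L4 proxy per node.
# The cluster numbers are illustrative.

nodes = 10
pods_per_node = 20

sidecar_proxies = nodes * pods_per_node  # one proxy injected per pod
ambient_proxies = nodes                  # one node-level agent per node

print(sidecar_proxies)  # 200
print(ambient_proxies)  # 10
```

The per-node proxy has to be sized for the aggregate traffic of its node, so it isn't a free 20x saving, but the number of things to schedule, upgrade, and monitor drops dramatically.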
00:25:40
Speaker
Yeah, and if anyone's familiar with the concept of virtual overlay networks, it's conceptually similar: you have an agent on the node. If folks are familiar with Open vSwitch, this was very similar in the network space,
00:25:55
Speaker
where you had an agent running on a node that could build tunnels, encapsulate network packets over a tunnel to another node, and then decapsulate them and get them to the right application, those kinds of things.
00:26:10
Speaker
This works in a similar way for the service mesh component: it has those tunnels from node to node and provides access to the specific application. Now, I will say this, and it brings back up the sidecar thing: to do other things beyond layer 4,
00:26:28
Speaker
there is still the concept of a proxy; in the ambient mesh world they call them waypoint proxies. This basically acts as the container-based proxy that we typically saw with sidecars.
00:26:43
Speaker
But I believe, at least in this architecture, it runs per namespace. What happens is, if you need to do more than layer 4, traffic won't go right to the application: it will go to a waypoint proxy. Basically, if you're familiar with the concept of waypoints, it has to hit another point, be processed for layer 7, and then go back down to the application. So depending on your needs, there is still more added in terms of
00:27:11
Speaker
compute and those kinds of things, but it's still overall much more efficient, because you're looking at per-namespace versus every single pod.
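For reference, Istio's ambient implementation declares a waypoint proxy as a Kubernetes Gateway resource with a dedicated gateway class. A sketch under that assumption follows; the schema has shifted across Istio releases, so treat the field values as illustrative:

```python
# Sketch of how Istio's ambient mode declares a waypoint proxy: a
# Gateway resource using the "istio-waypoint" gateway class, scoped to
# a namespace. Names and values are illustrative, and the exact schema
# has evolved across Istio releases.

waypoint = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "Gateway",
    "metadata": {"name": "shop-waypoint", "namespace": "shop"},
    "spec": {
        "gatewayClassName": "istio-waypoint",
        "listeners": [
            {"name": "mesh", "port": 15008, "protocol": "HBONE"},
        ],
    },
}

print(waypoint["spec"]["gatewayClassName"])  # istio-waypoint
```

The point of the sketch is the shape: one declarative resource per namespace (or per identity), rather than one injected container per pod.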
00:27:19
Speaker
Oh yeah, and that's so true, right? You're not losing any functionality that was in the sidecar model just by moving to this ambient mesh based deployment. You still get all of these things through that L4 proxy and that waypoint L7 proxy. So having one per namespace definitely makes it easier, right? I haven't spoken to a lot of people that are running multiple applications inside a namespace; namespace usually becomes the construct which encapsulates your application deployed on the Kubernetes cluster.
00:27:48
Speaker
Or at least a tenant which represents some type of application. Yes. And then going back to your secure tunnel between the nodes using L4: I know you described how it works; for people who want to learn more about it, it's called the HBONE protocol, for HTTP-based overlay network encapsulation.
00:28:09
Speaker
HBONE. I like the short name more. I didn't know what it stood for, but it sounded so cool. And then I was like, okay, maybe I should know what it actually stands for. So HBONE, and then it establishes a tunnel, via the ztunnel agents, between your different nodes to enable secure communication between all the application components across nodes. Exactly. And that's a good point that you brought up before: you're not losing anything, whether you're doing it as a sidecar or doing it with more of the shared proxy,
00:28:32
Speaker
shared agent mechanism. There are pros and cons to both, I would say; take a look at both. There's a whole slew of companies out there, and I actually want to go through some of them, because there are a ton that we put on here. But I don't think we went through the overall list of features. Mutual TLS is definitely one of the biggest reasons to adopt this, because having to conceptually push this onto every development team and every application
00:28:57
Speaker
is a lot of overhead, versus just: don't worry about it, let the service mesh do it. There's a lot of benefit there, with security obviously being at the forefront. But there's also generally service discovery, load balancing, encryption, failure recovery, high availability, latency-aware load balancing based on the control aspects that Bhavin was talking about,
Service Mesh Features and Solutions
00:29:16
Speaker
success-rate-based routing, transparent traffic shaping; we used to call those canary deployments, but it's very similar in that case. So there's a lot you can do from the data plane. And that's only the data plane, right? The control plane then adds so much to the overall SRE concept of being able to manage policies from a source of truth and
00:29:40
Speaker
monitor them, and do all the telemetry and traces that we've talked about in general with microservices and Kubernetes. It seems like you're adding a lot of complexity, but if you are really adopting these things, they're going to add a lot of overall benefit.
00:29:59
Speaker
Those are all provided by many of these service meshes. I'm going to go through a list of service meshes here, and I can't speak for every one of them as to whether they provide all of those services I just mentioned. But I was surprised, when I started going through this list, by which ones definitely stand out versus the ones I didn't know about. So there's AWS App Mesh. There's Azure Service Fabric Mesh.
00:30:26
Speaker
It's also interesting to see how these are named. There's Buoyant's Conduit; I didn't know about that one. The F5 NGINX Service Mesh, and if you're like, oh, I know that NGINX, that's because NGINX is something we know can provide a proxy component in traditional IT, I should say.
00:30:48
Speaker
The Google Anthos Service Mesh. HashiCorp Consul can provide service discovery and very similar mesh technology. Istio, or "Istio", I guess, depending on how you say it; I've never heard the second pronunciation.
00:31:04
Speaker
And that's, I think, the popular one in the CNCF and in the Kubernetes ecosystem. For the longest time it was outside the CNCF, and now it's a CNCF project. Everybody who was already using Istio celebrated, had a party, when Google actually donated the project to the CNCF.
00:31:22
Speaker
Yeah, exactly. And Kong Mesh; Kuma; Linkerd, another very popular one for a long time, one of the original ones, I think Linkerd was early on; the Red Hat OpenShift Service Mesh. Solo.io is also a very forward-thinking company behind ambient mesh, and they have something called Gloo Mesh. Definitely go check out Solo.io, because they're doing a whole bunch of awesome stuff in this service mesh space.
00:31:51
Speaker
Tetrate. Tigera's Calico Cloud, I guess, provides a service mesh. I mean, I'm familiar with Calico. Traefik, that's T-R-A-E-F-I-K, Labs. Also, I think, one that's well known. And then obviously one from VMware:
00:32:09
Speaker
Tanzu Service Mesh. And I'm sure there are ones I didn't talk about here, but just doing a preliminary search, there are a lot more than I originally thought. Yeah. And I think one of the things that you definitely missed, even though I had access to this list before,
00:32:24
Speaker
are the people from Cilium. Cilium, being a CNI, does some of these things at the CNI level, which is a really cool implementation detail, and they are using eBPF functionality. Instead of running that proxy in user space, they're actually leveraging eBPF and running all of this L4 proxy functionality in the kernel space. Again, this is a 101-level episode. I don't know enough about what Cilium actually does or how these things work together. Anybody from Cilium, if you're listening to this,
00:32:53
Speaker
maybe come on our pod and have a deep-dive discussion on service meshes and how you're using all of this to help customers. But yeah, that was another one that I wanted to add to your list. Yeah, absolutely. And I think that goes for anyone who works in this space. If you're a practitioner, or if you're working at one of these companies, yeah, please come let us know. We'd love to talk to you
00:33:15
Speaker
specifically about what you're doing and dive into the details there. But I think this has been a really good 101 episode. I know that just doing these episodes is really valuable to me.
00:33:30
Speaker
If I'm less familiar with a certain area, this is a really good way to dive in here.
Podcast Journey and Future Plans
00:33:36
Speaker
And again, we're trying to learn with the community, be open with the community, and we know we're far from experts, so we don't claim to be either. So if you're listening to this and know a lot more, please reach out to us. We'd like to talk to you and learn more. With that being said, we do want to get to our wrap-up, because this is, as we said, the last
00:33:56
Speaker
episode, not season, of season two, our long, long season two, and talk about our plans for a season three as well. So Bhavin, how did season two go for you? I think season two had a lot of ups. I couldn't really think of any downs. Well, I can think of one. We started uploading our stuff on YouTube, and YouTube
00:34:20
Speaker
said we were spam and kicked us off. So we're trying to get back on. I don't know how we are spam, dude. I think one of the things is we posted all the episodes together, and maybe that was it; they all had the same image. But again, people, we did have a YouTube channel for a while, and then for some reason YouTube decided otherwise. We'll get it back.
00:34:43
Speaker
But just to discuss a few highlights of season two, right? I can't even remember the number of episodes that we had, which was way more than what we did in season one. But just talking about the Spotify Wrapped statistics, we created over 1,050 minutes of content. So
00:35:00
Speaker
to all of our listeners, thank you so much. A thousand minutes of just paying attention to someone is a lot of time, so we do appreciate you giving us those thousand minutes of your time this year. Thanks for putting up with me for that long as well.
00:35:16
Speaker
No, and I think the highlights, right? Our podcast was heard in 62 countries, which is awesome. That means people in 62 countries care about Kubernetes, if not more. It's definitely a growing ecosystem. We were amongst the top 20 most shared podcasts and in the top 10 most followed podcasts in the Kubernetes ecosystem. So these are just some of the numbers that make us feel good and encourage us to keep doing this and keep doing a better job at it.
00:35:46
Speaker
Absolutely. And for all of our listeners, again, thank you. We watch our plays and our listens every day, and we're excited all the time as the number of listeners goes up. We're going to do a lot more interesting things next year. We're going to expand to video, hopefully, if all goes well. We've got to get our YouTube channel back, but we're going to start recording video with our guests.
00:36:13
Speaker
I know a lot of people like to consume video-based podcasts. It doesn't mean we're going to stop producing the audio-only version; that's still going to be there. If that's the way you listen, and I know it's the way I listen most of the time, it will not change for you. If you're moving along, doing house chores, this is the best way. Just put your AirPods on, or if you're a Samsung user, maybe Buds, and then just listen to our podcast.
00:36:36
Speaker
Exactly. We're definitely going to dive into that some more. We're also going to be doing maybe some sponsorship stuff. That might mean a few times there's an ad here or there. We'll see. We've liked this ad-free experience we've been able to provide to our listeners, but a little bit of money coming in here and there allows us to do some more interesting things like provide a website for everyone to go to.
00:37:00
Speaker
to find, you know, sort of an archive of episodes, or find out some more information, or maybe let us email you some stickers, that kind of stuff, right? Have some t-shirts finally, right? I know this year at KubeCon we did a sticker giveaway; maybe next year we can bump it up to t-shirts or hoodies and just grow this community.
00:37:18
Speaker
Exactly. And when we do live in-person episodes and bring all of our equipment to KubeCons, it helps to fund those things. So hopefully that's not off-putting for anyone, but we're definitely going to dive into that. And if you are at a company and you do podcast sponsorships, reach out to us. We'd love to be able to do more for this community by doing some more with you. So
00:37:44
Speaker
those are definitely some big goals and aspirations for us. But I know, again, our listeners mean everything to this podcast, and we can't thank you enough.
00:37:56
Speaker
Thank you so much for all your time. I just have a couple of asks to end this episode and end this year. If you're meeting your friends or family for the holidays, and there is anybody who is mildly interested in the containers or Kubernetes ecosystem, share this podcast. Each person sharing it with somebody they know helps us grow our audience. Obviously, that's the best way to grow our audience as well, instead of just
00:38:21
Speaker
having those promoted tweets on Twitter, I guess. So if you meet anyone in the ecosystem, please share this podcast. It definitely helps us grow our audience base. And then the second ask is something that we have mentioned over the last couple of episodes. If you have interesting stories to share, if you have some gotchas that you want to share, or if you had interesting experiences with Kubernetes, not just storage, but
00:38:44
Speaker
anything around GitOps or eBPF or security, anything that we have covered throughout this year, share those clips, right? It's really easy. Open the voice memo app on your phone, record a quick clip, and email it to us at kubernetesbytes@gmail.com, or send it to us on Twitter, LinkedIn, or maybe individually. But yeah, do that during your holidays. It would be a fun episode for us to put together, and we'd have the community's voice heard as well through this forum that we have going.
00:39:11
Speaker
Yeah, and if you think doing something like that is not enough time for what you want, we'd love to have you on the show, doing a practitioner approach. Reach out to us if you want to do more than just that. We'd be happy to have you on the show. And as Bhavin
Future Podcast Directions and Announcements
00:39:24
Speaker
said, this show started off sort of persistence- and storage-focused, based on our backgrounds. But obviously, through a lot of episodes, we've grown beyond that. We're going to cover
00:39:34
Speaker
a lot of things in the cloud native and Kubernetes ecosystem, as you've probably heard our episodes kind of expand, and we're going to keep going down that route, because I think that gives us a lot more we can talk about and share with everybody. And we've gotten a lot of good feedback so far from our listeners. And yeah, again, I will just end by saying thank you, and we'll see you in
00:39:56
Speaker
early to mid-January. We're going to take a little break, but we already have a few guests lined up to talk about a bunch of awesome things, security and some more stuff. So, thank you, and happy holidays. Yeah, likewise. And with that, that brings us to the end of today's episode. I'm Ryan. And I'm Bhavin. Thanks for joining another episode of Kubernetes Bites. Thank you for listening to the Kubernetes Bites podcast.