
The evolution of service mesh technologies

S4 E10 · Kubernetes Bytes
1.6k Plays · 7 months ago

In this episode of the Kubernetes Bytes podcast, Ryan and Bhavin talk to Christian Posta, VP and Global Field CTO at Solo.io, about all things service mesh. They discuss how things have evolved from the early Linkerd days to sidecar-less Istio service mesh implementations. They also talk about how a service mesh can help you connect to application components running outside Kubernetes, and how developers and platform engineers have a shared responsibility model when it comes to implementing service mesh using internal developer platforms.

Check out our website at https://kubernetesbytes.com/  

Episode Sponsor: Nethopper
Learn more about KAOPS:  @nethopper.io
For a supported demo:  [email protected]
Try the free version of KAOPS now!   https://mynethopper.com/auth  

Cloud Native News:  

  • https://loft.sh/blog/our-24m-series-a-led-by-khosla-ventures/
  • https://www.harness.io/blog/celebrating-150m-in-new-financing-to-accelerate-innovation
  • https://www.akamai.com/newsroom/press-release/akamai-announces-intent-to-acquire-api-security-company-noname
  • https://www.linkedin.com/posts/rouvenbesters_its-official-the-otomi-platform-has-activity-7194604616901120000-48g7?utm_source=share&utm_medium=member_desktop
  • https://www.wiz.io/blog/celebrating-our-1-billion-funding-round-and-12-billion-valuation 

Show Links: 

  • https://devsummit.infoq.com/conference/boston2024 
  • https://www.solo.io/topics/cakes-stack/
  • https://www.solo.io/  

Timestamps: 

  • 00:06:10 Cloud Native News 
  • 00:15:37 Interview with Christian Posta
  • 01:01:58 Key takeaways
Transcript

Podcast Introduction

00:00:03
Speaker
You are listening to Kubernetes Bytes, a podcast bringing you the latest from the world of cloud native data management. My name is Ryan Walner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.
00:00:29
Speaker
Good morning, good afternoon, and good evening wherever you are. We're coming to you from Boston, Massachusetts.

Ryan's Biking Accident

00:00:35
Speaker
Today is May 15th, 2024. I hope everyone is doing well, better than me, and staying safe. Let's dive into it. Why better than you, Ryan? Come on. Well, yeah. Bhavin and I were just talking about this. I just got some X-rays today from a nine-day-old mountain biking accident, and I have three broken ribs. So yeah, it's been a painful week.
00:01:00
Speaker
It doesn't sound fun at all. Listen, it was fun. It was fun. And like I said, I still biked 11 miles after that. It was probably just mostly adrenaline and shock, but I'm still pretty proud of it.
00:01:15
Speaker
Yeah, you'll just be out for the next four weeks, hopefully taking it slow and then, yeah. I know. Taking it slow, slowing down. That's hard for me. I'm going to go through withdrawal of things on two wheels, I feel like. I can probably still do the Peloton. I can still do the Peloton. No, that's true.
00:01:31
Speaker
I've done that. Even if you want to go outside, just go outside and do a hike. Go for a walk instead of jumping. Walking is less comfortable, I feel like. Okay. I mean, breathing, I got that down. So we're good.
00:01:47
Speaker
What do you have to... How you been?

Red Hat Summit Overview

00:01:50
Speaker
I've been good. Last week I was at Red Hat Summit in Denver. That's right. How was that? Fun show. A lot of focus was around AI for sure. Not a shocker. Not a shocker.
00:02:02
Speaker
How much was OpenShift AI stuff talked about? Yeah, so OpenShift AI was definitely brought up, but I think their main focus was around something called InstructLab, which is their open source way of allowing users to download models from Hugging Face and then
00:02:18
Speaker
they have a neat little way of, instead of building RAG, fine-tuning the model. So once you download a model, you can define a knowledge base and a specific taxonomy, and again, these are all their terms. Once you feed it some data, you can actually ask one of their models to generate synthetic data based on what you gave it as a knowledge base. Okay. Then you can retrain the model for your specific use case. So
00:02:43
Speaker
Again, it's all open source, similar to everything that Red Hat does. And they also announced a couple of LLM speech and code assistant models with IBM Research. I think they're called Granite models. But yeah, just Red Hat throwing its hat in the ring for having its own version of open source models as well. So overall a fun conference. And then they did highlight
00:03:12
Speaker
the OpenShift web console, the UI for OpenShift that we are all familiar with. They have a new chatbot, kind of an assistant, that shows up in the web console. What does it help you do? Does it do troubleshooting, or just how to use the... So it's alpha right now. The use case that they demoed during the keynote was more around...
00:03:30
Speaker
Oh, I need to auto scale my application. How do I do that? And it gives you, like... But it isn't, like, practical? Yeah. Yeah. It doesn't perform the actions for you. It just tells you how you can do certain things. One cool thing that I liked about it was you can deploy it from OperatorHub. It just shows up as a different operator that you can choose to install or not install.
00:03:53
Speaker
But you can also switch up the LLMs that it's using on the back end. So you can use an OpenAI model. So you as the user can select which one? Yeah, the user can select which LLM they want to use to back the chatbot, to be the intelligence behind the chat. From a list, or can you also pull in your own? No, I think they have a certified list. I think they showed OpenAI and their own model as two out of the three examples. But yeah, I'm hoping more and more open source models will get added.
00:04:23
Speaker
Yeah. I interacted with a new support chat AI today. I'm moving, and so I had to set up gas service with Eversource in Massachusetts. And they had, like, this is the new and improved... So the chatbot's voice was, you know, well spoken and all these things, but terribly unhelpful again.
00:04:47
Speaker
Terribly unhelpful again. I'm like, this is not improving the process. I wouldn't expect Eversource to be there on day one, dude. Just saying. I'm just saying I lost a little faith in how these things are being implemented. But yeah,
00:05:01
Speaker
you know, two out of ten, Eversource. Eventually we'll get there. I think for some of these things, a basic website works: okay, enable the service, enter your address, and that's it. You don't need a half an hour conversation just to figure it out. Here's the thing. You know, the old chatbots, if you said, let me talk to a representative, they immediately just sent you off to a representative. This time,
00:05:25
Speaker
little chatbot Sally or whatever her name was, was really persistent. I said, can I speak to a representative? She goes, you know, it'd be really easier if you could let me help you again. And so I had to ask three times for a representative. I was like, oh, this chatbot's got some spunk to her.
00:05:44
Speaker
Yeah, there has to be a dash dash force flag like, come on. That was unexpected. So anyway, we have a fun guest. We'll introduce him in just a minute, but it's all going to be on sort of like service meshes and stuff like that. You'll probably know the company, but before we get into who that is and what the topic is, why don't we dive into some news?

Wiz's Market Strategy

00:06:08
Speaker
Bhavin, why don't you kick us off? Yeah, for sure. So similar to last week, when we covered a lot of acquisitions
00:06:14
Speaker
and funding rounds, we have some more this week, which is obviously good to hear for our ecosystem, right? So continuing from last time with security, we discussed a lot about Wiz and how they acquired Gem Security and were in talks to acquire Lacework as well.
00:06:30
Speaker
They announced that they have raised another billion dollars at a $12 billion valuation. And I'm sure, obviously, $350 million out of this billion goes to Gem Security, but still, they are, I don't know, strengthening their war chest and just consolidating all the different smaller vendors to help them reach that billion dollar ARR goal that they have before they go public.
00:06:53
Speaker
Congratulations to everyone at Wiz; a billion dollars is not chump change, it's a substantial amount. Hopefully you spend it wisely. It's interesting that even in 2024, people can raise this much money, right? I know we saw Snyk back in 2021 raise a billion dollars at more than a $10 billion valuation. But that was the COVID era, and everybody was getting funded and everybody was getting a lot of money.
00:07:21
Speaker
But being able to raise a billion dollars right now is good. So yeah, if you are in the Kubernetes security ecosystem, there are definitely vendors out there that are still hiring and still growing and still scaling.

Funding Highlights: Loft Labs & Harness.io

00:07:36
Speaker
Next up, one of our previous guests, Loft Labs, the company behind the open source project vCluster, raised a Series A of around $24 million. They are just going all in. I think they found their product market fit, or they found a product that the community loves and wants, around virtual clusters, around vCluster.
00:07:58
Speaker
Ryan, I know we discussed their partnership with SUSE around the ability to deploy vClusters on Rancher, and then how they also integrated with a local CSP in Europe. I think this all just shows the power of the vCluster technology, and congratulations to Lukas and his team at Loft Labs for this new money. So exciting times for them for sure.
00:08:22
Speaker
Yeah, definitely a cool project, cool company. I think the whole virtual cluster, virtual Kubernetes cluster, idea itself is super valuable, especially in sort of the dev test space and stuff like that. But I know they're pushing towards production more and more and that kind of thing. Yep. And then more funding rounds. In this case, it's more around raising an equity, sorry, a debt financing round. So Harness.io, a vendor in the CI/CD or
00:08:50
Speaker
continuous delivery space inside Kubernetes, or inside the CNCF ecosystem, raised a debt financing round of $150 million. Again, looks like from a different article, this might be the last, quote unquote, private round that they raise. I know they didn't have to give up any equity for this round, just debt financing from Silicon Valley Bank and Hercules Capital.
00:09:16
Speaker
But yeah, this basically gets them in a position where they can scale as much as needed before they are ready to go public. Looking at one of the articles, they also crossed $100 million ARR last year. So these are substantial revenues and ARRs for sure. I'm hoping once the floodgates open for IPOs, we might see some of these companies in the line, or in the queue, to go public and raise more money. Absolutely. Good for them.
00:09:47
Speaker
And then finally Akamai, I know we have had Steve and Russ on the podcast a couple of times, but yeah, Akamai is

Akamai's Strategic Acquisitions

00:09:55
Speaker
in the news. They acquired another security company, a company called Noname Security. And this is an interesting acquisition as well. Noname Security has been around for some time, and they last raised money in December of 2021 at a billion dollar valuation. So a lot of money, obviously.
00:10:17
Speaker
It sucks that now they got acquired for $450 million, but at least they got an exit. We all know that the valuations were pretty inflated during the COVID era, or the ZIRP era. Still, getting an exit, and Akamai definitely will do great with it, but I just wanted to share that with our listeners. $450 million for Noname Security, now part of the Akamai security suite.
00:10:42
Speaker
Yeah. And speaking of Akamai, I know we just spoke about how not too long ago they acquired Ondat, right? Yeah. Yeah. And they're just, I guess, buzzing away, because they also announced recently, this week I think, that they're acquiring the Red Kubes Otomi project, or company. And we've had them on the show to talk about their platform and
00:11:07
Speaker
I've had some hands-on experience with it. When we did a bunch of episodes on platform engineering, this is really about simplifying using Kubernetes and all the layers that go into it. It's really a layer on top of Kubernetes where you can
00:11:23
Speaker
pick through sort of this marketplace and deploy things really easily, and have sort of a zero-to-production style platform within, you know, a day, and probably sooner if you're familiar with the different layers and everything. So I know
00:11:41
Speaker
I had a good time using the product, and it seems like Akamai is really pushing for doing a lot. I'd love to see, you know, fast forward two years, what Akamai is going to morph these acquisitions into when it comes to their cloud platforms and everything, right? Security, platform engineering, storage, and everything they already had to begin with. So are they really rolling out, like, an
00:12:10
Speaker
on-prem solution with an IDP? Or do you see them enhancing their cloud portal with IDP-like functionality? Yeah, I mean, I think it could definitely be both, right? Otomi to me is such a sort of
00:12:25
Speaker
a flexible toolset that I could see their customers on-prem definitely taking advantage of it. Their cloud stuff, it might take more to intertwine those things, or maybe it'll be deployable from their cloud-based systems or something like that. But yeah, I'm excited for what that looks like. I think it makes a ton of sense just given how
00:12:50
Speaker
you know, how popular platform engineering is and how people want to get bootstrapped on that type of an environment. At the end of the day, if you're building straight off cloud, you still have to build a lot of it yourself. So yeah, good move for them, I would say. Congrats to everyone over there. I know. And I'm excited, right? We are seeing a lot of new funding rounds and some consolidation in our ecosystem. It's like,
00:13:18
Speaker
there is space for all of these vendors, and hopefully everybody can have a great exit, or a good exit, and, I don't know, we keep the products around. Yeah. Well, not everyone. This is the way it goes. But if you're in one of those looking to be acquired, I hope for the best. Awesome.
00:13:43
Speaker
Okay, yeah, let's introduce our guest and talk about what we have on the agenda today. Sure.

Interview with Christian Posta

00:13:49
Speaker
So, Christian Posta. He is the VP and Global Field CTO at Solo.io. Solo has been around for quite a while. They're sort of all up in the service mesh space with Gloo and Ambient Mesh and all these
00:14:05
Speaker
things that have been around for quite a while. They have great products, and we're really bringing Christian on to get his expertise in how service mesh has evolved over time, where it fits into the world today, maybe where it's going, and what challenges and hurdles it's overcome. So yeah, I guess without further ado, let's get Christian on the show.
00:14:28
Speaker
This episode is brought to you by our friends from Nethopper, a Boston based company founded to help enterprises achieve their cloud infrastructure and application modernization goals. Nethopper is ideal for enterprises looking to accelerate their platform engineering efforts and reduce time to market, reduce cognitive load on developers and leverage junior engineers to manage and support cloud mandated upgrades for a growing number of Kubernetes clusters and application dependencies.
00:14:58
Speaker
Nethopper enables enterprises to jumpstart platform engineering with KAOPS, a cloud-native, GitOps-centric platform framework to help them build their internal developer platforms, or IDPs. KAOPS is completely agnostic to the Kubernetes distribution or cloud infrastructure being used and supports everything including EKS, GKE, Rancher, OpenShift, etc.
00:15:22
Speaker
Nethopper KAOPS is also available on the AWS Marketplace. Learn more about KAOPS and get a free demo by sending an email to info at nethopper.io, or download their free version at mynethopper.com slash auth. All right. Welcome to the show, Christian. It's really great to have you here. Thanks for joining Kubernetes Bytes. Why don't you give our audience a little bit about who you are and what you do.
00:15:46
Speaker
Sure. Thank you all for inviting me here. So my name is Christian Posta. I'm the Global Field CTO at Solo.io. We can talk a little bit about what that title really means. But I've been here for five and a half years. Before that, I was at Red Hat. And I've been generally working in the integration and distributed systems space for the last 20-something years.
00:16:14
Speaker
The last 10 or so have been involved in Kubernetes, from the very early days at Red Hat, and then helping organizations modernize their infrastructure and their application architecture, and actually make progress. Because I've worked at big banks in the past where I would see
00:16:40
Speaker
you know, organizations invest a ton of money in new technology and just end up in the same place that they started. So, like, I want to see progress. The last five, actually the last seven years, have been working on what I think is the more interesting part of modernization: how do things connect with each other? And how do we secure that and all that?
00:17:03
Speaker
Yeah, I think that makes a ton of sense. I think just in the past year, we've been talking about multi-cloud and multiple clusters, like probably more so than the previous year on the show itself. And yeah, banks, I mean, they're slow boats to begin with. So any progress, I feel like, can be quite a win there. So yeah, I mean, tell me a little bit more about what field CTO really means, right? I've heard that term. I've seen it sort of like, you know, manifest in different ways at different companies, but I'm interested.
00:17:32
Speaker
So I guess different organizations, from what I've seen, treat the role of field CTO slightly differently. Some organizations might think of the field CTO as, you know, a general technologist who's really good at sales and reports to the CRO or something. Here at Solo, I would say, actually, when Idit Levine, who's the
00:18:01
Speaker
founder of Solo... when we were talking back in fall of 2018, I was like, all right, I'll come over, but I want to focus on technology. I really want to deal with customers. But when you join a startup, I was employee number 10 or something, you kind of have to do whatever the startup needs. You get a hat as a title. Yeah, you're guaranteed to get a lot more hats. So I've
00:18:30
Speaker
gone from helping build the field organization at Solo.io, to working with Idit on what customers see and what they're looking for, helping with the product, and, you know, some of the evangelization parts, public speaking parts. So I've gone around and worn a lot of different hats here.
00:18:59
Speaker
And it's just a catch-all title, I think, for what we came up with here. Sounds like a fun one, honestly. Yeah, it's been good. It's been a great ride. It's been a lot of fun.
00:19:10
Speaker
Awesome. Okay. So that was, I guess, a great tangent to take, but I want to come back to the topic for today, right? So Christian, I saw that you had a session at KubeCon Paris as well around service meshes.

Service Mesh Origins & Benefits

00:19:24
Speaker
So yeah, we're glad to have you on the show to talk about it. But I wanted to start with, like, a 101 description, right? If somebody was living under a rock, or if
00:19:34
Speaker
service mesh and networking wasn't their thing, like they were focused on storage, which most of our audience is: what is a service mesh, and how can it help users if they are building on top of Kubernetes? Yeah, good question. So there's the dictionary definition of what a service mesh is, right? And anybody can go Google that, figure that out. But I like to explain it by way of example.
00:20:03
Speaker
And I think some of the origins for why a service-mesh-like thing was created in the first place come from Google. And, you know, we have a lot of the folks who originally invented Istio, which is one service mesh that we might talk about today, as employees here at Solo, and watched what happened there. So they started off in
00:20:29
Speaker
Google Cloud and the API infrastructure that they built, they followed a very similar path that probably everybody follows when it comes to, hey, we're building APIs, we want to expose them to other parts of the organization, potentially partners, outside users, open up our API catalog, whatever, as a big centralized API management system. That's how they started.
00:20:57
Speaker
they started to run into problems. The more APIs they built, the more varied the APIs became. So not just REST, but gRPC and all these other formats. And given the criticality of these APIs, they realized that funneling all of them through a centralized system is not ideal. And they eventually did hit that not-ideal state, which was an outage.
00:21:28
Speaker
Big, massive outage. Probably to a halt, yeah. Exactly. And that affected them significantly. And so their approach was, and they already started doing this, all right, well, we'll just carve off the important APIs over here on this API infrastructure, and we'll put this over here. But then they realized that a lot of that still depended on shared pieces and dependencies that if those go down, you still lose everything. So what they said is, all right, well, why don't we just take this to the logical conclusion, which is,
00:21:57
Speaker
Basically, a little gateway and separate infrastructure for API management per service, almost per endpoint. That will give the best resiliency. We can tune configuration specific to specific APIs without worrying about noisy neighbors and all this stuff. We can scale, we can deploy it geographically, and so on.
00:22:22
Speaker
That's the approach that sort of pushed them in this direction. And when people started deploying Kubernetes and microservices and all this stuff, we started to see a lot of the same forces at play. So the answer is: it's a more optimized infrastructure for managing APIs at scale and doing things like securing them, rate limiting them, observing them,
00:22:51
Speaker
controlling their traffic, shifting traffic, doing load balancing and that kind of stuff, that you'd otherwise either put in a centralized API gateway or put directly in the applications. With a service mesh, you get the best of both worlds, basically. No, I like that, right? Because if I'm coming new into the Kubernetes ecosystem, services and different service types allow me to connect different parts of my distributed application together.
00:23:18
Speaker
The benefits that you just listed, the traffic routing, the observability, the security, I think that's where service mesh becomes really crucial. If you are getting started, I think you need to figure out what your answer is. I know Ryan and I did an episode a year back at this point, where we did our best from a Service Mesh 101 perspective. I'm looking forward to this deep dive.
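The gateway-per-service idea described above, pushing policy out to each endpoint instead of funneling everything through one central system, can be caricatured in a few lines of Python. Everything here is illustrative; these are made-up names, not any real mesh or gateway API:

```python
class ServiceProxy:
    """Illustrative per-service proxy: each service carries its own
    policy (rate limit, retries) instead of sharing one central gateway,
    so one noisy service cannot exhaust another's capacity."""

    def __init__(self, name, handler, max_requests=5, retries=2):
        self.name = name
        self.handler = handler          # the service this proxy fronts
        self.max_requests = max_requests
        self.retries = retries
        self.served = 0

    def call(self, payload):
        if self.served >= self.max_requests:
            return (429, "rate limited")        # limit applies to this service only
        self.served += 1
        for _ in range(self.retries + 1):       # transparent retries
            try:
                return (200, self.handler(payload))
            except ConnectionError:
                continue
        return (503, "upstream unavailable")

# Each service gets an independently tuned policy.
orders = ServiceProxy("orders", lambda p: f"order:{p}", max_requests=2)
print(orders.call("a"))   # (200, 'order:a')
print(orders.call("b"))   # (200, 'order:b')
print(orders.call("c"))   # (429, 'rate limited')
```

In a real mesh this policy lives in sidecar or node-proxy configuration rather than application code; the point is only that limits and retries are scoped per service, so one misbehaving API can't take a shared gateway down with it.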
00:23:42
Speaker
Yeah, and I think that's a really good segue, Bhavin, so appreciate it. That's the next question I had, which is, you know, service mesh has been around for quite some time. I think with the advent of containers and microservices and more people using those architectures, we started seeing sort of the pain, as you described in that case, and needing a solution for

Service Mesh Evolution

00:24:05
Speaker
it. Service Mesh has been here, I think. I don't know the amount of years, you probably do, but
00:24:10
Speaker
how has Service Mesh sort of evolved over time since when it showed up in Cloud Native today, I guess? Well, there's been an evolution, I would say, on a couple of different levels. The first and probably most obvious is how it's built. Because originally,
00:24:40
Speaker
So the first official service mesh that we think about in this space was Linkerd. Yeah. And they came out with a model that's sort of a shared node proxy model. Basically, they took the Twitter Finagle libraries, wrapped them into a container, and said, all right, everybody funnel your traffic through this. Deploy that on every node, and then Linkerd will connect that stuff.
00:25:10
Speaker
But that had some drawbacks: from a security standpoint, from a noisy neighbor standpoint, from an upgrade and operations standpoint, all this stuff. And so, you know, shifting to sidecars was a natural evolution at that point. Again, it sort of mirrors the original story that I told you about how Google did it; that is exactly what they did. And then things like Istio came out, which was probably the first sidecar
00:25:39
Speaker
service mesh, announced in 2017, I think May 2017. But Istio came out and said, here's a bajillion features, you're probably going to need them. And that was very complex. And now we've seen evolution on both sides: the Linkerd side significantly simplified their implementation.
00:26:08
Speaker
Istio has significantly simplified its implementation. And we've gotten to the point now where, just this week, Istio released 1.22, which is significant because it's the first sidecarless service mesh that is ready to be run in production.
00:26:32
Speaker
So that means you can get the benefits of mTLS and observability and traffic management, all that stuff, without co-locating sidecars. It makes it much cheaper to run, easier to upgrade, easier to onboard, etc. We can talk more about that. But the first part is this
00:26:52
Speaker
simplifying of the implementation of service meshes to really focus on exactly the pieces that people need first, and then adopt the rest of the stuff. Otherwise it gets too complex. Anything networking, if you just throw everything that networking can do at people, is too complex.
00:27:14
Speaker
Yeah, I hear you there. And now, is all this change, or the evolution, has it been sort of a catalyst because of the adoption of service mesh? Like, people started to adopt it and realized maybe they're only using certain features? That's the second piece. First of all, back when Istio and Linkerd and all these things started to come out in 2017,
00:27:44
Speaker
people were not ready for that piece yet, right? They were still going through, how do we containerize? I remember at Red Hat, it was, well, how do we lift and shift? And then when we do that, we have tools already that do deployment, for example.
00:28:02
Speaker
So let's try to use this IBM tool, I forget the names of them. And then we realized, okay, we can't really do that. We should automate things a little bit better and tie things into GitOps. Sorry, that was sort of a new idea then. Then it was observability. People got to, all right, now we have our things we can deploy, now we've got to understand what's going on. Our previous generation of tools didn't fit that well. They really monitored based on IP address and all this stuff, which doesn't fit real well in Kubernetes.
00:28:31
Speaker
So a new generation of tooling came out to better fit that. Now, this last piece: when you go down this path, eventually you're going to hit the networking and security pieces, but you kind of have to go down this path first. So when Istio and the service meshes came out in 2017, people weren't down that path yet. This idea of platform engineering wasn't really a thing.
00:28:57
Speaker
But now, as we've matured, we're seeing, and I would say here at Solo, from a commercial standpoint, we've seen different waves, spikes of very significant interest in service mesh. And I think we're right now at another spike. And I think things like Istio Ambient, the sidecarless
00:29:25
Speaker
mode, significantly lower the hurdle for adopting service mesh. So we're seeing a pent-up demand, as well as a spike anyway, but a pent-up demand for what that might look like. So then we'll see from here.
00:29:41
Speaker
Gotcha. So, Christian, one follow-up question, right? And I love that you already started talking about platform engineering there, because that's where I want to go next. But we spoke about the per-node deployment that Linkerd started with, and then the sidecar mode. I'm glad that you shared that this week Istio is basically going the ambient mesh, or sidecarless, mode. So where is all this functionality provided, right? Like, is it running as a pod in the same namespace?
00:30:07
Speaker
Where does the service mesh functionality go if it's not in a sidecar? Oh, from an implementation standpoint? Yeah. That's a good question. So I mentioned that some of the previous implementations of service mesh started with a shared node-local proxy. Now, the challenges around that stem primarily from the fact that
00:30:35
Speaker
you don't know which workloads are going to be running on the nodes. And users want to configure the mesh and do all kinds of traffic routing, splitting, regular expression matching, stuff that, if there are conflicts on a particular node, because you don't know what's going to run at runtime, you're going to run into noisy neighbor problems, starvation problems. And we've seen this time and time again; it's not theoretical. We've seen it.
00:31:04
Speaker
And the more that you allow users to configure the mesh for that kind of functionality, the more problematic it's going to be. So what we did with Istio is we said, all right, we're going to run a per-node proxy, but it's not going to do any layer seven stuff, nothing that the user can configure. That drastically simplifies what that implementation needs to do.
00:31:31
Speaker
All that shared proxy, that little agent, needs to do is open a connection and establish mTLS. That's it. And that forms the foundation for what the mesh ends up doing. Now, if you want layer seven stuff, we have another concept called a waypoint proxy where, you know, once traffic goes into the mTLS tunnel,
00:31:56
Speaker
it can be forced through the layer seven proxy, which can live somewhere else. It doesn't matter where it lives. And so we get around some of those noisy neighbor problems, eliminate some of those problems. We do end up using a shared piece of technology, a shared agent, but it's no different than what your CNI is doing. It's no different than what the kubelet is doing. You have some shared component, but it's so simple.
00:32:23
Speaker
And it's so focused on one little thing to do, highly locked down and hardened and all that. And we feel that that is an appropriate architecture to get the benefits that we then get with Istio Ambient.
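Editor's note: the split Christian describes, a per-node L4 agent plus an optional waypoint proxy for L7, maps to just a couple of manifests in Istio's ambient mode. A minimal sketch; the namespace name `my-app` is a placeholder:

```yaml
# Opt a namespace into ambient mode: its workloads get mTLS via the
# per-node L4 agent (ztunnel), with no sidecar injection or pod restarts.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                        # placeholder namespace
  labels:
    istio.io/dataplane-mode: ambient
---
# Optional waypoint proxy, deployed only when layer-7 features are
# needed (HTTP routing, L7 authorization). It runs as its own
# deployment, not on the workload's node.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: my-app
  labels:
    istio.io/waypoint-for: service
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
```

The label alone gives the L4 mTLS foundation; the waypoint is an opt-in second step, which is the "you don't pay for L7 unless you use it" point made above.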
00:32:38
Speaker
Gotcha. And I like that you split layer four and layer seven functionality into different things, because not everybody would want everything, right? And that's one of the big observations and big learnings that we've taken away from Istio. Because like I said, in 2017, when it was announced, it was traffic splitting and A/B and canary routing and mTLS and a whole bunch of stuff. When we realized,
00:33:06
Speaker
Actually, I remember at Red Hat, we did a survey of some of the initial adopters that might be interested in Istio. In my mind, I thought, safe deployments, rollouts, canary releases, blue-green deployment, I thought that would be the number one use case. It wasn't, it was security. That was then and still today, when people look at Service Mesh, the number one
00:33:30
Speaker
you know, first use case that they want to adopt and use it for is the security pieces: mTLS, mutual authentication, that kind of stuff. That's good to know. I think we need to make a note of it, Ryan. Okay, switching gears to platform engineering, right? I know you already gave us a good segue. We spoke about how different personas exist and this, like,
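Editor's note: the security-first use case mentioned here is usually only a few lines of Istio configuration. A sketch of a mesh-wide strict mTLS policy, assuming the conventional `istio-system` root namespace:

```yaml
# Require mTLS for all workload-to-workload traffic in the mesh.
# Placing the policy in the root namespace makes it mesh-wide.
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```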
00:33:51
Speaker
Everybody shouldn't be like, we spoke about it from an application perspective,

Role of Service Mesh in Platform Engineering

00:33:55
Speaker
right? You don't have to build in the logic in the application layer or in the infrastructure layer. I think I want to reframe that to a question around like a personas, like inside platform engineering, you have a platform engineering team that's focused on building IDPs and providing a set of services to the end developers. How do you see service mesh fit in? Is it the platform engineering team's responsibility or once
00:34:17
Speaker
they provide cluster-as-a-service functionality, is it on the developers and the application owners to deploy service mesh and configure all those rules? Okay, good question. In my experience, so first of all, I've seen both models. In my opinion, if I'm working with a customer right now, I would say do the former, not the latter.
00:34:47
Speaker
Things like service mesh... well, the idea that as a platform team, I'm going to put together a Kubernetes cluster and just say, hey, developers, go have at it? I think that's a really bad idea. And like I said, we do see this, because it stems from the idea that, hey, well, you build it, you run it. And we've seen that taken almost to too much of an extreme.
00:35:16
Speaker
And so I would suggest the former, which is your end developers, first of all, shouldn't know anything about a service mesh. They don't really care about that, and they shouldn't. What they want, so let me give you a good example, because I kept referring to service mesh and security as the top requirement. And TLS, when people hear that, they think, oh, the encryption part.
00:35:43
Speaker
We'll get everything encrypted, check some boxes on compliance, and that is important. But there's another part that developers might care about that the platform could offer and make it simpler for developers. So that is today, I know a lot of organizations, when they initiate service to service communication, they need to authenticate. So not the user comes in, they sign in, do their OAuth stuff, whatever, and that's all good.
00:36:13
Speaker
But for service to service, what they end up doing is passing some sort of JWT token or API key or something that says, hey, service A is talking to service B. So I'm service A, here's my JWT token. Service B, when you get it, you know that I'm service A, because I'm giving you this token. There's a lot of challenges to doing that right. I've written some blogs and given some talks on that, but in short,
00:36:41
Speaker
That token is the secret material. You're putting that over the wire. Somebody captures that somehow, or it gets logged somehow, and anybody can replay it. There are things you can do to mitigate replay, but now it's on your developers to have extremely good and tight JWT handling hygiene, so that the audience is set correctly, the expiration is set correctly, that you renew them, you use a different JWT for each service, et cetera, et cetera. That gets complicated. Now, if you can say that, hey,
00:37:11
Speaker
We have mutual authentication baked into the platform. By virtue of mTLS (and Istio also uses SPIFFE, which is a workload identity framework), we get automatic mutual TLS. You don't have to pass these JWT tokens back and forth to prove your service identity. That simplifies things quite a bit for developers. And they should know about that capability. They don't have to know about the service mesh or how all that works. So that's one example
00:37:40
Speaker
of how the platform can offer a place for developers to deploy their APIs, deploy their services, simplify some things, and not have to know about how all that stuff works.
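Editor's note: to make the SPIFFE-identity point concrete, here is a rough sketch of an Istio AuthorizationPolicy that allows service A to call service B based on its mTLS-derived SPIFFE identity instead of a JWT. The namespace and service-account names are placeholders:

```yaml
# Allow only workloads running as the service-a service account to call
# service-b. The principal is the SPIFFE identity proven by mTLS, so no
# bearer token ever crosses the wire.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-service-a
  namespace: my-app          # placeholder namespace
spec:
  selector:
    matchLabels:
      app: service-b
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/my-app/sa/service-a"]
```

The developers never see this policy; the platform can generate it, which is exactly the division of responsibility described above.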
00:37:55
Speaker
Sort of a sweet spot, you know. You mentioned over-rotating, giving developers too much responsibility, and the JWT token thing is a perfect example, because we all know they're not gonna set their expirations. Probably guilty of it myself.
00:38:14
Speaker
But, you know, I guess the platform engineering team comes back to set up the secure-by-default posture. And if you want to do more, if you're that kind of developer, maybe you can enable those things too. Yes, absolutely. Yeah. You've got to find the right balance. You don't want to swing in either direction to an extreme. Yeah. Yeah.
00:38:37
Speaker
And Christian, this might seem like a really basic question, right? Let's say I want to extend this platform engineering conversation. Okay, I have a platform; through my CI/CD workflows, a developer deployed an application, and they didn't have to worry about authentication or security for communication between different parts of the app. If I'm an administrator or the platform admin, right, who is configuring these policies and enforcing security or traffic routing,
00:39:04
Speaker
How do I know? Like, how does a developer tell me, okay, these are the services that need to talk to each other? Do they define it through a Kubernetes construct, a YAML file, a service mesh object? How does a developer, without having to jump on a Zoom call, tell me, okay, these are the components that I want to communicate with each other? Yeah, so that I would say is,
00:39:30
Speaker
You know, the service mesh itself is not overly opinionated about how that gets done. What we end up seeing is a couple of different things. Either people have their own configuration files already, in their own formats, and they might specify dependency-type information there. And then that gets pulled into CI/CD and everything gets built off of that.
00:39:56
Speaker
But some organizations say, nah, you have to talk to the security team to make sure that that is approved first. Maybe they'll take that and they'll have some automation to get approval from the security team and so on. Some others, they built in their IDP, they built a workflow that automates some of the approval processes and things to allow certain communications.
00:40:25
Speaker
And then those at runtime, at execution time, whatever those, whether it's your proprietary config or a custom workflow or whatever it is, can then generate all of the Istio configs that are necessary to implement those policies. So generally the policies are organization specific, how they agree on those policies, and then the enforcement is implemented by Istio.
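Editor's note: as a rough illustration of that "declare dependencies, generate Istio configs" workflow, here is a hypothetical sketch. The dependency-list input format is invented for illustration; only the generated documents follow the shape of Istio's AuthorizationPolicy resource:

```python
# Hypothetical sketch: turn a developer-declared dependency list into
# Istio AuthorizationPolicy documents, one ALLOW policy per server.
# The (client, server) input format is invented; a real IDP would read
# this from its own config files or approval workflow.

def policies_from_dependencies(namespace, dependencies):
    """dependencies: list of (client, server) service-account pairs."""
    # Group clients per server so each server gets a single policy.
    servers = {}
    for client, server in dependencies:
        servers.setdefault(server, []).append(client)

    policies = []
    for server, clients in servers.items():
        policies.append({
            "apiVersion": "security.istio.io/v1",
            "kind": "AuthorizationPolicy",
            "metadata": {"name": f"allow-{server}", "namespace": namespace},
            "spec": {
                "selector": {"matchLabels": {"app": server}},
                "action": "ALLOW",
                "rules": [{
                    "from": [{
                        "source": {
                            # SPIFFE-style principals derived from mTLS identity.
                            "principals": [
                                f"cluster.local/ns/{namespace}/sa/{c}"
                                for c in sorted(clients)
                            ]
                        }
                    }]
                }],
            },
        })
    return policies

# Example: frontend may call cart and catalog; cart may call catalog.
deps = [("frontend", "cart"), ("frontend", "catalog"), ("cart", "catalog")]
for p in policies_from_dependencies("shop", deps):
    print(p["metadata"]["name"],
          p["spec"]["rules"][0]["from"][0]["source"]["principals"])
```

The point is the pipeline shape, not the code: the organization owns the declarative input and approval step, and the enforcement artifacts are mechanically generated from it.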
00:40:54
Speaker
OK. Thank you. That makes sense. I want to come back to a point you made earlier, which was that this world of sort of networking and service mesh blends and overlaps in certain scenarios, and to a very good point, because they are related. But one thing Bhavin and I have done in recent episodes
00:41:18
Speaker
is talk to other networking companies as well. And the idea of connecting resources outside of Kubernetes comes up more. So like an external database or another cluster or those kind of things, sometimes it's done over tunnels and those kind of things. So does Service Mesh also do these things, work with the networking pieces? What is the Service Mesh's role in those scenarios? Yeah, so it works with
00:41:47
Speaker
the other networking pieces. So layer three connectivity, so the IP layer connectivity, it doesn't handle that. It will live on top of that. And connecting into VMs is a very important use case because even the folks who use Kubernetes have probably a lot more stuff running outside of Kubernetes.
00:42:15
Speaker
Istio has been able to support, for example, Istio has been able to support VMs for a while. There've been some, I would say over the last year or so, maybe a year and a half, we've smoothed out some of the integration with VMs because we see this use case a lot more. So connecting with VMs, definitely first-class use case, the mesh supports.
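Editor's note: onboarding a VM into the mesh typically starts with an Istio WorkloadGroup, a template from which each connecting VM is registered. A sketch; all names are placeholders:

```yaml
# Template for VM workloads joining the mesh. Each VM registers as a
# WorkloadEntry stamped out from this group, with an identity tied to
# the Kubernetes service account below.
apiVersion: networking.istio.io/v1
kind: WorkloadGroup
metadata:
  name: legacy-app
  namespace: my-app                 # placeholder namespace
spec:
  metadata:
    labels:
      app: legacy-app
  template:
    serviceAccount: legacy-app-sa   # placeholder service account
    network: vm-network             # placeholder network name
```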
00:42:43
Speaker
To do that, you've got to integrate with SPIRE for the workload identity stuff, and we see a lot more people starting to adopt SPIRE and workload identity. Those pieces are converging very nicely. Maybe it's difficult to run a full-blown sidecar, for example. This is where Ambient comes into the picture and significantly simplifies
00:43:09
Speaker
extending the reach of the mTLS piece, because now instead of a full-blown sidecar, you can just run the little mTLS agent in more places, I would say, and then connect into the mesh easier. The last bit is, well, what if you can't run an agent? What if you can't extend the mesh all the way out to, you mentioned, a database or something? But there's still,
00:43:38
Speaker
Or a use case that's coming up a lot more is around making calls out to SaaS providers, for example, OpenAI or, you know, LLMs. So controlling and managing egress traffic has always been an under-the-radar type use case the service mesh helps with, and things like
00:44:09
Speaker
controlling usage, rate limiting, all that stuff, you know, is straightforward. But things like credential hiding, right? If you're giving credentials to all these different clients and services that need to call outside APIs, you know, those could get compromised. The developers could take those keys when they leave to go to a different company, et cetera, right? So can you create a secure tunnel from the apps, from the APIs internally,
00:44:35
Speaker
to an egress point that then handles credential mapping, right? So that's a very important use case. Things like, even basic stuff, like when we're talking to an external database that has to use TLS, and the traffic has to come from a known network endpoint, a known IP address. So forcing the traffic through an egress point at layer four and layer seven,
00:45:02
Speaker
you know, addressing the mTLS pieces and all that stuff, then the developers don't have to worry about that. So yeah, both expanding the mesh out as far as you can get it, and then managing the egress traffic from the mesh to external endpoints, all very important pieces. Yeah.
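Editor's note: a sketch of the egress side of this, an Istio ServiceEntry that registers an external SaaS API with the mesh so traffic to it can be forced through an egress point and policed. The host is just an example:

```yaml
# Register an external API with the mesh so egress traffic to it is
# visible and controllable (routing, rate limiting, TLS handling).
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-llm-api
spec:
  hosts:
  - api.openai.com        # example external host
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS
```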
00:45:24
Speaker
I think that's super useful, right? As Ryan said, there are people that are running part of applications on virtual machines that are still traditional virtualization stacks.
00:45:34
Speaker
Maybe if I'm running in AWS, I'm using RDS or something, right? So this gives me control over how my applications are talking to these components external to the Kubernetes cluster. Christian, you brought up virtual machines running outside. What about virtual machines running inside Kubernetes? Like I know with KubeVirt and OpenShift Virtualization,
00:45:55
Speaker
even your previous employer Red Hat is really pushing, or sharing stories around, how OpenShift Virtualization is successful. Can service mesh work with that today if people are evaluating it? Yeah, so this is an area that admittedly I'm not super deep in, but I did ask.
00:46:15
Speaker
So John Howard, who's the number one contributor to Istio, is a Solo employee now. I asked him, and he said that he knows of people that have been successful running that. And there are some specific settings inside of Istio today, so you could take it and run it today with KubeVirt. But that's the extent that I can comment on it right now. I feel like with the agent that you explained before, it's
00:46:41
Speaker
Essentially like a virtual node; for virtual machines, you could see that agent running there. Or even, you know, when you were talking about sort of extending the cluster before, I can easily think of end use cases, right? Whether that's some sort of compute or more containers or those kinds of things. But I feel like that
00:47:00
Speaker
fits into the scenario. Yeah, absolutely. And that KubeVirt use case that I was referring to was with sidecars as it is today. Ambient changes the options there too.
00:47:12
Speaker
Yeah, because, like, I've deployed KubeVirt VMs, but I don't know how to add a sidecar to a virtual machine object, right? Like, KubeVirt deploys a pod, which spins up a VM object. So that would be interesting. But again, as you said, with sidecar-less and ambient mesh mode, that's awesome. I don't have to modify my VM object; I can keep migrating them over from RHV or OpenStack or VMware, and then just use the service mesh that I've already deployed.
00:47:40
Speaker
For folks that are interested, like I said, I can't speak too knowledgeably about it here, but if they're interested, come to the Istio Slack. Like I said, the people have got it running successfully. There's a couple of different options for doing it. The configs are all there. We can help.
00:47:56
Speaker
The power of community. Yeah, yeah, yeah. Well, we'll give you more of a chance towards the end as well to, you know, throw any other resources out there. But that's a good idea for those specifically interested in virtual machines.
00:48:14
Speaker
Speaking of other stacks or other integrations, you know, when we spoke before this, you had mentioned the CAKES stack, and I had never heard of it before.

Introduction to CAKES Stack

00:48:26
Speaker
So I guess we'll start with: what is the CAKES stack? Why does it exist? And where does service mesh fit in there? Good question. So earlier, I was talking about, you know, sort of the pieces, the, what do they call it, I guess the
00:48:46
Speaker
Maslow's hierarchy of needs, sort of, you know: your containers need observability, and eventually you're going to need networking. The pieces that we see (we as in Solo, working with our customers and, you know, taking things to production), what we end up seeing as sort of vital for the networking story is
00:49:12
Speaker
the layers that we identified in the CAKES stack, C-A-K-E-S. Starting with... okay, well, actually, if you're really starting, you're starting with Kubernetes. That's the K part, right? Yeah, gotcha. Kubernetes: you're modernizing, you're building platforms, Kubernetes is the foundation. Now, you need the containers to be able to communicate with each other. They need to send packets back and forth.
00:49:42
Speaker
So you need a CNI; that's where the C comes in. And just like the LAMP stack, you can pick one of the pieces. The P, for example, in LAMP, originally I think was Perl, but then it could be PHP, it could be Python, it could be whatever, right? It's just that those pieces are known to integrate well and work well together.
00:50:04
Speaker
So in the CAKES stack, the C is for CNI, but the CNI could be a specific one like Cilium or Calico or another one, right? It doesn't even have to start with C. But so you've got Kubernetes, you have some CNI to enable layer three networking and sort of coarse-grained network policy in the CNI, but then you need, you know, more layer seven, application layer connectivity.
00:50:34
Speaker
You need mTLS, you need load balancing. For example, I don't mean just, hey, one endpoint's faster than the other, but I mean smart, zone-aware load balancing. Hey, I'm going to keep things in the same zone and spill over to another zone or another region only if I need to, right? Control costs in terms of availability. And so that's where the mesh comes into the picture. And in the CAKES stack, okay, K is Kubernetes, C is CNI, and A is for Ambient.
00:51:05
Speaker
Because we want a low-profile, you know, cost-conscious way of solving these problems. And that's where the A, where Istio Ambient, comes into the picture. The E, so C-A-K-E, the E is for Envoy proxy. But what that represents is really, you know, an appropriate gateway or ingress for traffic into a cluster. Because you need that too:
00:51:33
Speaker
an API gateway or just an ingress to get traffic into the cluster. Envoy proxy was purpose-built for doing that. There exist other reverse proxies, and for one reason or another, Envoy is best suited, we think, for that ingress layer.
00:51:55
Speaker
For that piece, can you basically use the Gateway API that got built? Yeah, exactly. So, you know, the Kubernetes Gateway API, which went GA back at KubeCon Paris or so, is a good abstraction for driving an ingress or a proxy or a gateway, for which, like I said, we believe Envoy is best suited. The last bit is S, so CAKES ends with S.
00:52:24
Speaker
And that stands for SPIFFE and SPIRE, so the workload identity pieces. Like I said, these all kind of work nicely together. And I think there are two or three principles that they all kind of form around. Number one is being driven by declarative configuration, extremely important in this cloud native way of working and in our GitOps-type workflows, et cetera.
00:52:52
Speaker
The second is standard integration points. We're not saying, like, if you adopt this, it's going to be a complete panacea, right? We know that there's a big chunk of things we can do.
00:53:07
Speaker
But for other things, we can call out using standard integration points: things like OpenTelemetry integration for telemetry collection, calling out to different authentication systems, maybe using OAuth or OIDC, calling out to rate limiting, calling out to OPA, Kyverno, or your own custom policy agents. So knowing that
00:53:32
Speaker
we're not trying to solve everything, because we know that there's a lot of uniqueness and nuance to how you implement networking, but providing standard integration points. And we think that's different from what the proprietary vendors have done, and much more flexible, much more beneficial.
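Editor's note: to tie the E of CAKES to the Kubernetes Gateway API discussed above, a minimal ingress sketch. The gateway class and backend names are placeholders for whatever your Envoy-based implementation provides:

```yaml
# Envoy-based ingress driven by the Kubernetes Gateway API.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ingress
  namespace: istio-ingress
spec:
  gatewayClassName: istio        # an Envoy-backed gateway class
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
---
# Route external traffic to an in-cluster service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
  namespace: my-app              # placeholder namespace
spec:
  parentRefs:
  - name: ingress
    namespace: istio-ingress
  rules:
  - backendRefs:
    - name: my-app               # placeholder backend service
      port: 8080
```

Because the Gateway API is a standard, the same route definitions work across conforming implementations, which is the "standard integration points" principle in practice.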
00:53:53
Speaker
So CAKES is sort of the open source stack of networking components that come together to solve modern networking problems. And, like, since you're part of this community, right? Has somebody already taken a domain like cakes.org or cakes.ai? I don't know. That would be fun. That is a very good question. I am not sure. Okay. We'll just settle for a really delicious-looking sticker, maybe.
00:54:24
Speaker
So at the end of the day, this CAKES stack really helps organizations, based on what you've seen work really well together, become sort of more efficient in using this type of standard stack, even though they can kind of mix and match a little bit here and there.
00:54:41
Speaker
That's beneficial to the organization, because if it comes with a lot of things that work well together, then you're kind of ahead of the game, hopefully. They work well together and they put you on the path to solving the problem. Like I said in the beginning, you can take old technology and try to fit it in, not old in a bad way, but old in that it wasn't built for this certain problem.
00:55:09
Speaker
It was built with different problems in mind. And you can take that stuff and try to shoehorn it in, but you're going to run up against challenges because of the mismatch. And these pieces were built for this era. They were built for this type of environment, and they would better fit the problems that you're trying to solve in networking today. Gotcha. Got it.
00:55:35
Speaker
And we can't wrap up a podcast these days without talking about AI. So here's the question for you, Christian.

AI in Service Mesh Monitoring

00:55:45
Speaker
Like, I'm not talking about whether you're building an LLM inside Solo. This is more around how AI workflows and AI tools are helping improve service meshes, right? So, like, if I have a service mesh deployed, do you see, or are there already, projects or work being done,
00:56:05
Speaker
or do you see in the future where I have an AI agent per cluster or per my multi-cluster environment and it is monitoring for like spiky traffic like components that were not supposed to talk to each other and now they're sending out requests like somehow helping me make my environment more secure or make it better like do you see that happening from an AI ops perspective? So I see
00:56:32
Speaker
infrastructure like the service mesh as being a crucial sort of point of monitoring, right? You can pull back a ton of telemetry about what services are talking with what other services, which ones are failing. I just did a deployment, what's been the impact. And if you pull that information back,
00:57:00
Speaker
You can put it into a model and start to predict anomalies or predict things about that. That's one angle. The other is if you can predict and you can get good at predicting that, then the service mesh also represents a point of control.
00:57:24
Speaker
And, you know, being able to then predict and make changes and control, do, you know, traffic pattern analysis: oh, this doesn't look right, this is about to fail, so go flip these other switches and maybe contain this thing, right? You can work toward
00:57:49
Speaker
becoming smarter about how you deal with the infrastructure and availability and cost and all of these other things. So having that connectivity tissue there from an observe and control standpoint is extremely powerful. And maybe you have like a service mesh digital assistant, we'll call her service mesh Sally, and you can ask her, you know,
00:58:18
Speaker
Which one of your services is making the most traffic, or something like that? Service Mesh Sally, that's a mess. Nice. Do you see these AI agents, or Sally, for example, taking over for you and being proactive at blocking things that might happen? Or do you still see them generating events, and then a human has to take over and fix things?
00:58:48
Speaker
I think there are certain things where it can help humans. Running and debugging and understanding what's happening in a distributed system, and all the things that can go wrong, is fairly complex, or very complex. I think AI can help there, but I think the humans still need to be in the loop.
00:59:17
Speaker
Because if something decides, all right, we're going to change the traffic and do all this stuff, and you take something out, that's going to be a big problem. If for nothing else, the executives want to be able to point at somebody and say, hey, you did that. Just kidding about that. But yeah, I think humans are still a very important
00:59:39
Speaker
cog in that big picture right there. I certainly hope so. Or at least I'd love to see some stats on how many times I've used the word human in conversation in the past year. I feel like we're talking about ourselves a lot more these days.
00:59:54
Speaker
Anyway, we do want to give you a chance to throw out some places where people can find you, get in contact with you, or find out more about some of the projects you mentioned, or even the Slack space you talked about or other community spaces. So if you have any, this is a perfect time to throw them out there. Sure. I appreciate that. So yeah, I'm on LinkedIn, I'm on Twitter. I frequently speak at some of these large conferences like KubeCon
01:00:23
Speaker
and others. Actually, there's a new conference in Boston from the InfoQ folks, I think at the end of June that I'll be speaking at. And yeah, generally the community access points like Istio, Istio's Slack, Solo's Slack. Yeah, those are the big ways. And I'm always happy to, people reach out
01:00:51
Speaker
Or, actually, I put my contact information up on a slide at various conferences, and then people come up afterward and say, hey, can I contact you? I'm like, yeah, that's exactly why I put my stuff there. Please contact me. Happy to chat in more detail on any of this stuff. But yeah, reach out anytime.
01:01:15
Speaker
So for locals who might be listening to this podcast, InfoQ Dev Summit Boston, June 24th and 25th, is that the one? That's the one. Yeah. Perfect. All right. We'll put a link to it in the show notes if you need more information as well. Yeah.
01:01:29
Speaker
Cool. Well, Christian, it was really a pleasure having you on Kubernetes Bites. I know the more we do these types of interviews and talks, the more I learn every day, and I probably have a giant running list of things that I've got to go research myself. But it was very good, and hopefully our listeners thought so too. So thanks for coming on the show. Absolutely. My pleasure. You know, great questions. Appreciated the interview here. Thank you.
01:01:57
Speaker
All right, Bhavin, that was yet another great conversation with Christian. I think I learned a lot. I feel like I've been out of the service mesh space for a while. I know we did the 101 episode; we'll link that. You know, I'm curious actually to remember what we covered in that episode. I can't really remember. But yeah, what were some of your takeaways from this one?
01:02:20
Speaker
No, I think, like, last time when we did this, right, I agree that it was a while back, and it was more of a 101 episode. We did cover the transition that you were mentioning. We covered, I think, ambient mesh, which was still kind of new back then. Right, we didn't get away from sidecars and everything. Yeah, we did talk about that.
01:02:38
Speaker
And we didn't even bring up Istio and its sidecar-less architecture versus sidecar-based architecture back in that 101 episode. So it was good to hear that even Istio now supports this ambient mode, where you don't have to worry about restarting your application pods just to inject a sidecar as the service mesh component in your application stack. So I think that was great news. What I took away from this episode was
01:03:05
Speaker
Christian, right, with his Global Field CTO title, talks to a lot of customers, and the perspective that he brought in around platform engineering, and how it's a mix, right? We can't shift left too much or go all the way back, right? We have to find somewhere in the middle where it's not all on the developers, it's not all on the administrators. Like, okay,
01:03:29
Speaker
Istio can be installed as part of the platform whenever you are deploying a cluster, but then the developers should be able to define their own policies. And I know we discussed that as part of the Istio configs: developers can define which services are or aren't supposed to talk to other services, things like that. So I think it is that shared responsibility model whenever people are building these different IDPs and integrating service mesh as part of that. Yeah, absolutely.
01:03:55
Speaker
Building on top of the whole talking-to-customers thing, I think those years and years of experience that Christian has had, both at Red Hat and beyond and at Solo, right? The whole idea behind seeing how customers are implementing it, seeing how customers are struggling, what things
01:04:16
Speaker
are too much, right? We talked about Istio initially, or Linkerd, doing too much, having too much in there, and really seeing how that's been boiled down over the years. And kind of coming back to the CAKES stack that we talked about, right? Having a CNI layer that you can technically choose, the Istio layer in ambient mode, Kubernetes, Envoy, and SPIFFE, right? I think
01:04:43
Speaker
it definitely shows the maturity of how it's evolved over time, both through seeing how much people really want to configure themselves (obviously platform engineering and the later pushes made that a bit more of a reality of what the quote-unquote golden path looks like), but also getting a stack that's a well-known working starting point with flexibility. And I think that speaks to
01:05:12
Speaker
the concepts behind platform engineering itself, because that's not a one-size-fits-all thing. You can't just throw out any platform engineering. There are still guardrails that you're defining, or maybe the platform is defining itself. And the stack itself is saying, here's a known working layer, or layers, I should say, that work well together and give you guardrails for where to start and all those things.
01:05:39
Speaker
Yeah, I think that's a lot to take away. I think that just shows where we've come since, you know, the early days of service mesh. No, and I agree, right? Like the CAKES stack, apart from it being a great sticker to have at the next conference. It sounds like a delicious sticker. I just think that's a good idea, Christian. It's just that these things have been kind of tested together by the community. So if I'm an organization looking to get into or implement service mesh, I know
01:06:04
Speaker
that all of these things have been kind of tested by the community. I don't have to go and figure it out on my own or figure out what other components I need. I think having a standardized framework definitely helps. That's why the LAMP stack that we discussed was so popular back in the day. So yeah, this is awesome. You said "back in the day" there. I mean, that's true. I guess it is a little bit back in the day.
01:06:25
Speaker
At least back in the day for me. I don't code anymore, right? So, like, at some point I did, but not now. Cool. Well, I hope you all enjoyed that episode. I think Christian had a lot of great things to say, and hopefully you'll go reach out to some of the communities. We'll, again, make sure to link those
01:06:45
Speaker
spaces he talked about. And, you know, if you're local to the area, Dev Summit Boston sounds like a cool place to go learn more and hear him talk if you want to. But yeah, as always, please share the podcast with your colleagues and your friends. Maybe not your family, if they're not into this kind of thing. But
01:07:07
Speaker
Join our Slack. We're always looking to interact with you there and get some ideas. I think one thing I'd love to hear is if you are currently, or have been, in a situation where maybe, you know, you were part of a team that over-rotated on the whole shifting-left thing, or maybe you have some ideas. I think that'd be a really cool conversation to have on this show, to give that perspective of sort of being in the weeds of it.
01:07:31
Speaker
Because we hear about it, we know it exists, and I think we're still trying to find the right place, and it's different at every shop. So come talk to us. I think that'd be a lot of fun. Cool. Well, that brings us to the end of today's episode, Bhavin. I'm Ryan. I'm Bhavin. And thanks for joining another episode of Kubernetes Bites. Thank you for listening to the Kubernetes Bites podcast.