
Redis on Kubernetes

S2 E12 · Kubernetes Bytes

In this episode, Bhavin interviews Brad Ascar, Principal Product Manager at Redis, focusing on all things containers and Kubernetes. The discussion dives into the high-level architecture of Redis, how to deploy a Redis cluster on Kubernetes using the Redis Operator, and some of the benefits and challenges of running Redis on Kubernetes. We also talk about how you can upgrade, scale, and manage your Redis clusters on Kubernetes, and how you can architect a geo-distributed Active-Active Redis cluster for your applications!

Show Notes: 

1. Redis Operator

2. Redis Use Cases

3. Redis on Kubernetes

News: 

1. Mirantis integrates Lens IDE with Docker Desktop

2. Coinbase adopts Kubernetes

3. Dynamic Kubernetes cluster autoscaling at Airbnb

Transcript

Introduction and Host Setup

00:00:03
Speaker
You are listening to Kubernetes Bytes, a podcast bringing you the latest from the world of cloud native data management. My name is Ryan Walner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.
00:00:29
Speaker
Good morning, good afternoon and good evening wherever you are. We are coming to you live from Boston, Massachusetts.

Redis on Kubernetes Overview

00:00:36
Speaker
Today is June 10th, 2022 and hope everyone is doing well and staying safe. Let's dive into it.
00:00:45
Speaker
So I think that was my best Ryan impression I could come up with. As you can clearly see, it's me, Bhavin, talking. We don't have Ryan for this week's podcast, but he will be back for the next one. So don't worry about it. It's just going to be me. But still, we have an awesome guest planned for you, where we learn all about Redis on Kubernetes. But before we dive into the topic for the podcast today, let's talk about a few news articles or a few blogs that came across
00:01:13
Speaker
my search history, I guess, and discuss what they are. So the first thing that I wanted to talk about was that Mirantis announced the integration of the Lens IDE for Kubernetes with Docker Desktop. So Lens IDE was built by a startup
00:01:32
Speaker
that Mirantis acquired, called Kontena, a year after it bought Docker Enterprise from Docker Inc. in 2019. So I think at this point, Lens IDE is already being used by a huge number of developers in the ecosystem, and just integrating it with Docker Desktop, making sure developers can create that sandbox, create that small cluster on Docker Desktop, and use Lens IDE along with it,
00:02:02
Speaker
is a great step to make sure the developer productivity goes up.
00:02:07
Speaker
So that was one. The second one is around Coinbase finally deciding to join the Kubernetes party. I didn't know about this, but a couple of years back, Coinbase published a blog post detailing why Kubernetes is not part of their technical stack, and why they felt that the problems that would come with adopting Kubernetes would outweigh any near-term benefits. And this was a blog that they published a couple of years back.
00:02:34
Speaker
Now they have a new blog on their website where they note that the Kubernetes ecosystem has matured. It's a more stable code base. The security concerns that they had have been reduced; there are better solutions out there, and there are managed services that they can rely on. Kubernetes definitely will help them with running things at scale, because I think from mid-2020, the peak of
00:03:04
Speaker
the pandemic, Coinbase has seen a huge increase in the resources needed to service the increased number of users that are trading any form of cryptocurrency. So I think this is an interesting blog around how they decided to adopt Kubernetes as part of their compute platform moving forward.
00:03:28
Speaker
So I think we'll keep an eye out for any future blogs where they talk about how the migration is going, or how the migration went and whether they were successful at it or not. So we can get some lessons-learned talk at maybe KubeCon North America, or next year at KubeCon Europe.
00:03:44
Speaker
But talking about migration to Kubernetes, I came across another article where Airbnb actually migrated to Kubernetes a few years back. I think it was three to five years ago that they started adopting Kubernetes, and they moved from manually orchestrating EC2 instances to running their applications on Kubernetes. And this blog specifically talks about how they
00:04:09
Speaker
thought about different cluster autoscaling strategies to make sure that their applications always have enough capacity to run. They describe how they went through different stages: the first stage was running homogeneous clusters with manual scaling; the second was running multiple cluster types which are independently autoscaled; and the third stage was running heterogeneous clusters with each node group
00:04:35
Speaker
being auto-scaled, what the challenges were with each stage, and how all of this led to them building their custom gRPC expander to help them with cluster autoscaling. So it's a good blog around how big organizations like Airbnb are running Kubernetes in production and how they are
00:04:54
Speaker
managing not just their capacity, but also the cost that's associated with running a Kubernetes cluster.

Redis Deep Dive with Brad Ascar

00:05:01
Speaker
So we'll have those links in the show notes. But with that, let me introduce our guest today. So today our podcast is going to be around Redis on Kubernetes. And to help us learn more about the topic, we have Brad Ascar, who's a principal product manager
00:05:18
Speaker
at Redis and he's focused on all things containers and Kubernetes and I'm really looking forward to this conversation. So let's get Brad on the board and introduce him. Thank you so much Brad for joining us on this podcast. We are really looking forward to talking about Redis on Kubernetes. Why don't you take a moment and introduce yourself, what you do at Redis Labs and what your focus is.
00:05:43
Speaker
Thanks, Bhavin. So my name is Brad Ascar. I'm principal product manager here at Redis. I'm responsible for all things Kubernetes and containers here. Awesome. That's great. I think we found the right guy. So let me start with the higher-level question first: what is Redis, and how is it different from all the different databases out there, or the databases that we have been talking about on previous episodes of this podcast? Yeah. So Redis is a few things, actually. Redis is an open source database project.
00:06:13
Speaker
Redis stands for Remote Dictionary Server, the first couple letters of each word, as a lot of people don't know where the name came from. And it started out as a better networked way of doing what memcached did, right? So that was kind of the initial use case. Beyond that, nowadays it's an in-memory, sub-millisecond, highly scalable database.
00:06:31
Speaker
Redis is also a company. So you said Redis Labs; Redis is now the name of the company, and it expanded the work of the open source project, basically, to make an enterprise version, right? So the product from us is Redis Enterprise, all the things that enterprises care about: very high scale, security, easier methods of installation, all of those things, right? So that's really the Redis Enterprise product.
00:06:56
Speaker
People think about us for caching. That's what I thought about before I came to Redis. That's how I had used it and architected it in solutions personally in various jobs that I had. And that's one thing that we do, but it's so much more than that now. It's a multi-model database. And multi-model means that we do not just the caching types, strings, bitmaps,
00:07:21
Speaker
and things like HyperLogLogs, geospatial indexes, things like that, but also document database features like Redis JSON, and Redis Graph, Redis Time Series. And then beyond that, we have modules. So those last few were modules. We also expand modules to do things like Redis Bloom for Bloom filters, if you know what those are.
00:07:43
Speaker
Redis Search, so you can do searching on all these different data types. Redis Gears for doing cluster-wide programmatic access close to the data, so you don't have to pull the data out to somewhere else. You can actually do the processing of the logic there in the database and do it all in milliseconds.
00:08:03
Speaker
It's really, in the end, all about speed, and you access it all through the same API. So instead of having to have a different API for each kind of document, each kind of database type, or each data model, you actually can do it all through the same API. And in the end, it's all about speed. Redis is speed freaks. I never realized how much of speed freaks they were until I came here, and it's all about the speed.
00:08:25
Speaker
Gotcha. I'm glad that we invited you on this podcast. And I did some research, because just looking at, okay, what Redis is, I always sort of thought, it's an in-memory database, okay, let's just move on. It was great to learn more about everything that's involved as part of Redis. And I didn't know Redis stood for Remote Dictionary Server. So that's another key takeaway, I guess. But
00:08:50
Speaker
Talking about Redis Enterprise, can we get a 10,000 foot view of the architecture? Let's keep Kubernetes out for a minute. Let's just talk about how Redis actually works.
00:09:01
Speaker
So from a high level, and this is easy because Redis actually existed before Kubernetes, Redis Enterprise existed before Kubernetes, right? So at a high level, Redis has clusters, which sounds familiar and is what you would expect from a modern database. It also has availability and durability, rack-zone awareness for containing blast radius in outages, HA, geo-redundancy. And while it's a Kubernetes talk,
00:09:28
Speaker
From an architecture standpoint, we do this in a bunch of different ways. We do it as something that you can install on virtual machines or physical hardware. We do it in Kubernetes, of course, that's why we're talking here. We also host Redis as a hosted service. We're a hosted service database company as well, so a bunch of different ways. But from an architecture standpoint, it's really all about the cluster and it will sound very familiar as we talk a little bit more about Kubernetes. The design is very similar because you have similar needs.
00:09:58
Speaker
Okay, gotcha. Hosted service of Redis. That's something I didn't know about. Okay, perfect.

Deploying Redis on Kubernetes

00:10:03
Speaker
So let's move on to the obvious question, right? Why Redis on Kubernetes, and how do we get started with deploying Redis on Kubernetes?
00:10:11
Speaker
Yep. So Redis on Kubernetes, of course, works anywhere that Kubernetes works. That's one of the advantages of doing it in Kubernetes, right? It works there. So whether you're hosting your own, or your managed Kubernetes in AKS, GKE, EKS, all of the various hosted platforms, wherever. And the reasons for doing it in Kubernetes and the benefits are several. And there are challenges too, right? So as I described before,
00:10:36
Speaker
it has its own control plane and its own data plane. Sounds very familiar, and for the same exact reasons that Kubernetes uses these patterns, right? These are known patterns. Kubernetes didn't invent these patterns. These are patterns that became quite obvious in large enterprise software. And then the challenge for us is which pieces of the control plane you give over to Kubernetes.
00:11:02
Speaker
We can do all of this outside of Kubernetes, so the question is: what is it that makes it more Kubernetes-native and makes it a better experience inside? From a benefit standpoint, there's a lot of benefits to what we do. So with Redis Enterprise on Kubernetes, it runs in the same way that Redis Enterprise runs elsewhere.
00:11:20
Speaker
We have a Level 3 operator and controllers for the solution. So it deploys the StatefulSet, uses the operator pattern for maintaining and watching all the things that are going on and pulling all the levers, being able to do all of the normal day-to-day functionality that you expect people to do from an administration standpoint. So, when I see this, do that, right? So you can build a lot of the smarts into what you do at large scale,
00:11:45
Speaker
writing the operator so the operator can take care of that. And then of course the controller. So we have numerous controllers, one of which watches the cluster itself. Make sure that the cluster is configured. How do you want the cluster to be configured? You're telling it to size up. Do all the things that you're expecting the cluster to do.
00:12:01
Speaker
And then we also have a controller to watch the databases. So when you create the YAML that says create a database with all these parameters, well, it's that controller's job to make sure that that gets kicked off to the API, that it does all of the pieces, and then that it stays within the configuration that it's supposed to stay in. And it allows you to do all the normal Kubernetes things in the way you expect to do Kubernetes things: the YAML files that describe objects. All of these objects that I talked about are all CRDs
00:12:30
Speaker
in the system, right? So we take advantage of all the stuff that Kubernetes does, for the very reason it was created, to make it as Kubernetes-like as you can within that environment. And ultimately for people to be able to do other things like namespacing it. So one of the advantages that you can have with Redis inside of Kubernetes is that
00:12:52
Speaker
a single Kubernetes cluster can have a bunch of Redis clusters within it, which means if you've got different departments, or if you've got pre-production and production in the same cluster, whatever your combination of what you do and how you namespace, you can install us to do that. So you can install us in a single namespace, you can install a cluster to span the entire Kubernetes cluster, or a couple of namespaces; whatever actually works for you and the way you've configured things, we're configured to be able to work.
00:13:18
Speaker
So it makes it a really easy way to do it. And then everybody can have their separation of concerns, and ultimately do things like GitOps patterns where you're just checking stuff in and out, right? And so it makes it really easy to maintain in that way. Gotcha. So, the Redis operator, I should have done my research, but is it open source? Can anybody get started with it just by downloading it? The Redis operator is open source.
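To make the objects Brad describes concrete, here is a minimal sketch of the two custom resources involved. The group/kind names follow the Redis Enterprise operator's public docs, but treat the exact fields, and the names `rec`, `redis-ns`, and `mydb`, as illustrative assumptions; check the current CRD schema before using them.

```yaml
# A minimal Redis Enterprise cluster (REC). The operator watches this
# object and creates and maintains the StatefulSet behind it.
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec            # hypothetical cluster name
  namespace: redis-ns  # hypothetical namespace
spec:
  nodes: 3             # cluster pods; three is the usual minimum for quorum
---
# A database inside that cluster (REDB), reconciled by the database
# controller described above.
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: mydb
  namespace: redis-ns
spec:
  memorySize: 1GB
```

Applying both with `kubectl apply -f` in the namespace where the operator runs is all the GitOps workflow needs to check in and out.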
00:13:43
Speaker
In fact, almost everything that we do is open source. So there's the Redis open source project, of course, that we build upon, and then all these pieces and modules and everything else are open source. There's very little that's not open source from us. And then, when I deploy the operator as part of the CRD deployment and definition, does it cover just Redis Enterprise, or Redis Graph, Redis Time Series, everything that you mentioned at the beginning of the episode?
00:14:12
Speaker
Everything that I managed to talk about at the very beginning. Including, right now I'm in private preview on a specific feature, which is Redis on Flash, where we can actually extend the database, because having a really, really large database that's only in memory
00:14:28
Speaker
can get cost prohibitive. For some people, that doesn't matter. They need it all there because they need that speed. Or for some use cases, you need speed, but it doesn't always have to be that speed. And then what we do there is put the indexes and some of the stuff in memory, and then the rest can be on flash storage.
00:14:43
Speaker
so it's on durable storage underneath. That expands the use case even beyond what I was talking about, so that you can have really large data sets that get into many terabytes of data.
00:15:01
Speaker
From that standpoint, it's also useful even for really large data sets because RAM is expensive. Yep. Oh, it sure is. Talking about these huge databases, terabytes and terabytes of data, can we talk about a few customer use cases? If you can share some names, that would be great. If not, we can keep it anonymous and just talk about use cases.
00:15:24
Speaker
Yeah, so customer use cases, right? So there's so many solutions out there for customers. On our website, it's really easy to see, because it's really by your use case. So you can drill into the use case or the industry, and then based on what it is, we've got the references. We're really heavily in the financial services, retail, healthcare, gaming, pretty much everybody that needs to do things at high speed. And things like retail,
00:15:50
Speaker
For things like the feature store for fraud protection, so that instantly when you swipe that card, they can make the decision whether this is fraudulent activity or not, right? You need to do that really fast, got to do that in real time. You've got to have something that acts on that in sub-second time, right? You can't take seconds and seconds; that messes up transactions.
00:16:12
Speaker
Gaming, for things like leaderboards, heavily used in the gaming sector; financial services, everybody that's doing anything at really high speed, trading, all the financials. I definitely can't give you those names there, because they don't want me to tell you who they are, right? So we've got some public references out there, but pretty much every sector that you're in, we're there. And we've been there.
00:16:36
Speaker
Gotcha. No, and the fraud detection use case does make sense, right? You don't want a scenario where a customer is standing at a point-of-sale station, puts that fraudulent card in, and then gets that transaction approved because the system never got feedback back. So yeah, we need that to be blazing fast and make sure everything works.
00:16:57
Speaker
And in AI and ML, you've got kind of two halves. You have the first half where you train stuff, right? And that happens offline; that's slower, bigger data sets, you're crunching everything else. But once you've actually distilled that down, those models and that information, that's the feature store that you

Scaling and Monitoring Redis

00:17:15
Speaker
need. They need that store to be really, really fast. And we found some great, great use cases there, and some published use cases. In fact, we just had a blog on our
00:17:24
Speaker
blog, specifically about that use case, talking about the work that was done there and the speed that we can get. Okay, okay. I'll make sure I find that and put it in our show notes as well. Okay, so I wanted to learn more about how the architecture changes for running Redis on Kubernetes: what Kubernetes objects do we consume? You said that it's deployed as a StatefulSet, but if you can get into any more details, that would be great.
00:17:53
Speaker
Yeah, so when we get down there, there's not a ton of change from regular Redis, other than that Redis itself had a concept of nodes, just like Kubernetes has nodes. So in a non-Kubernetes sense, you had a bunch of virtual machines, and each one of those was a node, because you had the Redis process running on it. For us, that actually just turns into a Kubernetes pod.
00:18:17
Speaker
And this is where you can have a lot of them, rather than them consuming the entire virtual or physical machine. You basically containerize it, right? So, the same idea. So that's the only place where we kind of have a collision on naming. Other than that, it's all of the normal Kubernetes stuff that you'd expect from a StatefulSet running inside pods, with the controllers and the CRDs and all the things that go there. Installation is pretty easy. Installation for us is a single YAML file. You tell it to go, or if you're running someplace,
00:18:48
Speaker
say you're running on OpenShift and you're using their OLM and their catalog, you just install it, and it grabs the operator and does a complete install of all the objects. And from there, most everything that you can do, you can do in the Kubernetes way by modifying those objects. So when you have a database, you can change settings on the database and do things there.
00:19:14
Speaker
Some things can only happen at create time, right? You make major decisions about what's included, like which modules are included, at create time, right? But everything else that you could do from the API, you can do in the Kubernetes way: modify that Kubernetes object and off you go.
00:19:31
Speaker
Okay, perfect. So how do we handle upgrades? Is it just, okay, we change the custom resource, provide the new version, and the Redis operator does a non-disruptive upgrade for me? Is that it? Exactly. That's one of the beauties, right? The upgrade path is really easy.
00:19:48
Speaker
A new version of the operator comes out, and you make a modification to the REC, the RedisEnterpriseCluster, to the new version of the underlying containers that make up the solution. You do that, and then it takes care of sequencing all of that and making sure that you don't have any data loss, doing a one-at-a-time upgrade as it goes through. So it makes it super easy to do the upgrade and to manage that. And it has all the logic in there for doing
00:20:17
Speaker
the combinations of how many you can do at one time. If you've got a really large cluster or you've got a lot of clusters to do, it makes it super easy to do there. The other thing I'll say is with all of that, while you've got all these pieces that represent all these objects for Redis inside Kubernetes, if something happened to the operator, if something happened to the controllers,
00:20:40
Speaker
Redis still keeps running. Now you might not be able to use YAML files to change its behavior, but Redis itself is still running, its API is still running, and all the tooling that normally talks to Redis is still there. So it also makes it easier, because during the process of changing what controls Redis, Redis itself is unaffected. You're just changing a control plane outside of its control plane, which is one of the advantages of us having our own control plane.
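As a sketch, the upgrade described above is just an edit to the REC object. The `redisEnterpriseImageSpec` field name is taken from the operator docs at the time of writing, so verify it against your operator version; the tag shown is purely illustrative.

```yaml
# Bumping the Redis Enterprise version on the cluster spec; the operator
# then rolls the cluster pods one at a time, sequencing the upgrade so
# there is no data loss.
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec                    # hypothetical cluster name
spec:
  redisEnterpriseImageSpec:
    repository: redislabs/redis
    versionTag: 6.2.10-107     # illustrative tag, not a recommendation
```

Applying the edited manifest (or patching the live object) is the entire upgrade workflow from the user's side.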
00:21:05
Speaker
Yeah, that makes sense, right? As you said, if the control plane goes down, your data plane shouldn't be affected; your applications shouldn't go down. So yeah, I actually never thought about what happens to custom resources when an operator goes down, but I think that's a question that I'll add to my list for future guests on the pod. So anybody listening in who wants to come on, do make sure you're ready for that question. Well, that's one thing about custom resources. Custom resources,
00:21:33
Speaker
by definition, when you have a custom resource definition, you can have multiple versions. As you're doing versioning, it's one file that has all those versions. If somebody does something wrong, something gets corrupted or whatever, and then you install one that potentially causes a problem, depending on what you're doing, it could cause you problems, right? And so having that control plane outside of that control plane allows you to suffer errors at that level without impacting the operational database itself.
00:21:59
Speaker
Gotcha. Okay. And how do we fix that? Like just out of curiosity, like if, if my operator goes down, my database is still running, how do we, how do I reconcile everything? So it looks good again.
00:22:10
Speaker
So while it's down, while the Kubernetes pieces are down, the database is still running, receiving and sending whatever data. Once you actually fix whatever that problem is, the job of the controller, of course, is just like the controller in Kubernetes, whose job is to make sure that
00:22:31
Speaker
what's in the YAML file is the ultimate configuration of what's under the covers. Exact same thing for us, same behavior for us. That is, if the YAML says there are three copies and what's running is two, then when it comes back up, it says, oh, there should be three copies, right? And it forces it back. Controllers are actually pretty stupid. They just say, make the real world look like the YAML file. That's the job.
00:22:54
Speaker
Okay, thank you. So we spoke about the upgrade process. How do I scale it up, right? I might have started with a really small instance, but now I need more capacity. How do I scale up or scale out my Redis cluster? Yep. So you've got two things that you do. Number one is the cluster itself, right? So you add more raw capacity by expanding the size of the cluster. Now, maybe you expand the size of your Kubernetes cluster first, and then you expand the size of the Redis cluster.
00:23:23
Speaker
By some amount more, or whatever. It could be that your Kubernetes cluster already has enough room for whatever we're asking for, right? It depends on how big you're scaling. So increase the cluster; that allows you to run more shards. And then that's the other piece for us: the shards. So when you have a database, you have a number of shards. When you have a database of a certain size, you're not going to want it all in one place. You're going to want to actually shard that thing out. And when you do that sharding,
00:23:51
Speaker
that's how you increase the overall capacity, because now you're using more memory and more boxes. That's also how you increase throughput and performance of the system as well. So as you scale out, it starts running faster and faster, so more throughput is available, right? Up to massive scales, right? Millions of operations per second. And that requires you to scale out the number of shards, right? Because you've got to distribute that work across a larger number of shards for what's going on with the database.
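The two-step scale-out Brad describes maps to two spec edits. Field names follow the operator docs at the time of writing and should be verified; the resource names and numbers are purely illustrative.

```yaml
# Step 1: more raw capacity on the cluster, i.e. more Redis Enterprise pods.
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec           # hypothetical cluster name
spec:
  nodes: 5            # e.g. scaled up from 3
---
# Step 2: spread the database across more shards to actually use the
# new capacity; more shards also means more throughput.
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: mydb          # hypothetical database name
spec:
  shardCount: 6       # shards are redistributed across the cluster pods
```

If the underlying Kubernetes cluster is itself short on capacity, its node pool would need to grow first, exactly as noted in the conversation.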
00:24:21
Speaker
Okay, gotcha. So are there like Grafana dashboards or some sort of monitoring tools that you have built into the operator or just plugins built into monitoring systems that can help me monitor my Redis instance and identify the need to perform a scale up operation?
00:24:40
Speaker
Yeah, in fact, in the design of Redis itself, it already made itself available to be scraped by Prometheus. And of course, when we did the Kubernetes pieces of it, we passed that on through, along with all the other things about what's going on at the Kubernetes level for all the other pieces that run inside of Kubernetes. So it's pretty easy, very easy, to install that and grab a dashboard. Boom, we've got some out-of-the-box dashboards that will give you all of that information. Okay, perfect.
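As a sketch of what wiring that up might look like, here is a Prometheus scrape job pointed at the metrics service the operator exposes. The `<rec-name>-prom` service name and port 8070 follow the pattern described in the Redis Enterprise docs at the time of writing; treat them, the namespace, and the TLS settings as assumptions and verify the exact endpoint and metrics path in your deployment.

```yaml
# prometheus.yml fragment: scrape Redis Enterprise metrics.
scrape_configs:
  - job_name: redis-enterprise
    scheme: https
    tls_config:
      insecure_skip_verify: true   # endpoint serves a self-signed cert by default
    static_configs:
      - targets: ["rec-prom.redis-ns.svc:8070"]   # hypothetical service/namespace
```

From there, the out-of-the-box Grafana dashboards mentioned in the episode can sit on top of the same Prometheus data source.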
00:25:10
Speaker
Next thing, I think we covered upgrades, we covered scaling. How do I build or architect for resiliency or high availability or how do I protect against some node failures or how do I take backups? Like just cover everything there. Yeah. So.
00:25:27
Speaker
And this is where it really gets interesting, because we're really designed for this. And this is the difference between the project and the enterprise version, right? The enterprise version has a bunch of things added on to handle things at large scale, to be able to handle things like failover and all that stuff, right? It's really designed around those structures. So out of the box, HA is pretty simple. You tell it that you want the database to be HA,
00:25:51
Speaker
and it'll create two shards, a leader and a follower, right? That normal structure: how many followers you want to have depends on the resilience that you want. For HA, we also totally understand everything that's necessary about affinity and anti-affinity and what's going on. So we give you the ability to do rack-zone awareness. So if you're running in a public cloud, they have availability zones; when you do something, we automatically place the pieces
00:26:17
Speaker
as far apart from each other as possible, right, to make sure that when you have an incident and you lose an availability zone, you can contain that blast radius.
00:26:27
Speaker
This applies inside your own environment as well, because you might be running this thing on-premises. We have rack and zone awareness: you tell us what your topology is and what the rules are for your topology. And then when you do something and have multiple copies, right, for HA, we make sure that the parts are in your blast radiuses, wherever you design your structure. Beyond that, which is all about, you know, kind of in-one-region kind of things, you also then start getting into geographic
00:26:58
Speaker
disaster recovery, right? And then how do you spread yourself geographically? And for us, that's the ability to do Active-Active with conflict-free database technology. And that basically allows you to go
00:27:16
Speaker
geographically separated, maybe different continents. Then, of course, you have to do something in a different way, because the latency is there; it's not sub-millisecond latency, you're not in the same data center, right? So you have the ability to do that. And you have the ability to do it as primary-secondary or Active-Active, so that you could actually be working with the database closest to where you are and closest to where your customers are.
00:27:40
Speaker
That's one of the things that people really like about us as well, is being able to do that and using the same data, there's the same data, but separate

Infrastructure Flexibility and Resources

00:27:49
Speaker
it, right? So that you can do that, actually being able to be active on both sides, and then
00:27:58
Speaker
we figure out which changes were done when, what the actual impacts were to what changed, and the rules that you follow if there is some sort of conflict. So that, at the end of the day, you don't have any conflicts in changes that were made in a distributed way.
00:28:13
Speaker
And so that's the way a lot of people are using us for their highest-availability use cases and for the lowest-latency use cases. So they may go geographic, and they go to the city edge, right? They could actually go all the way down to use cases like folks in telecom, where they literally want one of these things in every 5G tower, right? Those kinds of configurations as well.
00:28:37
Speaker
So whatever your architecture needs to be, whatever your level of HA, geo-redundancy, redundancy. I mean, some people use our HA even just for two close data centers in a metro area, whatever it is that you want to do there.
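The HA knobs discussed above can be sketched as two spec fields, one on the cluster and one on the database. The `rackAwarenessNodeLabel` field name follows the operator docs at the time of writing; verify it, and note that the node label you point it at must exist on your worker nodes.

```yaml
# Rack-zone awareness on the cluster: tell the operator which node label
# defines a "rack" (on public clouds, typically the zone label).
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec           # hypothetical cluster name
spec:
  rackAwarenessNodeLabel: topology.kubernetes.io/zone
---
# Replication on the database: each shard gets a follower that the cluster
# places in a different rack/zone from its leader.
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: mydb          # hypothetical database name
spec:
  replication: true
```

On-premises, the same mechanism works by labeling nodes with whatever topology key matches your racks or rooms.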
00:28:59
Speaker
Okay, the longer-distance Active-Active architecture is something that's interesting to me. I definitely need to go and learn more about it, because usually when you hear active-active, it's mostly, oh, you need to meet something like an industry-standard 10-millisecond latency requirement to make sure all your data actually gets copied over. So if you're able to do this across geographies, that's awesome.
00:29:25
Speaker
Yeah, and it's not an easy problem to solve. It's one of the big things that we work on, and it really came from customers and their use cases, and the fact that this is what you really need to be able to run large distributed global systems, right? And a lot of very large banks, a lot of very large trading companies, all sorts of stuff runs on that.
00:29:48
Speaker
Okay, so if we can, a couple more questions around this, because this is just so interesting.
00:29:56
Speaker
Like, are these geographically distributed instances running on different OpenShift clusters, or different Kubernetes clusters for that matter? And can I use the same operator and custom resources to deploy this? How do I architect this? How do I run this on Kubernetes? Yeah. So from an active-active standpoint, the other side is another Redis cluster, wherever it runs.
00:30:23
Speaker
Different flavors of Kubernetes. Maybe you're doing something at the edge. You tend to run in one cloud, but that cloud doesn't have a geography that you need, and another cloud vendor does have that geography. You could have it there, even in their managed version of Kubernetes, while you're running in another managed version of Kubernetes or running some of it on-prem.
00:30:45
Speaker
We don't care. We run in all the Kubernetes distributions, of course. And some Kubernetes flavors have additional functionality, or are opinionated versions, like OpenShift, which has an opinionated version of what they do. We run there as well and take advantage of some of that functionality; they use Routes instead of Ingresses in a lot of ways, and we do that sort of thing as well. But from a geo-redundancy standpoint,
00:31:07
Speaker
It can be any cluster anywhere, in any flavor of Kubernetes. There are some restrictions, of course: you've got to be close on the versions of Redis itself, depending on its capabilities. But that's really the only thing that limits you, just making sure you've got versions of Redis that are close enough to each other that there isn't a major version difference or something different between dependencies.
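For reference, recent versions of the Redis Enterprise operator model this with dedicated custom resources: each site runs its own RedisEnterpriseCluster, remote participants are registered locally, and an active-active database then spans them. The sketch below is illustrative only; the names and endpoints are hypothetical, and field names vary by operator version, so check the operator documentation for the exact schema:

```yaml
# Illustrative sketch only; exact schema depends on the operator version.
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseRemoteCluster
metadata:
  name: rerc-eu                          # hypothetical name for the EU participant
spec:
  recName: rec-eu                        # the RedisEnterpriseCluster at the remote site
  recNamespace: redis
  apiFqdnUrl: api.rec-eu.example.com     # hypothetical endpoints
  dbFqdnSuffix: -db.rec-eu.example.com
  secretName: redis-enterprise-rerc-eu
---
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseActiveActiveDatabase
metadata:
  name: my-aadb
spec:
  participatingClusters:
    - name: rerc-us
    - name: rerc-eu
  globalConfigurations:
    memorySize: 200MB
```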
00:31:28
Speaker
Okay, so if I got that point correctly: I can be running in the US and have Redis on Kubernetes, but if I want to expand into EMEA and I don't want to spin it up on Kubernetes, I can use your managed service or one of those hosted services, or run it on VMs, and still have those two Redis clusters talk to each other. Exactly.
00:31:49
Speaker
Oh, wow. Okay. Super powerful. Because you run into all sorts of interesting things when you start having to go to different geographies, different countries. You've got some country-specific requirements on basic data residency and that kind of stuff, and not every combination is available in all those places, right? And so being flexible like that allows you to really custom-tailor your solution. It could be that you do almost everything exactly the same, and then there's a one-off.
00:32:18
Speaker
We can handle the one-off. Okay, gotcha. So I think my next question is around how people learn more about Redis on Kubernetes. How do they get started? I know the operator is a great place to do that, but are there any pointers that you can give our listeners?
00:32:34
Speaker
Yeah, if you go to redis.com/solutions, it's a great way to start, because when you go to solutions, you start seeing the use cases and industry stuff, probably closer matches to what you're looking for. In each one of those areas, we have a different way in. Of course, you could just follow the solution for Kubernetes and go specifically to our Kubernetes documentation, which tells you the architecture, how it is and isn't the same as Redis otherwise, the node-pod kind of thing,
00:33:02
Speaker
fully explained in the architecture docs there. So a lot of good stuff. We've also got a YouTube channel, of course, with a bunch of these use cases, so you can go and see what we do there. We put out not only webinar recordings and that sort of thing, but specific technology recordings, and in some cases specific trainings that we actually make public. So some of the same training I went through when I came on,
00:33:27
Speaker
I consumed a lot of training, because a lot of that stuff was available there as well.
00:33:33
Speaker
Gotcha. And I like that these trainings are available publicly, because it's similar to what you said about when you joined Redis and watched those. On my side, when I joined Portworx, to get ready for the interview process I looked at all of the training videos and lightboard videos that Ryan had on the Portworx YouTube channel, got up to speed on what the product does, and then I was able to kill it in the interview itself.
00:34:00
Speaker
All of those videos are super helpful. Yeah. And it was the same thing here at Redis, right? Redis is so wide, so deep. It's got a dozen years behind it, right? So a lot of capabilities. And for us, it's easily weeks and weeks worth of training to bring on product managers and solution architects, right? To really understand what's going on. And we just touched a little bit on the Kubernetes pieces of it, right? The product itself has so many other capabilities.
00:34:26
Speaker
But that's what makes Redis great, right? There's so much there, such a level of depth. And I'd say from a company standpoint, we're hiring like crazy and we're remote-friendly. We've got people all over the globe. If you go out there and look at our careers page right now, I think there are like 20 different locations listed in our careers section. So we're a very, very distributed company.
00:34:50
Speaker
Okay, gotcha. So I think that's my entire list that I had for you. Is there anything we didn't cover today that you wanted to talk about on Redis on Kubernetes? No, I think we hit all of the major points. I think we've
00:35:05
Speaker
just scratched the surface, but I think that's a good overall summary of where we are and what we're doing in Kubernetes. We're doing more exciting stuff in Kubernetes. Like I said, Redis on Flash is a new area that we're working on, and we're always looking to expand the capabilities and expand what we do in Kubernetes as a differentiator, even compared to what we do with Redis everywhere else.
00:35:29
Speaker
Okay, perfect. I think we'll have to get you back whenever Redis on Flash goes GA to talk more about it. Yeah. Perfect. And I'm looking forward to meeting you at KubeCon North America. I'm assuming, given your role, you will be there. You will be the point person.
00:35:46
Speaker
Okay, perfect. With that, I think this was an awesome episode to just get started with. As you said, we just scratched the surface, but it still gave us a good overview of how Redis works, what the different components are, how it works on Kubernetes, how it gets deployed, how you scale and upgrade, and how you architect for high availability. So thank you so much for your time today, Brad.

Episode Conclusion

00:36:09
Speaker
I'm looking forward to many future conversations. Whenever I need to ask or talk about Redis, I know who to reach out to.
00:36:16
Speaker
I appreciate that, Bhavin, and it was very nice having a conversation with you. Yeah, thank you so much. Thank you. Awesome. I think that was a great episode. We covered a lot of ground in those 30 minutes, and Brad helped me answer so many different questions I had around Redis. If we jump to the takeaway section, the first thing I learned is that Redis, as a company, does
00:36:43
Speaker
so much more than just the Redis Enterprise Server or a simple Redis cluster. There are things like RedisGraph, RedisTimeSeries, and RedisJSON that build up the multi-model database they have. And then, in addition to those modules, they can combine things like RedisBloom for Bloom filters and RediSearch. Even though I don't know what RedisBloom does, if any of our listeners are using it, they know it's available. I will definitely go and check out what these additional modules are as well.
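For listeners in the same boat: RedisBloom adds probabilistic data structures to Redis, the best known being the Bloom filter, a compact set that can answer "definitely not present" or "probably present" without storing the items themselves. Here is a toy sketch of the idea in plain Python (an illustration of the concept, not the RedisBloom module itself):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions set k bits per added item.
    Lookups may return false positives, but never false negatives,
    using far less memory than storing the items themselves."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bitmap packed into one big integer

    def _positions(self, item):
        # Derive k independent bit positions from k salted hashes.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

seen = BloomFilter()
seen.add("user:1001")
print(seen.might_contain("user:1001"))  # True (never a false negative)
```

With RedisBloom, the equivalent operations are the `BF.ADD` and `BF.EXISTS` commands against a database with the module enabled.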
00:37:12
Speaker
The second key takeaway I had was around the operator. All the different databases we have discussed on this podcast have an operator that goes along with them and makes installation, deployment, and ongoing management really easy. I was glad that Brad was able to talk about how easy it is to upgrade and how easy it is to scale your Redis cluster,
00:37:35
Speaker
by expanding and adding more nodes to your cluster or just doing basic sharding. Both of those are important. The third key takeaway was around that geographically distributed, active-active Redis deployment, where you can run Redis on Kubernetes, or you can even have Redis on Kubernetes on one side and maybe a managed Redis service on the other,
00:38:01
Speaker
and you can still have that connection between the two clusters. That is something that was new for me, so I will definitely go and learn more about it, and I'm pretty sure once I do, I'll also put a link in the show notes. But with that, I think we can end this episode. Thank you so much for listening to our episodes whenever they come out.
00:38:21
Speaker
We really love all the engagement that we have, and we always see our numbers increasing. Thank you so much for listening. If I can ask you a couple of things: please feel free to leave a comment, like, share, and subscribe to this podcast. Give us five-star ratings on the podcast app of your choice. And if you can, over the weekend or maybe next week at work, share this podcast with two more people from your team and help us expand the audience as well.
00:38:50
Speaker
With that, this brings us to the end of this episode. This time it was just Bhavin, and thank you for joining another episode of Kubernetes Bytes. Thank you for listening to the Kubernetes Bytes podcast.