
Powering Decentralized Cloud with Kubernetes

S2 E31 · Kubernetes Bytes

In this episode of Kubernetes Bytes, Bhavin and Ryan talk with Vishnu Korde, CEO and Chief Architect of StackOS. The hosts explore the topic of "DeCloud," or decentralized cloud, which aims to provide anonymity through cross-chain open protocols, letting individuals and organizations create a decentralized computing layer for the internet and bypass traditional infrastructure management and cloud compute silos. Learn how Vishnu's company StackOS is tackling this problem and how they are using Kubernetes as an orchestration layer to deploy applications into this DeCloud.

Cloud Native Data News of the Week

StackOS Links

Transcript

Introduction to Kubernetes Bytes

00:00:03
Speaker
You are listening to Kubernetes Bytes, a podcast bringing you the latest from the world of cloud native data management. My name is Ryan Walner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.
00:00:27
Speaker
Good morning, good afternoon, and good evening wherever you are. We're coming to you from Boston, Massachusetts. Today is October 12th, 2022. I hope everyone is doing well and staying safe. Let's dive into it. Bhavin, how are you?
00:00:43
Speaker
I'm doing good. I'm doing good. Thank you for asking.

KubeCon Event and Personal Plans

00:00:45
Speaker
The weather has been great. I know it's a bit chilly, you have to put a light jacket on, but it's awesome to go out for walks. As I said in the last episode, KubeCon is in a week and a half, so I'm just trying to keep my head above water at this point. But yeah, once KubeCon is done, I think I'll plan for at least two days of doing nothing. I'll just take a break.
00:01:09
Speaker
Yeah, KubeCon is always just so fast-paced. Everybody's on all the time, although it is a lot of fun. I'm looking forward to it. I'm going to have to get some kind of Detroit pizza. Usually the first thing I do is find a good noodle place. I'm going to have to do that. Maybe I can meet you somewhere. We'll see. I did that this past week at Oktoberfest. I found myself a good noodle place
00:01:35
Speaker
and had a good bowl of noodles the first time I got there. It's just a thing I do. I enjoy it very much.
00:01:43
Speaker
It was great.

Conference Experiences and Post-conference Relaxation

00:01:46
Speaker
You know, the conference is definitely unique in the sense that it's sort of social slash thought leadership slash intersection with technology leaders. And it was great. The first day was really rainy, really cold. They had like a boat tour that, you know, it had plenty of indoor space, so it was fine.
00:02:06
Speaker
And then the whole conference was perfect. As you said, this time of year is fantastic. The talks were interesting and also intriguing at the same time. And I think the value of that conference is just really having quality conversations and meeting great people in between those talks and in between where you go at night or to the dinner and those things.
00:02:29
Speaker
Oh yeah. Hallway tracks are always the best, right? Yeah. The highlight, I would say, from this one. Yep. Nice. Awesome. Sounds like you had a great time. Yeah, not bad. I mean, the weekend was good too. We did a little apple picking, very cliche of the Northeast thing to do this time of year. But I got a couple cider donuts and I'm a happy guy. Nice. What do you do with so many apples? What do you make? We make apple crisp.
00:02:56
Speaker
In this family, sometimes we'll make an apple pie, but apple crisp is sort of our way to go. I also have a dehydrator, so I use that mostly for jerkies and things like that, but sliced apples make a good apple chip, if you're into that kind of thing. My daughter is, so I've been making a whole bunch of those. She's a happy camper as well.
00:03:16
Speaker
That sounds yummy. Yeah. Cool.

Interview Setup with Vishnu Korde

00:03:20
Speaker
So today is a really interesting topic. We are talking with Vishnu Korde, the CEO and chief architect of StackOS. If you haven't heard of StackOS, just wait. We'll introduce him and we'll dive into everything StackOS. A very cool topic, I think, for today because it sort of blends this Web 2 and Web 3 world
00:03:40
Speaker
that a lot of people may not be familiar with, possibly. But before we go into that, I think we have some news we're going to dive into.

Cloud Native Updates and Innovations

00:03:50
Speaker
Bhavin, why don't you kick us off? Yeah, sure. So from a news perspective, Apache Kafka has a new release, version 3.3. The reason I highlighted this specific release is that, for a few releases, KRaft has been a consensus mechanism
00:04:07
Speaker
in addition to ZooKeeper that was in the works. With this release, KRaft becomes production-ready for your new Kafka clusters. So if you are planning on deploying new clusters, you can now use KRaft instead of spinning up a side-by-side Zencastr instance or a Zencastr cluster — not Zencastr,
00:04:23
Speaker
a ZooKeeper cluster, sorry; Zencastr is the tool we use to record this podcast. A ZooKeeper cluster for the consensus mechanism. And if you have existing Kafka clusters, I think release 3.5.0 will be the bridge release that will allow you to upgrade or migrate your clusters from ZooKeeper to KRaft. So something to keep in mind if you are using Apache Kafka today.
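For anyone spinning up a new cluster and wondering what "KRaft instead of ZooKeeper" looks like in practice, here is a minimal single-node sketch of the broker configuration. Property names follow the upstream Kafka KRaft quickstart; the node ID, ports, and log directory are placeholders, not recommendations.

```properties
# server.properties for a combined broker + controller node running in KRaft mode
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
```

The storage directory is formatted once with kafka-storage.sh (for example, `kafka-storage.sh format -t <cluster-id> -c server.properties`) before the broker starts; no ZooKeeper quorum is involved.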
00:04:47
Speaker
The second article that I had was on Red Hat hosted control planes. I know they announced something earlier this calendar year. It's still not GA yet, so this is not a GA announcement. But they have an interesting blog article which explains why a hosted control plane
00:05:02
Speaker
is important, and they have a really neat analogy with nuclear reactors — how you have a control rod that can help you control your nuclear reactor — and they compare that with how you need your control plane to be always available for your applications that are running,
00:05:18
Speaker
and how hosted control planes can help you scale up and scale down as you have more or less applications on your OpenShift clusters. Check that article out, we'll have a link in the show notes, but the GA is still planned for early 2023. I'm pretty sure we'll bring it up again once it's generally available.
00:05:34
Speaker
And then finally, InfluxDB and Telegraf, which is their open source monitoring platform, have a hands-on blog about how they integrate with K3s for those edge deployments. I know we covered this in a theoretical scenario when we talked to somebody from InfluxDB.
00:05:54
Speaker
I just wanted to highlight this: if people liked that episode, they now have actual steps on how they can implement this with their K3s clusters and then connect back to InfluxDB Cloud. So we'll have that in the show notes as well. That's it for the news from me. Ryan, over to you.

Postgres and Cloud Native Storage Trends

00:06:10
Speaker
Great. I like that nuclear reactor analogy. It certainly works — that image of being able to pull the nuclear reactor rod out and push it back in. I feel like I've recently watched a documentary or a film about, oh yeah, it was Three Mile Island, which I really didn't know a lot about to be honest, but it's on Netflix if you're interested. Anyway, it just made me think of it.
00:06:32
Speaker
So my first piece of news is about the CNCF Security Slam. This is presented by CNCF and Sonatype. The whole idea is to improve the security posture of certain projects. As a collaborator or contributor in this event, you'll be able to contribute directly to projects. Projects are still being submitted. But I love everything around security that really focuses on
00:07:01
Speaker
using tools and contributors and community around improving security. I know they will actually use a couple of CNCF tools, CLOMonitor being one of them. I'm not familiar with it, but if you're into security or want to get into it, it sounds like a good event to check out. The next piece of news I have here is an article
00:07:25
Speaker
on The New Stack, which is about five years of Postgres on Kubernetes. And this caught my eye because I was sitting here thinking, wow, it's really been five years of Postgres on Kubernetes — we're already here. We've been talking about persistent storage and stateful applications on the podcast for almost a year and a half, and it feels like we've come a long way. And this article does a good job of talking about the community, what it was like during 2016 and the early days of
00:07:55
Speaker
Kubernetes, remember like 2015, it was still super early days for Kubernetes. And it just really kind of hones in on sort of where we've come in the ecosystem. So good read. And I think relevant to this podcast, if you're working with Postgres or interested in sort of starting with storage or Postgres, go take a look. It does a good job at sort of highlighting some of those things. The next one is really about the cloud native storage market.
00:08:25
Speaker
And maybe, Bhavin, you're familiar with this already, but there's a global cloud native storage market size, and there are a couple of reports out there showing that it's expected to reach tens of billions of dollars by 2028. And that's showing a growth of
00:08:43
Speaker
22.3% CAGR. And if you're not familiar with CAGR — I'm not, unless I'm working with this stuff every day — that's compound annual growth rate. So over the period until 2028, it's the annualized average rate of revenue growth. So lots of opportunity, which basically translates to lots of growth in this market for cloud native storage, keeping in mind that even this article
00:09:10
Speaker
focuses on storage that is specifically used for cloud native environments — when storage solutions are applied to that, it's considered cloud native storage. So it's pretty broad in terms of definition, but I think it adds some validity for folks in this ecosystem who are working directly with storage in Kubernetes and cloud native.
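For listeners who want the arithmetic behind a CAGR figure, the definition is simple (a generic illustration; the only number taken from the report here is the 22.3% growth rate, and the report's exact base year and base value are not quoted):

$$\mathrm{CAGR} = \left(\frac{V_{\mathrm{end}}}{V_{\mathrm{start}}}\right)^{1/n} - 1$$

so, for example, six years of compounding at 22.3% per year multiplies the market by $(1.223)^6 \approx 3.3$.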
00:09:31
Speaker
41.9 billion is a big number. Even though, as you said, it covers anything and everything when you connect storage to a cloud native environment, it's still a good enough number, and I'm happy that I'm in this ecosystem already. There you go. If you're new to it, come join us, I guess.
00:09:50
Speaker
The next one is from IBM.

IBM and OpenShift Data Integration

00:09:54
Speaker
So IBM — I know you heard this announcement as well — IBM basically announced along with Red Hat that they are going to be doubling down on OpenShift Data Foundation, which was previously known as OpenShift Container Storage. The core technologies of that are Ceph,
00:10:13
Speaker
I believe Rook and NooBaa, and some other technologies that Red Hat has built into a strong solution for the storage space within OpenShift. And IBM has something they've had in the space called Spectrum Storage. And basically this announcement is saying we're going to double down on the technologies within ODF
00:10:33
Speaker
as the future of what Spectrum Storage is today. So I think it's a consolidation slash new beginning — what that actually means for customers that may be on Spectrum Storage, how long that'll take, or how soon we'll see changes remains to be seen. I think it's an interesting article, though, and it really shows, to me, the job that Red Hat and OpenShift have done with ODF, which has always been a very interesting and powerful solution.
00:11:04
Speaker
Yeah, and people might forget that IBM actually acquired Red Hat a few years ago, right? They have done a good job of keeping those two entities separate. IBM does its own thing and Red Hat does its own. So when this article came out, I was like, okay, this is the first thing that I've seen because of this new management structure. An entire storage business unit from Red Hat is now
00:11:24
Speaker
moving to IBM, and the foundation for the products is also being changed. So we'll see how it plays out in the next few months. But yeah, this was definitely an article that caught my eye.
00:11:36
Speaker
Yeah, so the last one I want to talk about is from a startup called Lucidity.

Lucidity's Cloud Optimization Tools

00:11:43
Speaker
I don't know if you have heard of them before, Bhavin, but it's an Indian software development startup that released automation software to right-size cloud block storage.
00:11:54
Speaker
And I brought this up because it's not the first time we've heard of software doing this, right? We often hear of over-provisioning, over-allocating things like snapshots, and not really having a good sense of where and how much of your storage is being used, ultimately costing lots of dollars. So there are companies out there that are really looking at
00:12:21
Speaker
the cost of the sprawl that happens in the cloud, and this is not the first company to come out and target automation for reducing that waste — lowering your costs up front and increasing utilization.
00:12:42
Speaker
They claim, I think, that if you're at 35% utilization, they can get you to 80% and those kinds of things with their software. So again, we're going to be talking to some folks in the cost space, I think, on the show coming up. And it's all relevant. I think we're seeing a lot more trends toward this type of thing, especially as we head into this inflation and recession period, with companies doubling down and looking at this from an operations perspective.
00:13:11
Speaker
we will include that article as well.
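As a back-of-the-envelope illustration of that utilization claim (assuming cost scales roughly linearly with provisioned capacity, which is an assumption, not something from the article): provisioned capacity is used data divided by utilization, so holding the used data fixed and going from 35% to 80% utilization needs

$$\frac{0.35}{0.80} \approx 0.44$$

of the original provisioned capacity, i.e., roughly a 56% reduction in what you're paying for.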
00:13:13
Speaker
All

Kubernetes Security Audit Discussion

00:13:14
Speaker
right. Yeah, one more thing before we move on to the actual interview. This is something that I listened to on the Kubernetes Podcast from Google. First of all, thank you, Craig Box — you have done a great job over the years; you definitely inspired me to do a podcast in the Kubernetes ecosystem. I know he's moving on from the podcast and on to better things, but great job on the podcast. Something that he highlighted was that there is an external security audit for Kubernetes that's going to
00:13:41
Speaker
happen in October 2022, later this month, which will basically come out with what the project has done and where things can improve. But there is a new blog post on kubernetes.io, which we'll put in the show notes as well, that spoke about
00:13:56
Speaker
what a similar audit in 2019 showed us — what the different things were that they highlighted and what has happened to fix those issues. So it's definitely not just a vanity exercise of having somebody review your open source code; there is actual work being done to fix most of those issues as well. So there is an audit happening in 2022, and I'm hoping we hear about those results maybe later this year or early next year.
00:14:25
Speaker
Yeah, I think we'll definitely be covering the outcomes and output from that report if we can on the show. I think it'd be a perfect one. Good timing, too, with our last episode on cloud native security. Yes. All right, so let's get Vishnu Korde

Understanding StackOS and Web3 Integration

00:14:42
Speaker
on the show. I'm excited for this conversation.
00:14:45
Speaker
Vishnu, it's so good to have you on Kubernetes Bytes. We're excited about today's conversation. Why don't you give our listeners a little bit more information about yourself and what you do at StackOS? Sounds good. First of all, thank you, Ryan, for having me here. It's a pleasure just to talk with you. I've heard a lot of good things about you and your channel. So yeah, quickly talking about myself: my name is Vishnu Korde. I'm the CEO and chief architect of StackOS.
00:15:13
Speaker
And most people who are aware of StackOS know us as a decentralized cloud. The way I like to quickly describe it is: StackOS is a decentralized cloud, also called a DeCloud, in the Web3 space, which is basically responsible for creating
00:15:30
Speaker
the equivalent of the services which AWS, GCP, and Azure provide, but for the Web3 world. So that is what StackOS is doing. It has a long history; we've been developing this product for over five years. And I have a history myself — I've worked with AWS, helping build products on AWS when it was in its very, very nascent stages.
00:15:58
Speaker
And then we worked with a lot of these larger, billion-dollar-plus companies across the US. One thing I like to share about myself is that I've worked in all four time zones in the mainland US. And last year we raised some capital, and then I moved to India to grow and expand our engineering team.
00:16:22
Speaker
So right now I'm here, our CTO sits in Boston, and the team is totally decentralized, right? Just adhering to the ethos of Web3. I like that — not only is the technology decentralized, so is your team.
00:16:39
Speaker
Nice, yeah. So, Vishnu, I actually wanted to learn more about StackOS, right? Like, what's a decentralized cloud? Why is it different? Why was there a need to do something different than what AWS, Azure, or Google Cloud already have? Absolutely. So, when you talk about the Web3 space, it's more about how we get to an ecosystem where you are able to deploy any application without the need to share an identity.
00:17:07
Speaker
Many of the projects are organizations, and also the future DAOs — DAOs are decentralized autonomous organizations. They do not have an identity to share with AWS or GCP or any of the cloud providers. StackOS is a framework which allows people to deploy applications without having to know or learn how DevOps works.
00:17:29
Speaker
Just a quick background, as I said in my introduction: I've helped build cloud services for really, really large organizations, right?
00:17:39
Speaker
And from my history — you all are experts in Kubernetes and overall DevOps as well — you know that to get a good architecture set up on AWS and to expand it requires at least a few weeks, before which you cannot really go to production. It takes architecture development cost, your infrastructure and DevOps engineering cost, your SecOps cost.
00:18:03
Speaker
And then, you know, operations — just keeping those things running. While it may sound easy to scale, it has its own nuances to really grow. So all of that adds to the expense of going to market, and along with that, it's an ongoing expense for DevOps operations.
00:18:23
Speaker
So what StackOS does is basically abstract away the need for building any of this DevOps tooling, and it natively integrates with a developer experience where they just focus on developing the code and the application gets deployed across the world on all the different clusters,
00:18:39
Speaker
without having to manage their own infrastructure, while also ensuring that it is decentralized. You don't have to share an identity, so organizations which do not have an identity to share can easily use a platform like ours.
00:18:56
Speaker
Yeah, I think this is a really interesting concept. And maybe a follow-up question is, since the decentralized cloud allows folks to share their compute resources, maybe you can talk a little bit more about what that cross-chain open protocol, as you have on your website, actually means. How do you accomplish that developer experience through decentralized compute for everyone?
00:19:23
Speaker
Sure. So how this works is that when you talk about an open protocol and cross-chain, one thing to know is that Web3 is basically an ecosystem where there's no centralized ownership. When you look at using these products or platforms, you interface with Web3 through the different chains that are out there. The biggest one, which all of us know about, is Ethereum,
00:19:51
Speaker
and there are other chains which are becoming very popular, like Polygon, Avalanche, Polkadot, and so on. As these chains come in — StackOS is not a layer-one blockchain protocol, it's an infrastructure protocol. What StackOS does is provide compute to these native chains. Now, if you look at it this way, smart contracts which are built on these native chains,
00:20:16
Speaker
they are good for storing data — more like a decentralized database or, let's put it that way, a decentralized ledger, which basically stores data via a smart contract. Now, what it cannot do is artificial intelligence, machine learning, and so many other toolings which really make a real product usable.
00:20:37
Speaker
So what happens is that as these new Web3 protocols are built — meaning the layer-one chains — they all require compute. So StackOS is on a mission to provide that compute layer for all the different chains like Ethereum, Polygon, Avalanche, Polkadot, and the rest.
00:20:56
Speaker
So that's kind of how we look at it: as a decentralized cloud for cross-chain. And when we say open protocol, that means anybody can contribute. And in fact, to that note,
00:21:13
Speaker
almost 95% of the entire marketing side of StackOS is actually from the community, right? So we bring people from the community to contribute, and then they get elevated to people who run the team as well. So that's how we call ourselves an open protocol with cross-chain functionality.
00:21:35
Speaker
Gotcha. So again, personally, I have more questions around how you work with these different chains. But let's keep this conversation to Kubernetes, right? Since it's a Kubernetes podcast.

Deploying Applications on StackOS

00:21:48
Speaker
Yeah. So let me ask you this question. What does an application on StackOS mean? Are these applications similar to applications that
00:21:58
Speaker
companies or organizations usually run, or are these decentralized apps — are they based on smart contracts, as you said, and only catering toward those Web3 or decentralized use cases right now?
00:22:12
Speaker
Yeah, it's a great question. So again, speaking from my experience running infrastructure for these large organizations: let's say for a startup up to a mid-scale organization, if they want to run infrastructure which is going to be monitored 24 hours a day, they require at least three DevOps engineers to manage the infrastructure.
00:22:34
Speaker
In the US, that accounts for almost three-quarters of a million dollars in annual cost, right? Just the base salary — everything else is more. So what happens is that all these Web2 organizations, meaning traditional organizations, already feel the burn due to these infrastructure
00:22:55
Speaker
developers or DevOps engineers in many organizations. These folks are expensive, first of all. Second, their reliability is not necessarily always good. And again, I don't have to explain this to you, but really large organizations in the US, including credit bureaus, have had some stupid mistakes which have led to Social Security numbers just being available to the public.
00:23:24
Speaker
So that's the thing. Even though you may have resources — good developers, what you thought were good DevOps engineers — they don't necessarily bring the value which you expect them to. So these organizations also feel the need for the infrastructure to not be managed by themselves, something like offloading it, while making sure that applications can be deployed with ease.
00:23:47
Speaker
So as I said earlier, StackOS is an open protocol. Anyone can anonymously deploy applications. So we have companies in the Web2 world who are using StackOS. From the data which we have, about 20% of the users of StackOS are actually Web2 companies and 80% are from Web3, just because we've been talking with them and
00:24:10
Speaker
we're more engaged with them. But yeah, I think that's what we're seeing. What we're also seeing as a model is that there are service provider or consulting companies — like Infosys and others, not Infosys itself, but companies like that — who are serving their traditional customers while using compute from StackOS. So we are seeing this migration where people in both worlds are actually using StackOS. I see.
00:24:38
Speaker
Now, I know you can provide compute, maybe as an individual or as an organization.

Managing Compute Resources in StackOS

00:24:46
Speaker
Is there a mix, I guess, in what you see of how that compute is provided? Is it mostly individuals, mostly organizations, or what's that mix?
00:24:57
Speaker
So the concept of the architecture around StackOS is that there are subnets, and these subnets have individual clusters. These subnets can be very precisely defined. It could be a fixed definition of a subnet which has compute, or Kubernetes clusters providing compute, only from organizations.
00:25:21
Speaker
There would be a subnet which would be individual — anyone can just randomly add compute. Users have to accept that there is a good chance that compute may not necessarily be at the level people want, enterprise-grade compute, because an individual just manages it the way an individual would.
00:25:41
Speaker
For those reasons, there are other subnets which are very geo-specific. For example, for GDPR compliance, those clusters are bound to locations in the European region. So it's a flexible structure where individuals can provide compute,
00:26:02
Speaker
which could be used by people for test environments, and we have customers who are running production environments using the ones provided by known enterprises as well. One of the largest and, I believe, most utilized subnets is the authority subnet, which is owned by the foundation, but there are multiple parties providing the compute
00:26:31
Speaker
for it. So that's kind of how it is structured. So if we are asking individuals or local organizations to provide compute, are there SLA requirements? Because, again, I know we'll get to the Kubernetes questions, but based on what you said, if we want customers to use StackOS as the decentralized cloud,
00:26:54
Speaker
are there SLA requirements for these decentralized locations, for the people that are providing this infrastructure? And a follow-up question to that: are there incentives for providing my infrastructure to you? And then what happens if I don't meet my SLA — what are the penalties associated with it?
00:27:12
Speaker
Right, so that's a multi-part question, so let me take them one at a time. Are there any SLAs provided, or that we are held accountable to? The answer to that question is, what we do is track and monitor what the availability is, and that is made public for the user.
00:27:39
Speaker
So when users are deploying an application, they know what the availability is. And one example I would give: even when AWS went down a few months back, StackOS was up and serving traffic like nothing had happened. So even though these clusters are independently operated, they are also connected with each other at a network level.
00:28:05
Speaker
So what happens is that, you know, even if one of these cluster operators falters, it kind of shifts the
00:28:15
Speaker
applications over to a different company's cluster within that subnet. That's how it works. And usually the people operating within a subnet are different parties running those clusters. So that's how resiliency comes in, even though the clusters may sometimes operate independently.
00:28:35
Speaker
That's how it's managed. But there are mechanisms for enforcement. So the second part of the question: is there any penalty or enforcement if somebody falters? As I said, the subnet is governed by a local DAO, a local decentralized autonomous organization, which can decide the penalties they apply to these cluster operators.
00:28:57
Speaker
And that definitely comes in multiple forms as well. So let's say an individual completely stops working — the DAO has the power, via a smart contract, to reduce whatever commissions they're getting,
00:29:19
Speaker
or completely withhold them. When these clusters are added to the subnet, they are essentially bonded with StackOS NFTs — you know, a Dark Matter NFT — so that would be withheld by the DAO.
00:29:37
Speaker
So there are heavier penalties for not being up at all, and it gradually reduces — there's a penalty which scales down depending upon, again, what the DAO decides the implication of not being active should be.
00:29:56
Speaker
But yeah, again, the DAO decides that. It's not something which we as a network do; the local DAOs, which are local communities, decide what should happen.
00:30:11
Speaker
Right. I think this is so interesting. Although, you did mention the word Kubernetes, and we have to go back to it because, you know, a lot of our listeners are probably asking themselves, what does this have to do with Kubernetes Bytes?

Kubernetes and Docker in Decentralized Services

00:30:25
Speaker
And I think we're ready to get there. So if you've been waiting, let's dive in. So
00:30:31
Speaker
How does, I guess, StackOS use both Kubernetes, which I know is kind of like part of it, and then also Docker and Docker images? Let's dive in there.
00:30:43
Speaker
Sure. So traditionally in the Web3 world, you usually talk of nodes, which would run different chains, right? But in StackOS, people don't add nodes — or, in simpler terms, instances, for
00:31:05
Speaker
those who are familiar with AWS. They don't run instances for running Docker containers. What they do when they add compute is add an entire cluster.
00:31:19
Speaker
And when I say clusters, these clusters are basically instances within the Kubernetes system — it's a Kubernetes cluster which is connected within the subnet. So when you have these multiple Kubernetes clusters come in, they create a network amongst themselves, and any of the pods which are deployed — Docker images which deploy as pods on the Kubernetes clusters —
00:31:44
Speaker
are distributed amongst the different Kubernetes clusters within the subnet. So that's how it is architecturally structured, and that's where Kubernetes comes into play. And Docker, being one of the most popular containerization mechanisms for application development, just comes natively with that as a workflow.
00:32:13
Speaker
Okay, so is each subnet that you mentioned a Kubernetes cluster that users can get access to? Is that how it works? Each subnet has multiple people adding their independent Kubernetes clusters.
00:32:27
Speaker
So a subnet would have, like, 10 different Kubernetes clusters, right? And when people are deploying applications, they don't deploy to a specific cluster. They deploy to a subnet, and the subnet decides where the container or the pod should run amongst the 10 different clusters.
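To make the "deploy to a subnet, not to a cluster" idea concrete, here is a toy sketch of how a subnet-level scheduler might pick a member cluster for a workload. This is purely illustrative — a naive least-loaded policy with made-up capacity numbers — and not StackOS's actual placement logic.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    cpu_free: float   # free vCPUs reported by the cluster
    mem_free: float   # free memory in GiB

def pick_cluster(clusters, cpu_req, mem_req):
    """Return the least-loaded cluster that can fit the workload, or None."""
    candidates = [c for c in clusters if c.cpu_free >= cpu_req and c.mem_free >= mem_req]
    if not candidates:
        return None
    # Prefer the cluster with the most free CPU (a naive least-loaded policy).
    return max(candidates, key=lambda c: c.cpu_free)

# A hypothetical subnet with three member clusters contributed by different operators.
subnet = [Cluster("org-a", 12, 48), Cluster("org-b", 3, 8), Cluster("org-c", 30, 120)]
target = pick_cluster(subnet, cpu_req=2, mem_req=4)
print(target.name if target else "no capacity in this subnet")  # -> org-c
```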
00:32:48
Speaker
Gotcha. And that intelligence is built into StackOS. So once, if I'm a user of StackOS, I submit my workload, which is built for Kubernetes. I select a subnet that I want to deploy it on, which is region specific or geography specific. And then StackOS decides which out of the 10 Kubernetes clusters in that subnet, that workload gets provisioned on. Is that accurate?
00:33:10
Speaker
Correct. So it's StackOS intelligence, but it is controlled and managed by the DAO, right — the local DAO. Again, as I said earlier, a DAO is a decentralized autonomous organization. That means it's a kind of smart contract which defines the relationship between these Kubernetes clusters, right? It agrees, between the Kubernetes clusters, on what the pricing is going to be, the cost of deployments, what is going to be
00:33:34
Speaker
the quote-unquote SLA — the SLAs they will be penalized against if they don't adhere to them. So that's what the subnet DAO will decide: the requirements each contributor must adhere to. Okay, so Vishnu, I think my next question is around what features or design concepts of Kubernetes made you choose it as that infrastructure layer, right? Like, why choose Kubernetes for this subnet-based architecture?
00:34:03
Speaker
So when you talk about Kubernetes, it has one of the largest open source communities out there. Most robust. The toolings are immense. I think every organization which is coming to this ecosystem or containerized ecosystem, they end up using Kubernetes.
00:34:22
Speaker
And honestly, we have the most experience with it as well. So we decided to stick to our strengths and leverage the community, which is the most robust out there. But if we talk about technical features, I think for any of the toolings which we require within Kubernetes,
00:34:42
Speaker
many of those are actually open source projects built around the Kubernetes ecosystem. And it's very easy to just tie it up with those Helm charts and deploy on StackOS, or on these clusters, to really get some of those basic functionalities which are required for StackOS to operate.
00:35:00
Speaker
So, you know, that's kind of, I think the existing ecosystem, the people, it's very easy to debug, you know, issues if there are any, you know, and I think those are the few things, it's more operational related, honestly, than technology related because, you know, I think it's, you know, Google is really good at what it does.
00:35:18
Speaker
Okay, gotcha. So one of the things that you mentioned earlier, when we were talking about the StackOS subnets, was that if a subnet goes down, there's obviously the penalty to the user who's providing that infrastructure resource. But you also said applications are failed over — how does that orchestration work between different subnets?
00:35:37
Speaker
Right. So a deployment or a workload can be sent out to multiple subnets as well. The failover — when I said the penalties are there — is amongst the clusters, right, within the subnet.
00:35:52
Speaker
But if an entire subnet goes down, there's a failover mechanism — or not necessarily failover as such, but it is always round-robining between different subnets as well. There's this concept called a beacon node in the StackOS architecture. When people connect with their application, they don't necessarily connect directly with those subnets.
00:36:13
Speaker
But what they do is they route through a beacon node and a beacon node's job is to be able to route it across multiple subnets as well. So you have a multi-tenancy kind of an ecosystem which is built out around that.
00:36:30
Speaker
where it just round-robins between those different subnets. And if one subnet goes down, it's taken offline, and the active subnet is the one serving the traffic. So that's something people can choose to bring in. But in most cases, from the pattern we've seen, people are pretty happy even with a single subnet because they have failover — well, it's not necessarily failover; I mean, the applications are completely distributed amongst different clusters within the subnet anyway.
00:36:58
Speaker
So if a cluster goes down, other clusters are still serving the traffic, so there isn't really a drop. Okay, so as a user deploying my application, I have to select whether I want a highly available architecture that's distributed across subnets, or whether I want to just run it on one specific subnet. Correct. Exactly. Okay, gotcha. Cool. Thank you.
00:37:22
Speaker
I feel like the use of Kubernetes also does a good job of bridging this Web2 and Web3 world. I know the Web3 world can be a drastic change to many if they're not familiar with it. And I feel like Kubernetes actually does a bit of gluing here, speaking from just my personal experience of coming more from the Web2 world.
00:37:43
Speaker
I don't know if that's intended, but it's something I definitely see. That's actually true. But let me tell you this: StackOS is currently the world's largest, most utilized decentralized cloud. Over the past few months, we have served over 68 million requests.
00:38:04
Speaker
Right. And the reason is that even for the people for whom Google is difficult, it makes it easy for them as well. So, for instance, all they need is a Docker image. And StackOS has a user interface — people can just open the interface, plug in their Docker image,
00:38:27
Speaker
the ports they want exposed, their compute requirements — the CPU, memory, storage, and bandwidth. You define that, then you click a button, and the application is live in less than 30 seconds. Yeah, it definitely hides a lot of that complexity. Exactly. So that's kind of the goal, I think. We have tried to lean heavily on that, and it has paid some good dividends.
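For readers who want to see what that UI form boils down to on a plain Kubernetes cluster, the equivalent is a Deployment with an image, an exposed port, and resource requests. A minimal sketch using the official Kubernetes Python client follows — the image name, port, and sizes are placeholders, and this is ordinary Kubernetes, not StackOS's own API.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a cluster

# The container definition: image, exposed port, and CPU/memory requests and limits.
container = client.V1Container(
    name="web",
    image="nginx:1.25",  # the Docker image you'd paste into a UI form
    ports=[client.V1ContainerPort(container_port=80)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Create the Deployment; Kubernetes schedules the pods onto nodes.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```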
00:38:55
Speaker
Makes sense. Now you mentioned storage as well as CPU and memory, and I'm curious in this sort of decentralized cloud world, we talk a lot about persistence here, applications that need data to be persisted

Persistent Storage with Kubernetes and Web3

00:39:07
Speaker
across. How does that work in sort of the StackOS architecture?
00:39:11
Speaker
Yeah, great question. So the way Web3 is kind of structured is that it tries to rely more on applications not being data-centric, right?
00:39:29
Speaker
So what StackOS does is focus itself purely on the compute, while also giving you a way to add persistent storage at a cluster level.
00:39:43
Speaker
There is also an option which we are currently building, which would basically make data migration happen while the application is at runtime — that's a new feature which is coming. But right now, when you deploy an application, you can choose to have a persistence layer, and the persistence layer works out of that cluster.
00:40:04
Speaker
Now, as an extension to this, there is a feature which people can leverage, which is that you can use decentralized storage and use different
00:40:19
Speaker
services — I can't quite remember the name of the service — but you can actually mount those volumes, or the S3-equivalent services in Web3, in a way where you can use them as a file store. Just like a pod can mount local storage, people could mount an S3-equivalent service.
00:40:48
Speaker
So...
00:40:50
Speaker
That's how we allow people who want to leverage that architecture. And for the ones who require really quick response times and very high persistence as well, we just provide them with persistent storage which is connected to the pod. That's a straightforward mechanism — it's how this works natively in Kubernetes. Does that answer your question?
00:41:21
Speaker
Yeah, I think, you know, basically you take advantage of what Kubernetes offers natively, and you allow users to build on that. I think a follow-up question is: is there a world in which StackOS intersects with more decentralized data, like something like IPFS or content-addressable storage?
00:41:40
Speaker
That's what I was talking about. By storage for Web3, I meant IPFS — Filecoin, you know, works on IPFS. All these folks are our partners, actually: IPFS, Filecoin, Protocol Labs, which builds that. They're all friends of ours. And as I said earlier, when you have the storage layer, pods can mount directly on IPFS via Filebase, which gives you kind of an S3-bucket
00:42:10
Speaker
experience on IPFS, so your application or your pods can mount directly onto IPFS using Filebase. So that's also an interface, an architectural option, which they are provided with, and I think that's very simple.
00:42:33
Speaker
So for IPFS, I didn't know that IPFS had a way to work with Kubernetes. Is that something that you and the IPFS community have built together? And is it available as a custom resource for Kubernetes? I don't know how that works. Can you help me better understand that as well?
00:42:53
Speaker
Sure, sure. So when you talk about a pod, a pod basically is a service, right? And IPFS is, like S3, kind of a storage, right?
00:43:05
Speaker
So an application which is built can easily push data to IPFS, right? You can use any of the node services, like Infura, to push the data onto IPFS. So using it as an S3 bucket is very, very straightforward — it's just like an S3 bucket in AWS, for the ones who are not very familiar. You can just push your data directly, as you do with an S3 bucket.
00:43:34
Speaker
But I'm sure you're already aware of that. The second thing which I was referring to is that you can mount pods on AWS S3. I'm not sure if you're familiar with that, but you can mount it — there is a service which you can enable which allows that to happen. It's not very intuitive, but there's functionality for that. So that's an open source tool, actually, which we built out.
00:44:03
Speaker
So you can use that, right? And again, for that service to work, the storage has to present itself as AWS S3 — the requirement is that it should be S3-compatible, or have an S3-compatible API. So what it does is mount IPFS via Filebase. And Filebase is a kind of service in the Web3 space which presents IPFS as an AWS S3 kind of storage.
00:44:35
Speaker
Gotcha. So I think we would really appreciate if you can share some links so that people can look into this more. Absolutely.
00:44:43
Speaker
Yeah. I'm still having a hard time understanding how the translation works, but okay. So you're not using the Kubernetes persistent volume constructs — you are just using S3 buckets, or similar buckets with the S3 API, for your applications to persist data across these different zones or subnets. Correct. Exactly. So, you know, I'm trying to Google it as I'm talking with you — I think there are multiple tools, like S3FS-FUSE,
00:45:12
Speaker
It's one of those services which actually does that. Again, I'm just trying to Google it because I'm forgetting the exact name which we have been using. But these are all services which allow S3 to be used as a mount point for pods. And again, in our case, Filebase is a service which provides S3-like APIs on top of IPFS directly.
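The practical takeaway from the S3-compatibility discussion is that anything speaking the S3 API — including services that front IPFS with an S3-style endpoint, as Filebase is described here — can be used with ordinary S3 tooling, and FUSE-based tools like s3fs-fuse can additionally present such a bucket as a filesystem mount. Below is a small hedged sketch with boto3; the endpoint URL, bucket name, and credentials are placeholders, so check the provider's documentation for real values.

```python
import boto3

# Any S3-compatible endpoint works here; for an IPFS-backed service you would
# substitute the endpoint and credentials that the provider gives you.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-ipfs-gateway.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                       # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Write and read an object exactly as you would with AWS S3.
s3.put_object(Bucket="my-bucket", Key="reports/output.json", Body=b'{"ok": true}')
obj = s3.get_object(Bucket="my-bucket", Key="reports/output.json")
print(obj["Body"].read())
```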
00:45:40
Speaker
So the pods, mounted across different clusters or even subnets, all have the capability to fetch data from the same source. Okay. Gotcha. So for your development pipeline, do you use CI/CD environments at all?

Future Enhancements and Developer Onboarding

00:46:00
Speaker
How can people publish their applications to these StackOS subnets?
00:46:09
Speaker
So that is in the works — I think it's in the last testing phase — where people can directly
00:46:21
Speaker
integrate with their GitHub repositories, for example. For now, the way it is designed is that if you have an image, you just have to go to the UI, modify the tag, and automatically that version is deployed in a decentralized manner on the subnets and the clusters.
00:46:42
Speaker
The first phase of this next release is going to allow automatically pulling the latest image which has been pushed. So we kind of leverage the pipelines which teams already have to publish to Artifactory, Docker Hub, or any of these
00:47:05
Speaker
Docker repositories, and then the clusters will identify the change and automatically pull in the new version and deploy it. So that's an integration piece which is going to ship as the first part of the release. We are following that up with the next release, which is going to allow native integration with the repository itself. So as someone commits code, StackOS — where you have already provisioned
00:47:32
Speaker
resources, or some pre-provisioned resources — will be able to pull in the files, all the committed files, build them on StackOS, and then publish the image along with deploying that application on a subnet. So that's functionality which is coming in the second release.
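The "modify the tag and the new version rolls out" workflow described above maps onto a standard Kubernetes image update. Here is a minimal sketch with the official Kubernetes Python client — the deployment name, namespace, and image are placeholders, and StackOS's own pipeline presumably wraps something equivalent rather than exposing this API directly.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Patch only the container image (the container name must match the existing one);
# Kubernetes then performs a rolling update of the pods to the new tag.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "web", "image": "registry.example.com/web:1.4.2"}
                ]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```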
00:47:56
Speaker
Yeah, I imagine that's going to be something that's widely used, especially with this concept of sort of taking away the complexity of Kubernetes and how things are scheduled. That environment for developers is going to be key. I think I could sit here and talk to you about this for another two hours, because it's a very interesting topic. We're getting towards the end of our time here. And I want to make sure we give a chance for you to tell us where folks can get started, whether they're a user,
00:48:24
Speaker
developer or a consumer of StackOS? First of all, I'll tell you the biggest challenge we have: it's a decentralized network, so anybody can just go deploy applications. We had about 1,557 deployed applications on StackOS as of this morning. Now, the problem is that,
00:48:52
Speaker
being anonymous, people just go deploy and they don't reach out to us, right? So we would love for people to really connect and engage with us — help us grow, help us understand what your pain points are, so we can actually work those out.
00:49:10
Speaker
But again, the easiest way to reach out to us will be using our Twitter or our Telegram channels and also Discord. I'll probably share that with you, the links with you. And what we can do is people can just get in there, ask the questions they have to and stuff.
00:49:33
Speaker
However, for the ones who don't want to reach out to us directly all the time, they can use the docs at stack-os.io. That should take them to our documentation as well. Again, I think it's d-o-c-s dot stack-os, s-t-a-c-k-o-s, dot io,
00:49:58
Speaker
and that should take them to the documentation which will help them go through the process of deploying an application on StackOS. We are also coming out with the StackOS Academy, which is going to be an entire experience of, you know, hand-holding you through deploying an application on StackOS.
00:50:17
Speaker
Also, people who are interested in building in the web3 space and helping us grow along with you guys as well, we do have bounties for supporting and contributing to our network. So again, I think for that, the best place to reach out is Discord channel. It's moderated by several people and we'll be happy to address some of the questions and maybe help you grow as well.
00:50:43
Speaker
Yeah, I think that sounds great. It sounds like one of the biggest features is also a pain point for you — anonymity. In the culture of community, I think it's great to go reach out and ask for feedback. Hopefully our listeners are intrigued. And if you are using StackOS, please reach out to Vishnu or us and we'll connect you, whatever works. And we'll put all the links in the podcast show notes as well so you can get to all the things that Vishnu
00:51:12
Speaker
just mentioned. Vishnu, I'll end it here and it was a pleasure having you on the show. A really interesting topic and maybe we'll follow up this with another podcast in the future.
00:51:23
Speaker
Thank you so much, Bhavin and Ryan. It was a pleasure speaking with you guys as well. And thank you for hosting this — it was an amazing conversation. Honestly, I can tell you this: nobody has gone into the details of StackOS as you have. So I really appreciate you spreading the word here. And of course, I'll also share the documentation for the people who are interested — documentation about how you can mount
00:51:48
Speaker
a pod to an S3 bucket — mounting it, not using it as object storage, but as another kind of store. So yeah. Perfect. All right. Take care then, and we'll talk soon. Bye, guys. Thank you.
00:52:03
Speaker
Okay, well, that was really enjoyable for me. I think Vishnu had a lot of interesting things to say about how the intersection of what they're doing with StackOS and Kubernetes relates to the ecosystem that we've been used to, especially the blend between Web 2 and Web 3, which I saw most obviously when we were talking about Kubernetes in that Web 3 space. I don't know about you, Bhavin, but what did you think about the conversation?
00:52:32
Speaker
No, I think it was a really great episode to start anybody's journey with Web3. I'm pretty sure people who are playing buzzword bingo at home while listening to this podcast have got all of those terms — Web3, DAOs, decentralized autonomous organizations, blockchain, Ethereum, Polygon, all of those. Yeah, we did a great job. I think if we put these hashtags on, this will be our most popular episode out there.
00:52:58
Speaker
But a Web3 bingo, a Web3 drinking game, whatever one's your favorite. That sounds more fun. But I really like the concept of decentralized cloud, decentralized infrastructure and using Kubernetes as that base, right? Like it's no longer talking about renting out physical servers or using physical servers or using virtual machines. It's about Kubernetes being that starting layer and allowing users to just deploy their applications on specific subnets
00:53:28
Speaker
so that there are no data silos, no infrastructure silos. It helps you spread out those things across different CDNs or subnets. That definitely was an interesting use case for me. Instead of you building your applications for VMs, just build it on containers and the future of infrastructure looks something like this, where you have Kubernetes clusters that are available to you to just start deploying. So that was definitely an interesting takeaway for me.
00:53:54
Speaker
Yeah, absolutely. And generally with this space, I think there's something that most of us can relate to, which is our feelings toward the centralized organizations that own a lot of our compute and data, right? The Facebooks, the Googles, the Amazons of the world. Is that the future? How do we feel about that? It kind of blends this socio-topic with technology. And I think that's what's very intriguing about decentralized cloud.
00:54:23
Speaker
And then blending in, I think, with a lot of what we understand with Kubernetes and how Kubernetes is being used for the compute layer adds a very real aspect to being able to relate to it. Not only do I think that part of it is very interesting, but I think it was a very smart choice for them to use something like a Docker image as a way for entry point.
00:54:46
Speaker
Something that I think the Web3 community works very hard at is getting people who are new interested, right? It's a lot to understand. I think it's pretty different in terms of how we think about Web2. So taking something like
00:55:03
Speaker
this infrastructure-as-code slash Kubernetes cloud native ecosystem and saying, oh, you can get started with a Web3 technology, and a company like StackOS providing these services by just using a Docker image, lowers a lot of barriers. Even in my head, I was asking, how would I use something like this? And when he said Docker image, I was like, oh, I've got that, right?
00:55:25
Speaker
So that's a great way for, I think, them to connect, or at least connected with me in that way. And then a lot of the conversations around how StackOS uses Kubernetes and how they actually tie into a lot of the persistence angles that we talked about on this show, not only being able to mount your storage and use a directory within that container to persist data, but also
00:55:52
Speaker
where that world is going, right? We understand today the CSI ecosystem, the plugin ecosystem, the software-defined storage ecosystem, the cloud native storage ecosystem — we've talked about them a lot on the show — but there's a future where things aren't so tied to the infrastructure, right? A lot of the abstractions Vishnu talked about, around many Kubernetes clusters, how do you actually fetch the data that your application needs, and what does that look like? So
00:56:21
Speaker
I thought that was a really interesting conversation around the future of content-addressable storage and IPFS, and how that may work with concepts we're familiar with today, like S3FS-FUSE and being able to mount buckets as file systems and things like that. It's a space I know I'm going to take a look at and really pay a lot of attention to.
00:56:39
Speaker
Yeah, this for me was definitely a learning experience. I have not paid attention to this ecosystem outside of blockchain-based information, and I'd never spent time thinking about how these two connect together. That's why I asked so many questions of Vishnu this time, just to get more details out of him. But yeah, I think this is something that we definitely need to follow up on and see what other interesting use cases exist.
00:57:05
Speaker
Yeah. I would love to have Vishnu back on the show, that's for sure, and ask a lot more questions. Hopefully he can handle that — I'm sure he can. Very awesome to have him on the show. Cool. So we have an exciting next couple of weeks, as Bhavin mentioned earlier.
00:57:23
Speaker
KubeCon is coming right around the corner and we're going to do a different than normal sort of episode during that week. And hopefully we'll be able to put it out sooner than later, but bear with us. We're doing sort of a live from KubeCon thoughts from folks on the floor, thoughts from us.
00:57:42
Speaker
We'll put it together as soon as we have it available. If you are at the show, come find us. I know a couple of folks have reached out to Bhavin and me to say, hey, we'd love to meet you guys. We'd love to meet you too. So please come find us somewhere at the show, or ping us, DM us, whatever it is. We'd love to have you on the show and talk all things KubeCon. And with that, that brings us to the end of today's episode. I'm Ryan. I'm Bhavin. And thanks for joining another episode of Kubernetes Bytes.
00:58:14
Speaker
Thank you for listening to the Kubernetes Bytes Podcast.