
Dagger.io Deep Dive with Co-Founder Sam Alba

S4 E18 · Kubernetes Bytes

In this episode, we dive into the challenges of modern CI systems and why they often hinder productivity. We explore Dagger, a programmable CI/CD pipeline engine, with insights from Sam, a former Docker engineer. Learn how Dagger addresses CI complexity, speeds up workflows, and enhances portability between local environments and CI.  

Show Links

  • Dagger.io 
  • https://docs.dagger.io/  
  • https://docs.dagger.io/adopting/#join-our-discord
Transcript

Introduction to Kubernetes Bites

00:00:03
Speaker
You are listening to Kubernetes Bites, a podcast bringing you the latest from the world of cloud native data management.

Meet the Hosts: Ryan and Bhavin

00:00:09
Speaker
My name is Ryan Wallner and I'm joined by Bhavin Shah, coming to you from Boston, Massachusetts. We'll be sharing our thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud native ecosystem.

Boston Greetings and AI Jokes

00:00:30
Speaker
Good morning, good afternoon, good evening, wherever you are. We're coming to you from Boston, Massachusetts. Today is September 20th, 2024. I hope everyone is doing well and staying safe. Bhavin, I'm back. Yeah, I know, I'm so pumped. How's it going? How about you? I'm doing good.
00:00:47
Speaker
Like I told you before we hit record, doing the podcast alone isn't as much fun as doing it with you. So, man, I missed you over the past few episodes. I'm glad you're back. I still think you should have had a big head, just a printout version of my head or something. Okay, maybe next time, when there's only one of us on a show, which I'm sure will happen again, I'll get a printout of Bhavin's head so that you can still be there.
00:01:16
Speaker
At this point we have, what, 80 episodes? We could train an AI model to just, I don't know, say something about national parks if I'm not there, or say something about mountain biking. Here's our 80

Episode Preview: Cloud Native News

00:01:30
Speaker
episodes. Yeah. Mr. Chatbot, can you make a persona for Bhavin and have him talk about the woods? That is scary. Like, all of our voices are out there, right? Oh yeah. There's a startup called ElevenLabs. They can mimic your voice. So they could actually use this as training data and come up with a Ryan voice. Oh yeah. I have a code word with my daughter, or a code story technically, where, yeah, it's not anywhere out there, so the theory is no AI knows about it. So: tell me that story. And if you can't bring it up, you're fake, you're AI, I mean,
00:02:11
Speaker
Or she just doesn't want to say it, but yeah, okay. Yeah, exactly. I'm just gonna end up hanging up on people I love, but you know, I'll know whether or not they're AI.

IBM's Strategic Acquisitions

00:02:21
Speaker
Better safe than sorry, dude. Oh, man. Anyway, we have a pretty good episode lined up for you today, but first we're gonna break into the news, and it's gonna be a little bit of both of us this time. Yes! Why don't you kick us off, though?
00:02:39
Speaker
Yeah, I did want to start with a couple of acquisitions. We finally see some movement, right? We saw the Fed cut interest rates this week, and now suddenly people are like, okay, let's get the ball rolling. So the first acquisition that we saw was Kubecost getting acquired by IBM. Kubecost, again, we have had Sean on the podcast to talk about their Kubecost and OpenCost work and how they help customers with Kubernetes cost management and all the different features. So if you want to learn more about what they actually do, go back and listen to that episode. But they're now part of IBM's FinOps suite of products, and IBM has been doing this, right? They have acquired a couple of different companies, Turbonomic and I think Apptio,
00:03:21
Speaker
that provide these cost-management kinds of controls to users. I think they are just adding Kubecost to the same portfolio, specifically for Kubernetes. The previous solutions didn't do as much for Kubernetes. So a good exit for Kubecost. I know they were a young company. They definitely had partnerships with AWS and Azure. So this one just makes sense to me. IBM expands their multi-cloud portfolio with Red Hat

Veeam Expands with Alcion

00:03:49
Speaker
and then this FinOps portfolio as well.
00:03:51
Speaker
Yeah, you know, IBM has been making a lot of moves. And this one kind of surprised me. I think it's awesome for those folks. I don't know what episode they were on here, but they were up to a lot of really awesome stuff when it came to
00:04:11
Speaker
integrating for cost. And I think that's a metric we've taken note of: complexity, cost, security, right? So it does make sense for bigger companies to start tackling these kinds of problems. So congrats. IBM just shouldn't mess these acquisitions up. I know they are in the process of acquiring HashiCorp. They didn't mess up Red Hat; Red Hat still does its own thing and does it really well. So I have hope that this company doesn't get messed up either. Well, yeah. And I know you mentioned it already, but the way they worded it, it's not IBM; they're acquired by Apptio, an IBM company. Yeah. So it's really targeted at wherever Apptio is settling in on their IBM journey.
00:04:58
Speaker
No. So that's the first acquisition for me. The second one is Veeam acquiring a really small startup. I think they were just seed funded. Oh, no, they

Percona's Everest and Multi-cloud Challenges

00:05:07
Speaker
were Series A funded, called Alcion. So the backstory is, if you don't know, Alcion was co-founded by the Kasten co-founder. So the Kasten co-founder sold their company to Veeam in 2020, left Veeam a couple of years later, started a different company, and then sold it back. Man.
00:05:23
Speaker
Yeah, selling your startup to the same company twice in a span of four years, that's awesome. One thing to note here is I think Niraj, who was the CEO of Alcion and of Kasten as well, will now join Veeam as the CTO for Veeam. So again, for people that are not tracking Veeam closely, this might be interesting, because the Veeam CTO left Veeam six months back and went to Snyk, which is a vendor in the Kubernetes security ecosystem. So they had an interim CTO for the time being. And I think this Alcion acquisition looks more like an acqui-hire, where they're like, we need you to come back, we need you to be our CTO. I have to say, I haven't read anything about what they bought them for. Yeah, they haven't disclosed a price, I think.
00:06:18
Speaker
Yeah, so the Series A funding that they raised was around 21 million dollars. And even when they announced the Series A, Veeam was a major investor in them anyway. So this was, as you said, more of that kind of thing where, oh yeah, we already own
00:06:33
Speaker
5% of the company, and now we're just paying all the other investors off and giving you some extra money to join us. But no, a good addition to the Veeam cloud data protection portfolio. Alcion was trying to do something cool with Microsoft 365 backups and AI-enabled backups and all of those things. So yeah, Veeam is just growing stronger and stronger by the day. Yeah, I love what they've been up to. Honestly, Veeam has really changed themselves to be super relevant with a couple of these. Yeah, it's not a VMware backup tool anymore, right? Yep. Yeah, exactly. Which is hard to do, especially since Veeam's been around a long time. Oh, yeah. Cool stuff.
00:07:12
Speaker
Okay. And then the third piece of news for me isn't an acquisition or seed funding or a funding round, but Percona, a vendor in the database ecosystem. We spoke to their previous CEO, Peter, about how Percona was providing operators for MySQL, MongoDB, and Postgres.
00:07:32
Speaker
Yeah, but Gabriele is at EDB, still part of the same community, but yeah. Competition, man. Come on.

CNCF's Artifact Hub Introduction

00:07:43
Speaker
So Percona announced Everest, which I think is a cool, UI-based tool that you can deploy in any Kubernetes cluster. Again, this is just a couple of lines right from their website: it is a multi-cloud tool that can run on-prem as well.
00:07:57
Speaker
It's UI-based; once you have the UI deployed, you can select whether you want to deploy a MySQL instance, a MySQL cluster, a Postgres cluster, or a MongoDB cluster. It then uses those operators under the covers and deploys those databases. Because it is from Percona, they have their own set of monitoring and management tools called Percona Monitoring and Management, and they have obviously integrated Everest with those as well. So if you are a Percona customer,
00:08:24
Speaker
I think this is right up your alley. One cool thing is the entire thing is open source. They have open sourced the entire platform. So if people want to actually look at how they're doing things, they can as well. So, an interesting announcement in the data and Kubernetes space.
00:08:38
Speaker
Yeah, absolutely. I mean, any multi-database tool like this that isn't behind a huge paywall is awesome stuff, and MySQL, Mongo, and Postgres are obviously very, very popular. I still think, and I'm really speaking to our listeners here, I really want to talk about more multi-cloud, multi-cluster use cases, because I feel like we've been talking about that topic for years.
00:09:00
Speaker
And really, I don't know, maybe we just haven't had a lot of specific use cases or things like that. But I feel like it's been this... is it a pipe dream still? Are people doing this? I mean, some people are, I know that, but anyway. I think some, yeah, but it's not a majority. I haven't seen a lot of customers doing it.
00:09:17
Speaker
They might be deploying the same instance or same application across multiple clouds, but you don't have a stretched deployment, like the one we were talking to Patrick from DataStax about, around Cassandra and how they do these multi-region deployments for Cassandra. It's complicated. Yeah, it is. And this is something I remember: you had mentioned the CAP theorem. At some point you have to give up on something, right? So you can't have a multi-region or multi-cloud thing and still have the consistent availability that you need. So yeah, trade-offs. Life is full of compromises, including our professional lives. Those were all the news

Interview with Sam Alba from Dagger

00:10:00
Speaker
articles from my end.
00:10:00
Speaker
Cool, I just had one more to add, which is Artifact Hub. It's something I wasn't super familiar with, and it's now a CNCF incubating project. If you're not familiar with Artifact Hub, it really was created to bring together all the different types of cloud native artifacts, whether that's a Helm chart or some kind of policy
00:10:19
Speaker
or, you know, a container image. It really goes toward that. It reminds me of what Artifactory was in, like, the Jenkins-popular era, so to speak. But anyway, tons of different artifact types, I think it says 20-plus, that it supports; you can go check out their documentation. Again, I've never used it myself, but it does seem like a really helpful thing, because we are dealing with a lot of different things, and whether that's in a GitHub repo or a Docker registry, or like, do we just shove it in a container because we want it to be there? I think it's pretty cool, so check it out. I was reading through the list from the link that you had shared, and man, it covers everything from Helm charts to CoreDNS plugin repositories to Backstage plugins and Argo templates. So it's not just your container images. It's not your grandmother's Docker Hub, I guess. That's right, it's not your grandmother's Docker Hub.
00:11:17
Speaker
Also, there's a bunch in there I don't know, like Tinkerbell actions. What's that? That sounds fun. And then, I don't know, what was another one? Inspektor, with a K, Gadget. Measure your gadgets. Like, that's fun. What are these things? I want to know. Anyway. Yeah, if you are working on those projects, reach out, right? Yeah. Come tell us about your Inspektor Gadget and your Tinkerbell action.
00:11:40
Speaker
Nice. Sometimes I think this isn't real life, but anyway, fun stuff. You know, I've been a fan of really good naming conventions on this podcast. I'm just surprised you haven't brought up Hugging Face yet. No, we've talked about Hugging Face a little on this podcast. Oh, you're right. It is an artifact hub of some sort. Yep. All right, fine. All right, that is the end of our news segment. We do have a really awesome episode today. We have our guest, Sam Alba, co-founder of Dagger. You might have heard of Dagger before. It's all about CI. We'll let Sam dig in and tell us all about it. Some of the folks from Docker are running
00:12:22
Speaker
Dagger now. So I hope that interests you. And without further ado, let's get Sam on the show. All right, welcome to Kubernetes Bytes, Sam. Please introduce yourself: who you are and what you do. Hello, yeah, thanks for the opportunity. So I'm Sam Alba. I'm a co-founder of Dagger. So we launched the product in 2020.
00:12:50
Speaker
And before that, before starting Dagger, I spent almost a decade with my co-founders Solomon Hykes and Andrea Luzzardi at Docker, building Docker and everything around it. Yeah. Awesome. No, we are so glad to have you on the podcast, Sam. You have been part of this; you guys built this whole ecosystem up with Docker. So I'd love to get your thoughts on the topic of discussion today. Let me start there, right? Let's talk about CI, or continuous integration. What is broken with CI today that led you and Solomon and all your co-founders to start Dagger? Can we talk about what's broken, what works well, and where Dagger helps? Yeah, absolutely. So I'll tell you the story of how we started Dagger and how that idea came to mind.
00:13:44
Speaker
Basically, when we started Dagger, early 2019 after leaving Docker, we really wanted to work together. We were not really interested in fixing CI in particular.

Dagger's Programmable CI/CD Vision

00:13:59
Speaker
What we started to do was to talk with a lot of people. We talked with people having challenges with their infrastructure, with their deployments, with running applications at scale. We were really looking for something that we could help with, and not necessarily linked to containers. So what happened is we basically did hours of interviews. We talked to a lot of different people. And over time, there was this problem that was coming back over and over again. We could almost guess what people were going to say next at some point.
00:14:41
Speaker
And that problem was really around CI, CI/CD, internal platform automation. People have different terms for the same problem. And that problem is really: how do I bring my application to production? What do I need to do and automate after my hands lift off the keyboard, after I'm done coding? What happens next? We really did a lot of research there, and what we found out was that everyone is pretty much pissed off with their CI. The CI system has so much importance today, but we often talk about the apps and not often about the CI.
00:15:31
Speaker
Except when CI becomes the problem and is slowing down the team — then CI becomes the topic, but it's usually a bit late. And so what we thought was: it's kind of insane that you have access to all of those development tools, build frameworks, deployment tools, managed services, to run your application on any sort of infrastructure. But when it comes to linking and gluing all of these things together, you have pretty much Bash, and that starts with workflows in YAML
00:16:10
Speaker
that I'm sure pretty much everyone listening to this has been playing with. And so, yeah, we thought that CI needs to be programmable. That's really what Dagger brings. Dagger is a CI/CD pipeline engine that you can program with the same language that you usually use for your application.
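To make "CI as code" concrete, here is a minimal sketch using Dagger's Go SDK. It is illustrative only; the base image, excluded paths, and test command are assumptions, not details from the episode.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the Dagger engine; build logs stream to stderr.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Load the project source from the host, skipping what we don't need.
	src := client.Host().Directory(".", dagger.HostDirectoryOpts{
		Exclude: []string{".git"},
	})

	// Run unit tests inside a container -- the same step runs locally and in CI.
	out, err := client.Container().
		From("golang:1.22"). // base image chosen purely for illustration
		WithDirectory("/src", src).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because the pipeline is an ordinary Go program, running it on a laptop and running it in CI is the same invocation (for example, something like `dagger run go run ./ci`), which is the portability discussed next.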
00:16:38
Speaker
OK, so I think you guys took a different approach, right? You fixed things with containers — you saw the application side of it: OK, this is a consistent way to package your application code and get it to production. But then, based on the user interviews that you mentioned, people were still having issues getting it to production. Like, OK, I can package it up, but how do I get it to production? And Dagger hopefully solves for that. So no, that's great. Ryan, go ahead with your question.
00:17:06
Speaker
Sure, yeah. So I want to set the stage a little bit, because I want to talk more about how Dagger works and those kinds of things in a bit. But I do want to talk about — and I'm sure this varies across different customers and use cases — what does a typical CI pipeline look like when you walk into the mess that is, you know, lots of Bash and various APIs and those kinds of things? I want to set that visual component of what it looks like, and talk a little bit about push and pray — I know you mentioned that before — so that when we dive into how that changes, it makes a lot of sense. Yeah. Well, the first difficulty is:
00:17:49
Speaker
All pipelines are different.

Exploring Dagger's Features

00:17:50
Speaker
You cannot transcribe one pipeline from one company to another. That never happens. That's why CI code is never reusable. When you're a DevOps or platform engineer and you work for a company, you're in charge of the automation. And by the way, that's another problem — small parenthesis — it's actually a different role. It's not developers; developers need to be focusing on their app. It's usually a different role, as the company gets larger,
00:18:19
Speaker
that deals with all of this automation. And we actually think that developers themselves will be able to work on and program that automation, as well as their app.
00:18:30
Speaker
So what does a typical pipeline look like? They usually have several things in common. Usually it starts simple. You can take the example of a pipeline where, you know, I want to take my code and build a container, for instance, that I'm going to run on Kubernetes later on. And before I build that container, I want to trigger the run of my unit tests.
00:18:59
Speaker
And I want to run not just unit tests, but also integration tests, a linter. So you have several steps that you want to apply, and you create a graph of dependencies that we usually call a DAG — a directed acyclic graph. Hence the name of Dagger, by the way: it's a tool that creates DAGs. So yeah, those pipelines are oftentimes different — different types of languages, different types of tools that you need to use. But they have something in common, which is:
00:19:38
Speaker
well, they don't usually run entirely on a local machine. So what it means for developers is that you need to open a pull request. You run a bunch of things more or less manually, locally. And then you file the pull request, and this is when things happen. This is where the whole integration happens, really.
00:19:59
Speaker
And this is when you pray that it's going to work exactly like it works on your laptop. So that's exactly the push and pray you mentioned, that we often talk about. That was coming from one of our users. One day a user told us: oh, actually, Dagger removes that push and pray. I can basically have the guarantee that things are going to run on my CI exactly like they run on my machine, because the CI can actually be run on my local machine. So that's one thing
00:20:31
Speaker
that pipelines without Dagger share, which is the centralized infrastructure. Another aspect is reusability. I mentioned that, but oftentimes you cannot reuse particular pieces of automation. Dagger solves that by providing modules, so you can install a module that extends your pipeline. A module can be — well, I took the example of a linter — or like an Argo CD integration, or a Terraform module that gives me the ability to manage some infrastructure or grab some information from the infra. So yeah.
00:21:17
Speaker
Yeah. So before we head into the discussion of Dagger's different components, I wanted to get a one-line overview of what Dagger is, and then we can talk about what Dagger Functions are, what Dagger Cloud is, what the Dagger Engine is, and how all of these things fit together to help solve all the different stages that you listed out — the unit testing, the linting, everything that needs to be done before code can be promoted to production. Can we start there? Take a step back, talk about what Dagger is and what the different components are.
00:21:52
Speaker
Yeah, for sure. So Dagger — something I didn't mention — Dagger has two main components right now. First, an engine: that's the programmable CI/CD engine that I was describing. It offers three different SDKs right now that we officially support: TypeScript, Python, and Go. Basically, what an SDK gives you as a developer is access to all the primitives of the Dagger engine: creating containers, installing modules, calling functions on another module, but also doing direct calls like, yeah, I want to build that Dockerfile, fetch that Git repo — all of those primitives. So the engine is open source.
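As a rough sketch of those primitives in the Go SDK — the repository URL, branch, and registry below are made-up placeholders — fetching a Git repo, building its Dockerfile, and publishing the result looks roughly like this (it slots into the `Connect`/`Close` skeleton from the earlier sketch):

```go
// Fetch the main branch of a Git repository as a Directory.
src := client.Git("https://github.com/example/app.git"). // hypothetical repo
	Branch("main").
	Tree()

// Build the Dockerfile found at the root of that directory.
app := src.DockerBuild() // roughly `docker build .`, executed and cached by the engine

// Publish the built container; ttl.sh is used here only as an example of a throwaway registry.
addr, err := app.Publish(ctx, "ttl.sh/example-app:1h")
if err != nil {
	panic(err)
}
fmt.Println("published:", addr)
```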
00:22:38
Speaker
And then later on, if you need visualization of your whole pipeline, and also a centralized cache, there is a cache service that we provide as part of Dagger Cloud.

Integrating Dagger with CI Systems

00:22:53
Speaker
So Dagger Cloud is a service that you can enable later on. That part is closed source.
00:22:59
Speaker
Yeah, so the way it works, usually, the way you start as a user is: you start with the Dagger engine on your local machine, start assembling your pipeline, and dig into solving the particular automation problems that you have with your CI. Oftentimes, we have users with an existing CI, and they don't want to replace everything and move over.
00:23:31
Speaker
So you don't have to. You can start simple. And then you basically integrate — you run Dagger from your CI infrastructure. That's how the whole onboarding flow happens.
00:23:51
Speaker
So you brought up reusability and even the ability to run CI pipelines on a developer's laptop before they run somewhere else for a later part of the workflow. How is this actually running? What do I need to run the Dagger engine? If it's running on my laptop, do I need to run it as containers? Or do I need to point it to an engine that's hosted somewhere by my platform team inside my organization? How does that work from an infrastructure perspective? Yeah. So as soon as your pipeline runs locally and you're fine with the result, then it comes to: okay, now I need to run it as part of the centralized CI that I usually already have, even before Dagger. So you can run Dagger as part of your CI. Dagger is totally agnostic
00:24:45
Speaker
from the rest of your CI infrastructure. Some people run Dagger directly on bare-metal machines. For instance, I'll tell you how we run Dagger at Dagger, because we use Dagger for shipping, testing, and building Dagger. We use a GitHub Actions runner controller on a Kubernetes cluster running on Amazon.
00:25:09
Speaker
Okay. And so here we basically leverage GitHub for triggering the workflows, basically saying: hey, I want to run that Dagger pipeline when there is a change on main, for instance, on the main branch, or I want to run that
00:25:25
Speaker
pipeline when there is a new pull request, for instance. So there is a full integration with GitHub. And then, basically, the whole execution is done by Dagger. But you can run it alongside the rest of your CI. So technically, in the case of GitHub Actions, for instance, you could run specific things on GitHub Actions and have one or several Dagger pipelines alongside it.
00:25:54
Speaker
And the same applies elsewhere — I was taking the example of how we do it for ourselves, but we have users using Jenkins, CircleCI, GitLab runners. So yeah, basically Dagger doesn't enforce how you design your CI system. That's the whole thing, too: we didn't want to introduce an entirely different way, telling people, hey, you have to replace everything you did so far. Instead, we wanted to provide primitives, so we could save people time and make that CI reusable, running locally, and faster. Also — I didn't mention this — we could dig into how we make it more performant as well.
00:26:44
Speaker
Got it, got it. Now, I know you mentioned the various components of Dagger. One thing I did have an interest in, and I was just curious myself: are there best practices for writing these Dagger Functions that interact with the various places you can run them, whether it's on your laptop or not? How much do you shove into a Dagger Function,
00:27:09
Speaker
I guess, is really the real question. Yeah, that's an interesting question, because a function, technically — a function runs inside a container. First of all, a Dagger Function is nothing other than a function in your language of choice when you create it.
00:27:30
Speaker
So you write a basic function in your code, and that function will be run by Dagger, and it will run inside a container. Now, the tricky thing is a function can call other functions, and functions can also call other functions that are within modules. When you make your first dagger call, that function is the entry point of the pipeline. So you can think of a function as more or less a node in the graph that I described earlier, but also as the entry point of the pipeline. This is when you can decide, for instance, what the entry point is and what the inputs and outputs of your pipeline are. And that's different for everyone. The input can be
00:28:31
Speaker
a code repository, and the output can be an OCI image that has been published to a registry, for instance. But for other people, an output could be the result of my tests, for instance, right? So yeah, Dagger gives you the primitives.
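A module-style Dagger Function with exactly that shape — a source directory in, a published image address out — might look roughly like the sketch below. It assumes a module scaffolded with `dagger init --sdk=go`, which generates the types imported here; the module name, function name, and registry are illustrative, and the exact import path depends on the generated scaffolding.

```go
package main

import (
	"context"

	"dagger/my-module/internal/dagger" // import path comes from the generated scaffolding
)

type MyModule struct{}

// Publish takes a source directory as the pipeline's input, builds its
// Dockerfile, and returns the address of the pushed image as the output.
func (m *MyModule) Publish(ctx context.Context, source *dagger.Directory) (string, error) {
	return source.
		DockerBuild().
		Publish(ctx, "ttl.sh/my-app:1h") // throwaway registry, for illustration only
}
```

From the CLI, such a function could then be invoked with something like `dagger call publish --source=.`, which matches the entry-point behavior described above.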
00:28:50
Speaker
Then, to answer your question on what should fit in a function and what should not: it's pretty much the same question as when you organize your code and you write some Go or Python — what should I put in my function? Well, the best practices are, obviously, keep them simple, reusable, not too many things in there, but technically there is nothing that prevents you from making them super large. Yeah, I'm just picturing the same problem we saw with how much to shove into a container image. You'd just be like, whoa, your Dagger Function is doing too much. So you kind of mentioned you can have inputs and outputs, and functions can call other functions — I think I saw that, or maybe that's called chaining in Dagger land. But are there best practices for how you understand which functions are being called
00:29:40
Speaker
when you might be calling something you didn't necessarily write, or you're consuming from somebody else? Yeah, I think what's important is to separate the code that you want to be reusable from the code that you don't necessarily want to be reusable. Like, for instance,
00:29:58
Speaker
to give a particular example — oh, and one thing that's important too: a function does not necessarily map to a container.

Designing Efficient Pipelines with Dagger

00:30:07
Speaker
So regarding what you said about how much you could put into a container: you can actually instrument several containers from a function. And so, to give an example of reusability, for instance, there is this
00:30:26
Speaker
golangci-lint that we use, for instance, for linting the Go code. It's basically a wrapper on top of other linters. When we use it as part of Dagger, there is nothing specific to Dagger. So it makes sense at some point to have a Go linter module,
00:30:42
Speaker
because we know that we're going to reuse it somewhere else. Then, on the other side, when we build Dagger in a certain way and we call other modules, there are some things that are very specific to Dagger and the way it gets released, the way it gets tagged and versioned. So at some point we have some functions that we know are not reusable, and that's totally fine. It's really a question of having the tools so you can make the same kind of architectural decisions that you would make when you build your application. That's exactly the kind of thing you cannot do when you have all of this complexity in a large YAML workflow on any CI system of your choice.
00:31:30
Speaker
Usually, none of it is reusable and it's hard to debug, et cetera. That's really the advantage of making this pipeline programmable, in your opinion. I guess on that point, you know, you mentioned sometimes there are CI teams and then there are application teams that probably write their own CI to some level.
00:31:53
Speaker
You know, maybe they're using YAML, maybe they're Bash masters, as I like to call them. But do they ever resist needing to take on learning Python or Go or TypeScript for this kind of benefit that they get? Yeah, for sure. I mean, there is usually
00:32:14
Speaker
some tension sometimes between certain ops teams and some development teams. And yeah, we see that. I think what's important, when you design a product like Dagger, is to keep in mind the two types of profiles. There are developers writing pipelines, and they sometimes have different needs in terms of UX — what they expect from the CLI. And then you have the operators, who basically take a pipeline and have to run it at scale.
00:32:46
Speaker
And sometimes they have different needs. In small companies, it's sometimes the same person as well, and that's totally fine. So to answer the point on the Bash masters — and I actually like that term — we wanted to make Bash a first-class citizen. Technically, when you write a Dagger Function in any language of choice, you can actually call that function with arguments from the CLI. Actually, in our docs, we give examples for: hey, this is how it works in Python, in Go, and in your CLI. And it's very important that Dagger can
00:33:39
Speaker
integrate well with the existing tools that you have. And if you have a gigantic orchestrate-deployments.sh that you want to maintain, you can actually have some Dagger code in there that calls other functions. OK. Sam, to better understand this, I want to ask you a very generic question, and I know the answer is always "it depends" from organization to organization. What would a typical DAG look like? What are the different nodes in the graph that form a DAG? Let's start from a developer that's writing some code on their laptop. How does it get to production? What are the different steps? And are all of these different nodes defined as functions? Is that the way to think about Dagger and Dagger Functions? Yeah, absolutely. To your last question, yes — I think
00:34:37
Speaker
thinking about each step as a function is the right way to think about it. And talking about each step: that's the part that's really specific to each company. So you're right that each organization has a different answer. Usually it comes down to: what's your product? What do you deliver? What's the automation process?
00:35:01
Speaker
The input is pretty much always the code. The input of the pipeline is that there is a code change. Ideally, you don't want to rerun the pipeline entirely for every single code change, because there might be some parts of the pipeline that don't need to be rerun or re-executed. So Dagger solves that with caching.
00:35:25
Speaker
So the idea is, Dagger knows which nodes need to be re-executed with each run. So if you run it several times, over and over again, and your pipeline is pretty large,
00:35:36
Speaker
you have all of those nodes, and you don't need to rerun all of those containers over and over again. When no change is needed, the output of a specific node can be taken from the cache directly, and you don't have to care about that complexity. Then in terms of specific DAGs, we actually talked about a few: linting, running integration tests, running stress tests. There is also oftentimes integration with CD components — Dagger doesn't aim to replace any of that; it basically integrates and provides modules so you don't have to rebuild the integration yourself.
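The per-node caching described above is automatic. Separately, the SDKs also expose explicit cache volumes for things like dependency caches; a rough sketch in the Go SDK, reusing the connected client from the first example (the volume name and mount path are assumptions):

```go
// A named cache volume persists across pipeline runs.
goModCache := client.CacheVolume("go-mod-cache")

out, err := client.Container().
	From("golang:1.22").
	WithMountedCache("/go/pkg/mod", goModCache). // module downloads survive between runs
	WithDirectory("/src", client.Host().Directory(".")).
	WithWorkdir("/src").
	WithExec([]string{"go", "test", "./..."}).
	Stdout(ctx)
if err != nil {
	panic(err)
}
fmt.Println(out)
```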
00:36:24
Speaker
A good example is Argo CD, which we use internally at Dagger. We use it for rolling out. So for the pipeline, I'll take — and I don't know if it's a good example — how we do things at Dagger. Yeah, that would be awesome.
00:36:38
Speaker
So when we build Dagger: you make a pull request to Dagger, as anyone can — it's open source. Basically, there is a Dagger pipeline that gets kicked off, and we will run the easy part first. Ideally, you want your pipeline to fail fast.
00:36:58
Speaker
So if there is something that goes wrong, you would like to know fast. You don't need to execute the whole thing. So usually you start simple with the static evaluation, like linting. You run that early on the code, making sure it builds. At some point, there is something that builds the binaries, if it works.
00:37:19
Speaker
Given that the pipeline is entirely containerized, you can actually build Dagger and run Dagger — run the binary in that pipeline, make some calls, and see if it works. We mentioned running the unit tests. Sometimes you have two nodes that don't depend on each other, so Dagger actually detects that as well and runs them in parallel.
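In the Connect-style Go SDK, one common way to kick off two independent steps concurrently is an errgroup (the engine also parallelizes independent operations in the DAG on its own). A sketch, again reusing the connected client from the first example, with illustrative lint and test commands:

```go
// import "golang.org/x/sync/errgroup"

src := client.Host().Directory(".")

base := client.Container().
	From("golang:1.22").
	WithDirectory("/src", src).
	WithWorkdir("/src")

eg, gctx := errgroup.WithContext(ctx)

// Lint and test do not depend on each other, so they can run in parallel.
eg.Go(func() error {
	_, err := base.WithExec([]string{"go", "vet", "./..."}).Sync(gctx)
	return err
})
eg.Go(func() error {
	_, err := base.WithExec([]string{"go", "test", "./..."}).Sync(gctx)
	return err
})

if err := eg.Wait(); err != nil {
	panic(err)
}
```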
00:37:43
Speaker
And then, as soon as you're good with the validation part, this is usually when you start with the build that gets you into the publishing side. So this is when, for instance, in our case, we generate a container — the Dagger engine itself runs within a container — and we build a dev version of that container every time there is a code change, so we can rerun some other tests, or actually release it if it's the pipeline doing the release. That happens, for instance, when we create a new tag on Git with a version name. There is a specific pipeline that runs a few extra steps to actually cut a release. And yeah, so there are many steps. I mean,
00:38:41
Speaker
I'm sure there are at least, I don't know, 30, 40 steps — I have no idea. Again, that's also something that, as a developer, when you write and maintain your CI, you should not have to keep in mind: everything that needs to run. What you want to know is: if my pipeline fails, why? I need to know why. What did I do in my code change that failed the pipeline?
00:39:10
Speaker
And show me where it is in my code, so I can go investigate and fix it. There is a lot of work that we have been doing on Dagger Cloud for basically bringing that information really quickly to developers.
00:39:27
Speaker
OK, thank you for walking us through that phase. So I think I have a follow-up question. In terms of — you said there can be like 30 to 40 nodes — as a developer, with all the shift-left companies and shift-left methodologies, I'm already doing a lot of things. Is there a way I can reuse some of the functions or some of the modules that other people in my organization are working on or have built, instead of me trying to come up with all 30 nodes in the DAG myself? Yeah. So usually, as I said earlier, it's important, first of all, that when you write a pipeline — when you write a Dagger Function —
00:40:10
Speaker
it's usually the developers who decide what can be reused or not. And it's the same thing with code, right? If you write any piece of code in any language, not all of it is reusable. It really depends on what you want to make reusable. So the same applies with a Dagger pipeline. In the example of a module — for instance, I mentioned Argo CD. So there is an Argo CD module. There is a catalog of modules; I didn't mention it. If you go to daggerverse.dev, you can basically search for modules and see all the modules that you can use with Dagger. So let's say a good one that I mentioned earlier is Argo CD.
00:40:54
Speaker
When you want, for instance, to trigger a continuous deployment on Argo CD, you don't want to rebuild that integration yourself. Ideally, this is when you can pick something off the shelf. Same for, say, a linter, or a Go build. You don't want to pass all the arguments that go build takes — just give me a module that builds
00:41:25
Speaker
Go code efficiently. So those are examples of reusable modules. When you have a reusable module, the way it works — to publish it and share it with your co-workers, is it just putting it in a Git repo, and publishing it on the Daggerverse is just a way to make it discoverable? Yes, the module itself lives in a Git repo, and you can install it from Dagger. So another person on your team can install that module from another Git repo, whether it's private — because they have access to it — or public. I can just run dagger install and pass the URL of the Git repo containing the module, and Dagger will grab the module and build it on the fly. That's how it works.
00:42:08
Speaker
That's great. While you were explaining that, I navigated to daggerverse.dev.

Dagger Modules vs. Pre-built Images

00:42:14
Speaker
And I see there are some community-based modules, there are some partner-based modules, and I'm assuming — I couldn't find it in time, but I'm sure — there are Dagger-maintained modules as well. What are these different levels, and what kind of support can people expect? Yeah. So we launched the Daggerverse recently, first of all, and indeed we have those tiers — you identified them well, which means we did a good job. So we have indeed three different tiers. The core modules that we ship: we don't have a core library of modules yet. It's something that we
00:42:55
Speaker
are working on right now — basically having not just modules that we built and maintain, but a catalog of modules that you could pick from, a lot of strong primitives that you could build from.
00:43:10
Speaker
So again, we're working on that, and there will be more Dagger-provided modules. Then there are partner modules — modules that other companies maintain. For instance, we have a bunch of modules from companies who build a product, and it makes sense for them to ship it as a module that people would use. We did the same thing at Docker back in the day with official images and partner images on Docker Hub. The same thing applies with Dagger modules today. Then the community modules are basically modules that — some of them are actually coming from our team or from Dagger users; we make no difference. If you want to publish a module and make it available to everybody else, that's what you can do. You can use them as examples, too.
00:44:09
Speaker
So for instance, if you're building a module and you're not sure about the best practices, you can check the docs, but also some of the modules from partners or from the Dagger modules, and that can give you ideas on how to structure the

Dagger in CI/CD Tool Integration

00:44:24
Speaker
code. yeah Thank you.
00:44:26
Speaker
Gotcha. So I was just thinking, as you were talking about the Daggerverse, that I'd ask a question that wasn't on our list to begin with, just to throw you a little loop here — I think maybe other people might have it too. You know, I'm familiar with using GitHub Actions that have pre-built Docker images or container images with some kind of action inside of them. Could you explain a little bit of the difference between something like that and what Dagger is doing?
00:44:55
Speaker
Yeah, yeah. So basically the question is about — so in the Daggerverse, we have sort of a pre-built step, a usable piece of Dagger Function. In GitHub Actions, I could reference like upload-artifact or something that is also pre-built in a container. I know they're different, I just don't have the expertise to tell you how. So there are two differences between a Dagger module and a published Docker container, for instance. In the case of a Docker container, it's a built image that is hosted somewhere and that you pull and run. You don't necessarily have access to the source that was used — most of the time you do, because there is a Git repo with a Dockerfile — but still, the container that you use is the built version.
00:45:50
Speaker
The difference with a Dagger module is, first of all, it's not just one container. It's actually a pipeline, technically. It can be a pipeline of one; it can be one container. And it's also the source. So an example would be — I don't know, let's pick a Terraform module. A Terraform Dagger module
00:46:17
Speaker
could use several steps. For instance, the module could technically grab Terraform, build it on the fly, run it, and expose some native types for mapping the Terraform CLI to the Dagger SDK. So all modules are taken from source. That doesn't mean they can't rely on built containers — a module could also refer to an official image. So technically it's a piece of pipeline,
00:47:01
Speaker
like your pipeline, that is shipped from source. That's the big difference: Dagger will build the Dagger module on the fly when you use that module for the first time. It's fine to do it this way because, again, Dagger will cache the result of the build. So every time you run that module, you will not have to rebuild it again — it happens only the first time.
00:47:25
Speaker
Okay. Got it. No, that's perfect. Speaking of other CI tools, it's a perfect segue. Dagger works, just reading the docs and things like that, with other CI tools. Can you talk about maybe a couple of examples of how that might work? Maybe one we've all heard of, Jenkins, or maybe another one like GitHub Actions, like we were just talking about.
00:47:49
Speaker
Yeah — GitHub Actions, CircleCI. Pretty much anywhere you can run a container runtime, you can run Dagger. The CIs that are mentioned in the docs are CIs that we documented, basically, but it's not a limitation. I'm thinking about Buildkite — I don't even know if we have this one in the docs, but it can totally work with Buildkite as well. And I think most CI systems today can run containers. The key difference there is that
00:48:32
Speaker
pipelines and workflows instrumented by CI systems pre-date containers. Jenkins was designed before Docker even existed. So I think there is a key difference in how Dagger leverages containers compared to other pipeline runtimes, including other CIs. But yeah, technically you can run Dagger anywhere you have access to a container runtime. So Sam, what does that mean? Because, agreed, Jenkins has been around for a long time and they had to add things like the Jenkins X functionality to support containers and Kubernetes as well.

Role of AI in CI/CD Pipelines

00:49:16
Speaker
Jenkins already helps me, right? If I define a pipeline, it deploys containers on a Kubernetes cluster that I have
00:49:23
Speaker
attached or associated with Jenkins, and deploys containers that have my application code, and tests it. Am I using Dagger to have a unified way for that specific step, so I can just rip and replace Jenkins with CircleCI or something else, but from a normal developer's perspective it remains a consistent experience? Is that the place where Dagger fits in?
00:49:46
Speaker
Yeah. So as a developer, if the company that you're working for is using Jenkins, it's very likely that you will not run Jenkins locally, right? So there is always this thing that we described earlier: I'm done with my code, I run some manual checks, and then I submit this to Jenkins, and hopefully it will work. In the case of Dagger plus Jenkins, the big difference is you take what Jenkins will run, which is a Dagger pipeline, and you can run this Dagger pipeline exactly the same way that Jenkins will run it, but on your own machine. And then later on, assuming that
00:50:30
Speaker
the whole thing that Jenkins runs has been Daggerized already, if you want to move away from Jenkins and use something else, well, it's very simple, because then it's about moving from one to the other. I mean, there are obviously some things to consider, like how you store your secrets, how you manage your infrastructure. It's not a trivial task to migrate infrastructure.
00:51:04
Speaker
But you get the idea: it basically makes at least the logic and the pipeline itself agnostic from everything else underneath.

Dagger's Future and Community Growth

00:51:13
Speaker
Okay, I think that makes sense. Thank you so much. And how does it work with CD tools like Argo CD? You said that obviously there is a module that's maintained and available. Argo CD gets kicked off when there is some trigger, and it then goes through the continuous deployment phase, right? How does Dagger fit into that ecosystem? Yeah, so Dagger right now really takes care of integrating all of the tools together.
00:51:42
Speaker
It does not replace — so in the case of Argo CD, to answer your question more directly, Argo CD takes care of a continuous loop of rolling out a change, monitoring what's happening, deciding what to do next. There is a logic there that is very specific to the deployment phase.
00:52:11
Speaker
Argo CD usually can run other things to trigger the build, for instance, but usually you deal with Argo CD and introduce Argo CD in your automation to handle rolling out an artifact that has been built. And so that's the part where Dagger will need to tell Argo CD when that artifact is ready. Basically: I run —
00:52:38
Speaker
I grab the code, run the tests, run the entire validation, all of that, and then I hand that over to Argo CD. That's really how the integration happens. Dagger doesn't aim to replace the whole deployment logic that happens after the code has been built. Technically, you could implement that as a Dagger Function, but we don't recommend people implement Argo CD in Dagger Functions. Instead, just install a module and call out to Argo CD. That's what we usually see.
00:53:14
Speaker
That makes sense to me, yeah. Nice, nice. While we were talking, I was just thinking about creating Dagger Functions for a use case I have, so I'm looking forward to that, actually. Which brings me to one of our few last questions here, because we're getting close to time: what is Dagger up to? What's coming next for Dagger?
00:53:39
Speaker
Are you thinking about GenAI like everybody else and their grandmother, or, you know, what else is going on over there? Yeah, so
00:53:50
Speaker
well, there are actually two different questions there. AI — I'll start with AI. That's clearly — we get that question a lot. Some people are doing cool stuff with it. So for us, AI is just bringing more use cases for pipelines and automation. An AI app is an app; it still goes through CI/CD. It's just that you usually have more constraints to deal with, like sometimes larger data sets. If you build an app that is doing inference on a model,
00:54:29
Speaker
that means your test suite can be more complex, because you just have more constraints. You need more things to be available to run your tests, whether you're talking about the weights of a model, or about having access to GPUs in your test suite — we got that as well. For instance, last year we added support for GPUs within your Dagger pipeline, exactly for that use case.
00:55:00
Speaker
So yeah, we have a bunch of AI developers using Dagger and building automation. For us, it's really bringing a new set of problems that we need to support, so that's one thing. And then you have GenAI as in how we could leverage GenAI within the product. We had some experiments there, to be honest — we had some community demos around generating unit tests for your code. So you take your code and plug that into your pipeline; basically, the pipeline itself adds tests and runs them, which is kind of cool. We also had demos about generating pipelines — so generating the pipelines themselves.
00:55:52
Speaker
So yeah, we're interested in that. We also had demos for having a new interface for docs. I feel that this one is kind of all over the place nowadays, but, you know, asking a model about how to use X, Y, or Z instead of looking through the docs myself.
00:56:13
Speaker
So that's another one. And then you asked where Dagger is going. We did a lot of work in recent months on performance and reliability. Basically, Dagger is growing, and with growth we see more and more people having a lot of requirements for running in production. Basically, things need to be secured and need to run faster,
00:56:43
Speaker
and they need visibility into when things fail, visibility into when things get cached, how long it took, why — you know, all of that observability. So we did a lot of work on that in recent months, and we still have a long way to go.

Conclusion and Reflections on Dagger

00:57:00
Speaker
We are still working very actively on that, working with a lot of larger organizations on basically bringing those features, making sure they have what they need for production. For instance, private modules. We launched the support for modules earlier this year, and some larger teams said,
00:57:19
Speaker
awesome, but we cannot use modules publicly from the community; we need to deliver our own modules privately. So we introduced support for private modules. So yeah, basically the roadmap is, as you can imagine with a growing user base, more and more requirements around production.
00:57:42
Speaker
I think, if I can go back to the GenAI use case: in addition to all the POCs that you described, I can clearly see a use case where, to generate unit tests or to generate documentation, I can use a different Dagger module and point to a different hosted LLM, compare performance, and choose the best one, or use my own preference. That sounds awesome to me, instead of just relying on Copilot or something like that to give me unit tests.
00:58:13
Speaker
That's right. And there is also another thing about using machine learning over time. I see more and more use cases around that on infrastructure, to understand the patterns. LLMs are great at finding anomalies. And in a large pipeline that automates your software supply chain,
00:58:40
Speaker
over time, if things drift, for instance if your build is taking 10% longer than it did two weeks ago, ideally you want to be able to track that and identify the anomalies, and maybe even suggest changes and improvements, especially with containers. I'm sure you've seen a lot of that in the last few years: people asking how to build images securely, how to keep them secure, how to avoid leaking secrets. I think there is a very large area there that has been barely explored, where AI could help a lot in assisting developers to make the right decisions right away.
00:59:29
Speaker
OK, so this has been an awesome discussion. I want to start to wrap things up, so I'll ask a two-part question. One: if somebody wants to try out Dagger or learn more as a follow-up to this discussion, where can they go? And two: it clearly seems like Dagger is growing, so are you guys hiring? And if yes, where can people find and apply for jobs at Dagger?
00:59:52
Speaker
Sure. I'd say it's the same answer for both questions: go to dagger.io. If you want to try Dagger, you will quickly get directed to the docs and the quickstart; there is something you can try out quickly. And if you're looking to apply for a position at Dagger, there is also a careers page
01:00:13
Speaker
where you can apply. There are some questions, some forms you need to fill out. And you can always find the team on Discord as well. On the website, dagger.io, you can find our Discord server. We have thousands of people in the community sharing their problems. They are platform engineers, developers, infrastructure engineers, and it's really a great place to come and ask for help.
01:00:41
Speaker
The whole Dagger team is in there. Actually, something I didn't mention: all the conversations we have about building Dagger happen on Discord as well. It's not just the code that is open; we also have an open process for building Dagger where people can participate, and it's quite active.
01:01:02
Speaker
That's awesome. Man, I would like to thank you so much for joining us for this podcast. This was super helpful. Thank you for taking all the different tangents with us and going into different levels of detail. I'm sure our listeners will find a lot of value in this episode. So thank you so much for joining us today. Thank you. Thanks for the opportunity. Thank you.
01:01:23
Speaker
All right, Bobbin, that was quite the conversation. I feel like we could have probably talked to Sam for two more hours. Yes. Dagger seems like a really interesting project. I said to him that I had a few use cases, which I think I'm going to go play around with, which is fun. I will say, reading through the product pages and so on, I wasn't entirely sure what the value of the product was. Same. Yeah.
01:01:53
Speaker
Talking to Sam really helped, so thanks, Sam, and hopefully our listeners will get more out of it as well. In terms of takeaways, I really like the idea that Dagger is talking about reusability, right? Not only across platforms, where you can run it on your laptop and it runs anywhere else; that is all good stuff we've heard with containers and Docker for a long time.
01:02:19
Speaker
But with Dagger, I love this idea that you can write a function, a Dagger function as they call it, and share that thing. Because, to Sam's point, a lot of CI is like this: you have these unicorn SREs or dedicated CI people as part of a DevOps team who have this crazy bash script, or who understand how it really works, and you have this small group of people, or a single person,
01:02:48
Speaker
who has to go troubleshoot that stuff. But ideally, you have this pipeline, this chain of reusable functions, that makes it really simplified, and I get that that's a big part of their value prop. It's not something I've personally been able to use in real life, even compared to GitHub Actions, which I've been doing a lot more of lately. And that in and of itself is like, you're managing that file and you're thinking, all right, if I edit it, what do I have to change and when? It's a little complicated, which is why I brought it up and asked how Dagger differs from that. So really cool stuff. The Daggerverse, I have to go explore it
01:03:33
Speaker
and see, because it seems like a pretty wide range of things that people make available in the Daggerverse, just looking through it. It's not just super simple functions; it can be, hey, run a web server for docs. It seems like it does a lot. So yeah. No, agreed. The Daggerverse sounds super cool. I also like the fact that they are not managing or maintaining the source code inside the Daggerverse. The Daggerverse is just a pointer to other places, like a Google search: the index points you somewhere, and the actual content lives somewhere else. You are actually pointing people to their own repos, and that way you can also see whether it's a trusted source or not. It makes things easier, rather than Dagger taking on the responsibility of saying, yes, these are all trusted modules, and if it's on the Daggerverse, it's something we support.
01:04:22
Speaker
So I liked that model. For me, if you listened to the interview as well, I pushed back on Sam and asked him to give us an example: what are the different phases, the different nodes, in this directed acyclic graph, or DAG, which appears to be what the company is named after. I was glad that he described the unit testing, linting, security scans, the container image build, the eventual deployment, and the integrations with CircleCI and GitHub Actions. It helped me visualize things better, like, this is what it looks like. And yeah, as you said, you don't need an expert or a 10x engineer doing this once for your entire organization; developers can come up with their own phases, their own stages, codify them, and then
01:05:11
Speaker
run them locally before they push them anywhere else, and that reusable component also helps. One more thing, and this is something I found in the docs that we didn't get enough time to talk about: if your pipeline breaks for some reason, you're no longer just staring at logs trying to figure out whether it was your application code or the tool itself that broke. The pipeline is code at the end of the day, written in Python or TypeScript, so you can troubleshoot it the same way you troubleshoot your application code. That was also super interesting to me. But yeah, those were my key takeaways.
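To make those takeaways concrete, here is a minimal sketch of what a Dagger module with separate lint, test, and build stages might look like using the Python SDK. The module name, base image, and commands are illustrative assumptions rather than anything described in the episode; only the general container-chaining style comes from Dagger's SDK.

```python
# Minimal sketch of a Dagger module with lint, test, and build functions.
# Assumes the Dagger Python SDK's @object_type/@function decorators and the
# global `dag` client; image names and commands are placeholders.
import dagger
from dagger import dag, function, object_type


@object_type
class AppPipeline:
    @function
    async def lint(self, source: dagger.Directory) -> str:
        """Run a linter against the project source (placeholder tooling)."""
        return await (
            dag.container()
            .from_("python:3.12-slim")
            .with_directory("/src", source)
            .with_workdir("/src")
            .with_exec(["pip", "install", "ruff"])
            .with_exec(["ruff", "check", "."])
            .stdout()
        )

    @function
    async def test(self, source: dagger.Directory) -> str:
        """Run unit tests the same way locally and in CI."""
        return await (
            dag.container()
            .from_("python:3.12-slim")
            .with_directory("/src", source)
            .with_workdir("/src")
            .with_exec(["pip", "install", "pytest"])  # placeholder test deps
            .with_exec(["pytest", "-q"])
            .stdout()
        )

    @function
    def build(self, source: dagger.Directory) -> dagger.Container:
        """Build a container image for the app (illustrative, no Dockerfile)."""
        return (
            dag.container()
            .from_("python:3.12-slim")
            .with_directory("/app", source)
            .with_workdir("/app")
            .with_entrypoint(["python", "main.py"])  # placeholder entrypoint
        )
```

In principle the same functions could then be invoked locally or from CI with the Dagger CLI, for example something like `dagger call test --source=.`, which is what makes the local-versus-CI portability argument concrete; the exact flags depend on the Dagger version in use.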
01:05:43
Speaker
Yeah, fun stuff. We will make sure to put the docs, their website, their Discord server, and how to join it in the show notes. So if you're interested in talking to the folks at Dagger, and it sounds like Sam himself is on Discord, that will be in the show notes, or you can find it on their website as well. But yeah, I think that brings us to the end of another episode. I'm Ryan. I'm Bobbin. And thanks for joining another episode of Kubernetes Bytes.
01:06:15
Speaker
Thank you for listening to the Kubernetes Bites Podcast.