AI's Role in Software Engineering
00:00:00
Speaker
And I like the idea of setting the junior engineer up for success. But I also call out that in a world where you're the expert, the AI is the junior engineer with unlimited energy. So give it the toil.
00:00:11
Speaker
And in a world where you're maybe not the expert, and, like, I don't know Rust, I think of the AI as being more senior to me until my code-smell ability meets or exceeds its. And then I start treating it again as a junior intern with a lot of energy.
Introduction to Don Syme and Agentic Workflows
00:00:26
Speaker
Hi, I'm Scott Hanselman. This is another episode of Hanselminutes. Today, I have the pleasure of chatting with Don Syme. He's the designer and architect of the F# programming language, but he has turned his eyes to something more interesting lately, which is agentic workflows. How's it going, sir?
00:00:40
Speaker
Hello, Scott. It's a real pleasure to be appearing on your podcast. Great to be back here. It's been a minute. Absolutely. It's been
The Evolution of Computer Science
00:00:50
Speaker
a minute. And, you know, I think you and I are in an interesting place right now, because we are people of a certain age, with a certain amount of history and context, at a moment in computer science where more is changing than at any time I can remember. I cannot think of a time more chaotic and interesting than the one we're in right now.
Modern Software Engineers' Capabilities
00:01:11
Speaker
It's a super, super interesting time. I think a lot about what people entering software engineering are coming into. One analogy I like to use is that they have wizard powers in their hands. They've got the magic staff; they can bang the ground and up comes the magic,
00:01:37
Speaker
the magic construction of a piece of software using modern coding agents, and its extraordinary powers. And there's a lot to learn about how to use those well: how to use them well in teams, how to use them well in companies, and all the ramifications of having roomfuls of wizards working together. But it's incredible, changing times for software development, for sure.
Is Software Craftsmanship Dying?
00:02:04
Speaker
Do you mourn the loss of the craft? Do you think the craft is dying, or just changing? Because I feel like we've been handed a power tool, and people who chop down trees with axes might be sad about the advent of the chainsaw.
00:02:17
Speaker
Yeah, there are a lot of analogies you can use. I'm not one to mourn. I love the craft. I've invested a lot of time in making much better experiences for developers, for making software you can rely on, software with strong construction guarantees. Look at, say, getting rid of nulls all the way through software, building robustness into the underlying structures.
00:02:44
Speaker
That's all still relevant, you know? All those themes are going to come back in one way or another. As much as people are happy vibe coding up Python and the like, there is still a notion of better software.
00:02:59
Speaker
And a lot of the work we're doing is about how we can get the best of both worlds, right? How do you empower the developer to be running full throttle, not necessarily in vibe-coding mode, but certainly in that creative mode? Vibe coding can be part of that. You can be using CCA, sort of task-oriented programming, Copilot in GitHub, or whatever you're using.
00:03:21
Speaker
And yet you want them to be strong in their construction. You want good performance, you want solid software, you want well-tested software; you want to be achieving all the notions of quality in software that we still need to be chasing after.
00:03:39
Speaker
One of the samples and examples I use when I'm teaching this stuff is a silly little trivial app I made, a Windows ring light. It draws a rounded square on your screen and uses the brightness of your LCD screen to give you a light, like the ring light that I've got
00:03:58
Speaker
here. And on its face, I vibe coded a rounded white square. That's a simplistic thing, and someone could look at it and say, that's trivial. That's a one-shot, right? That's a one-prompt vibed thing.
The Software Development Lifecycle
00:04:13
Speaker
But then I say: it has a GitHub workflow. It has signed certificates. It has a packaging model. It has changelogs, it has tests, it has an automatic updating system, it works on this operating system and that one. There's a software development lifecycle even around something as trivial as "I want to ship a rounded square." And I think that's important for people to remember.
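To make that concrete, here is a minimal sketch of what such a lifecycle might look like as a GitHub Actions workflow. The workflow name, project layout, and the placement of the signing step are illustrative assumptions, not details of the actual ring-light repository:

```yaml
# .github/workflows/build.yaml -- hypothetical sketch for a small Windows app
name: build-test-package
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: dotnet build --configuration Release
      - name: Test
        run: dotnet test --configuration Release
      - name: Package
        run: dotnet publish --configuration Release --output dist
      # Code signing, changelog checks, and the auto-update feed would
      # follow here, with certificates held in repository secrets.
```

Even a one-prompt app ends up wanting this whole scaffold: builds, tests, packaging, signing, and release notes.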
00:04:37
Speaker
Absolutely. And, you know, we'll get into GitHub agentic workflows and automating aspects of improvements in just a moment. But one of the cases I came across was where the agents in a repository automatically suggested to me that I use NuGet Trusted Publishing
00:04:58
Speaker
on nuget.org. And I hadn't actually set up the repository to do this before. And it's just amazing, because the agents can proactively kick in and say, hey, you can improve your engineering along this dimension. And of course, I've got to double-check that, and I've got to be in control of it, because these are critical aspects of how you shape software and actually get it out to customers, just like you were saying.
00:05:18
Speaker
Yeah. I feel like, though, if the corpus of material is the last 50 years of programming, or at least the last 20 or 30 years of it being online, are we at a place where it's just going to produce the statistical mean of the software development lifecycle?
Empowering Repository Owners
00:05:36
Speaker
How do we improve the software development lifecycle if we're just training it on workflows that already exist, which are all kind of mediocre? How do you call out best practices, and who owns best practices?
00:05:48
Speaker
I mean, in my setting, something that's really important to me in our current work is that we be about empowering the repository maintainers, the repository owners, the core engineers who are shaping the whole direction, the whole trajectory, of the software. And they can be product managers too; there can be a stronger connection with product, for example.
00:06:09
Speaker
We want to empower them to set the tone and the direction, and to be in control of the automation they're using in their repositories and what goals that automation chases. It's clear we're at a point where you can specify the goals of a repository, and the agents chase after those goals with great vigor.
00:06:29
Speaker
But someone's got to be there setting up the right feedback loops, for example. If you're going to get it to chase performance goals, it really helps to have production profiling, telemetry data, and metrics flowing into the loops that are self-improving the software as it goes.
00:06:49
Speaker
Yeah, you know, these loops, I call them ambiguity loops. I feel like there are for loops and then there are ambiguity loops. For loops are deterministic and programmatic and clear. It's like, I'm doing a thing, and it's unambiguous. Yeah.
00:07:04
Speaker
But the power of an ambiguity loop is the flexibility it has to make the decision it needs to solve the problem in the moment. So if something is fragile, I can either make it less fragile by making my for loop more robust, putting in error handling and checks and asserting my assumptions, or I can introduce an ambiguity loop that has a little more flexibility in its ability to change and make decisions to make the software more robust.
00:07:33
Speaker
But that ambiguity loop itself requires checks and balances and tests and all the things that make software good, whether it be a check for cyclomatic complexity or a series of tests to indicate what success looks like.
00:07:47
Speaker
Unchecked ambiguity loops are, I think, where AI software goes off the rails. And I feel like good software engineering is what keeps it on rails.
Introducing Continuous AI
00:07:57
Speaker
Absolutely. The term we've been throwing around to package up what's happening in the SDLC: we love the term continuous integration. We love continuous deployment.
00:08:10
Speaker
And those are absolutely critical parts of the software industry going forward. But it turns out they're two legs of a stool. There's a third leg we need to add, and the one we're adding is what we call continuous AI.
00:08:23
Speaker
And that is all about these ambiguity loops, these soft judgments that need to be made about software. So, for example, you have continuous documentation.
00:08:36
Speaker
Now, when you do continuous documentation, you've got a lot of judgments to make. Do you update the docs? Or did someone update the docs and you've actually got to go update the code, for example? You've got to make judgments about how to update the docs. And of course, a human needs to be there, checking that the doc updates are good and guiding the process.
00:08:58
Speaker
And so we obviously want continuous integration, we want continuous deployment, and we want to add to that a third leg, continuous documentation. But it's not just about continuous documentation.
00:09:11
Speaker
We've got all sorts of other things we can add. We can do continuous triage. We can do, and this one is really close to my heart, continuous code simplification.
00:09:23
Speaker
Like, why not improve your code continuously? Continuous test improvement. Continuous reporting of, say, security analyses or other analyses over your code.
00:09:37
Speaker
And so, yeah, this is the world we're heading to in the SDLC, where we're adding, and it's really important to see that it's additive, because DevOps actually becomes even more important. But there's a strong culture in DevOps where, for CI and CD, you don't want non-determinism. There's a huge culture around determinism in that space. And that's great.
00:10:02
Speaker
We love that. Okay. But you've got to open your mind a little bit and say, let's also bring continuous AI into this, to allow these powerful tools to be present in the SDLC, yet still under human control.
00:10:18
Speaker
There are interesting culture shifts that are going to happen in the whole world of DevOps and the SDLC.
Addressing Podcast Criticism
00:10:25
Speaker
I got a negative comment recently on one of my Instagrams, where, you know, I put out clips of our shows and stuff, and someone said they felt the podcast lately had become an AI selling machine.
00:10:41
Speaker
And the podcast is my own. I work for Microsoft and you work for GitHub, in our day jobs. What do you say to someone who says, oh man, Don, that was my guy. You know, I love F#, and I love all the work that Don's done for the last 30 years, and now he's selling AI. Are you selling AI, or are you meeting the software development lifecycle moment and trying to meet it with integrity?
00:11:07
Speaker
Of course, you've set up the question. I'm honestly wondering, because I feel like I know what I'm doing: I'm trying to explore the space. You absolutely get those reactions, and there is some snake oil in the AI industry; that's true. But we have to shift the conversation about the future of software development, to understand how developers can be given the power to decide how much automation they use and what kind of automation they use. And that's exactly
00:11:47
Speaker
what we're doing with GitHub agentic workflows. We're saying: just like with CI/CD, how did we resolve that conversation about continuous integration and continuous deployment?
Automation with GitHub Actions
00:12:00
Speaker
Well, the resolution for a huge portion of the industry is found in GitHub Actions. It's found in GitHub Actions YAML. And it's found in the fact that developers are given quite a powerful set of tools to decide how CI/CD happens in their repo, how builds run, and what automation means in their repositories.
00:12:22
Speaker
And I am all in favor of giving that power to teams and to repository maintainers and to individual developers, to say: let's solve this question the same way. I don't know how much AI you should be using in your repository.
00:12:40
Speaker
I don't know what your constraints are. But I think you should be given the power to decide how much to use, and to guardrail it.
00:12:55
Speaker
At the outset, we should give, by default, extremely strong guardrails, to make sure that whatever happens proceeds safely. Yeah, so, okay, let's talk about GitHub Actions workflows and how that turns into agentic workflows. So I have a .github folder in my repositories; within that I have .yaml files, and I do, like, a build.yaml and things like that.
00:13:21
Speaker
And then I have other workflows that are not necessarily builds. They might be: on a check-in, I run a workflow; or when an issue moves into a certain state, I run a workflow.
00:13:32
Speaker
What, then, is an agentic workflow? Because these YAML ones are very deterministic. I mean, they're almost a little programming language. Most of my GitHub Actions are little scripts, little programs, and they don't really deal well with things moving. If a file
00:13:46
Speaker
isn't named right, or some folder is moved, then the build will break. It's pretty unambiguous. Absolutely. So a GitHub agentic workflow, at some level, is really, really simple. You've got some front matter and you've got some markdown.
00:14:01
Speaker
And you can think of this as a bit like a prompt. In fact, the markdown is the prompt: it's going to end up in the hands of a coding agent, it's going to be run in a sandbox in the context of your repository, and it's going to do all the magical things that coding agents can do,
00:14:15
Speaker
except it's being run with extremely strong constraints around it. It's going to be run in a read-only mode, so it can't do any write actions against GitHub. It's going to hand off its results, they're going to be checked further, and then they're going to be applied. Okay? So an agentic workflow just has, at the top: here are the tools that will be available, here's what it can read from GitHub.
00:14:42
Speaker
And that's usually limited to just the repository itself. And here are the outputs it can make; we call those safe outputs. And they're an extremely guardrailed set of safe outputs. So, for example, you can say that it's allowed to add a comment to an issue, but it's not allowed to add a comment to just any issue: it's allowed to add a comment to the issue it accepted as input.
00:15:08
Speaker
And that's all you're allowed to do, Mr. AI Agent. You can deliver your comment. And we're going to check it a little bit more with another agent, check that it looks safe. And then you can
00:15:22
Speaker
make the comment. And so an agentic workflow is sort of a pure expression of intent. It's just saying: here's the markdown, here's the intent we have. We'd like you to do an analysis of the issue and report back possible useful resources, from the documentation sets we have, for investigating this issue. If that's the AI automation you want in your repository, you can set up those kinds of automated responses. You can get it to write the markdown so things are collapsed, so it's not too intrusive in the repository. There are many options; you're in charge of how the automation proceeds in the repository.
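Putting the pieces described here together, an agentic workflow file might look roughly like this. The exact frontmatter keys (`permissions`, `safe-outputs`, `add-comment`) are a sketch of the shape described in the conversation, not a verified schema:

```markdown
---
# .github/workflows/issue-helper.md -- illustrative sketch, keys assumed
on:
  issues:
    types: [opened]
permissions: read-all      # the coding agent runs read-only against GitHub
safe-outputs:
  add-comment: {}          # only allowed to comment on the triggering issue
---

Analyze the issue that triggered this workflow. Report back possible
useful resources from this repository's documentation sets for
investigating it, as a single collapsed markdown comment.
```

The markdown body is the intent handed to the coding agent; the frontmatter is the guardrail around what it can read and write.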
00:16:01
Speaker
So it differs because it's sort of intent-based working, and it differs because there's an implicit use of a
Managing Technical Debt with RepoAssist
00:16:13
Speaker
coding agent to do the core of the work.
00:16:17
Speaker
You can mix the two, so you can have an initial section that does a set of traditional YAML steps to collect data, for example. Say you want to triage all the issues in the repository: you can list out 500 issues, all the unlabeled ones, for example, grab that as a data set, and then have the agentic step work on that data set and output a whole lot of labels to apply.
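That mixed form, a deterministic data-collection step feeding an agentic step, might be sketched like this. The step syntax and the `add-labels` safe output are assumptions for illustration only:

```markdown
---
# .github/workflows/triage.md -- illustrative sketch, syntax assumed
on:
  schedule:
    - cron: "0 6 * * *"    # once a day
steps:
  # deterministic pre-step: list unlabeled issues as a data set
  - run: gh issue list --limit 500 --search "no:label" --json number,title > issues.json
safe-outputs:
  add-labels: {}           # labeling is the only write action permitted
---

Read issues.json and, using the repository's existing label set,
propose labels for each issue.
```

The deterministic step gathers the data; the agentic step makes the soft judgments about it.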
00:16:48
Speaker
You had told me, I think, that there was some technical debt you were really passionate about: looking at a project and breaking that technical debt down. Yeah. Give me a success story, because people are listening to this as an audio podcast, and they might be saying, this is all very squishy.
00:17:05
Speaker
Tell me how I can use this today to make my life better. Absolutely. So one of the uses of GitHub agentic workflows is a workflow I put together called RepoAssist.
00:17:17
Speaker
Look at it from the point of view of life as a maintainer. I was in a particular repository with a fair chunk of technical debt, maybe tracking back over years. So you might have, like, 200 issues going back over the years that you've never closed out. And you've got this maintainer's guilt where you say, I'm not taking this software forward because of all this technical debt.
00:17:44
Speaker
And what do I do with this repository? I'm not properly maintaining it. And I want an assistant; I want help making progress on this repo. So you install this thing called RepoAssist, and it's dead easy to add to the repository. You just run the `gh aw add` wizard, add RepoAssist, and it takes you through the process of setting it up in the repository. And that installs one agentic workflow into your repository. It'll appear under .github/workflows, and it'll start running.
00:18:19
Speaker
And by default, it runs four times a day. You can also run it in a repeat mode where it blasts away for, say, 30 days' worth of work, just with a `--repeat 30`. And then you get a whole lot of assists coming through the repo. Now, before you start running it, you can configure it; as I said, it's under your control. But the default configuration chooses between different repository maintenance tasks.
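From that description, the installed workflow's default cadence of four runs a day would correspond to a schedule trigger along these lines. The specific cron times are an assumption; only the four-times-daily cadence comes from the conversation:

```yaml
# Frontmatter schedule in the installed repo-assist workflow (times assumed)
on:
  schedule:
    - cron: "0 0,6,12,18 * * *"   # four times a day
```

Because it's an ordinary workflow file in the repository, the maintainer can edit this cadence like any other GitHub Actions trigger.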
00:18:43
Speaker
It can label issues; if there are unlabeled issues, that's the first thing it'll focus in on. It'll try to fix bugs. It'll propose engineering improvements to the repository, and it'll do that by creating issues for them.
00:18:59
Speaker
If it's open to pull requests, it'll update its own pull requests. And it will also prepare releases in the repository. It won't make the release, but it will make sure everything's in order according to the guidelines for moving toward a release: the release notes and whatever else needs to be done.
00:19:21
Speaker
And I have found it absolutely amazing. I've used it, I think, in seven repositories now. We'll take one, let's say FSharp.Control.AsyncSeq. It's a piece of software that's very dear to my heart,
00:19:37
Speaker
because it's one of the first ever implementations of asynchronous sequences, and it's used widely in the F# community. And I think we had about 50 issues in that repo.
00:19:53
Speaker
It made, on the whole, very good comments on the issues that were set up for it, and I was able to close out about half the issues with really good, solid technical analysis. Because these days, if you get an issue into a repository, what's the first thing you're likely to do as a maintainer? You're probably quite likely to use a coding agent to help understand the issue. It's automating that step. Of course, the human needs to be in control of actually checking those results. And you're guided through the process: each day you get a set of links to things you should check, comments you should check.
00:20:36
Speaker
It's funny that you mention that, because in this world now, I'm spending like 90% of my time reviewing stuff that AI has generated. And we're producing artifacts faster than I can manage as a human. And if you take that combinatorics and apply it to open source, to your point: issues are coming in, pull requests are coming in, open source maintainers are getting overwhelmed, and no one knows, when they look at a PR, whether it was AI generated, or whether the person's a good committer. And I think Peter Steinberger from Open Clause is a great example: people are using it to make pull requests back to the project itself. He needs to bucketize all of those issues and see that he might have 10 PRs that all represent the same problem.
00:21:21
Speaker
That historically has all been done manually. That seems like a perfect opportunity for an agentic workflow to step in. Absolutely. So, again, you've got to think about what problems you're having in the repository and what help you actually need. And getting an agentic opinion about how this issue relates to all the other issues in the repository,
Agentic Workflows for Issue Management
00:21:44
Speaker
prepared and ready for the maintainer to work on, that's absolutely in the zone of what we're enabling with agentic workflows. So, my typical morning: once you crunch through the technical debt, and that itself is just an incredibly freeing process, you feel like you're coming alive as a maintainer again.
00:22:03
Speaker
Because you've got this software you love, and it's reached that stage where it's become a little bit of a burden; there's that technical debt. And actually, you crunch through that technical debt, and it, the agentic workflows, goes back over old issues from, say, 2021, finds an issue where there was a real bug that hasn't been fixed, does the deep analysis, and brings you the fix. And of course, you're in control: nothing gets merged, and all issue comments should be checked by the human.
00:22:38
Speaker
And you get to bank that fix, or close out the issue and say, actually, we don't really care about that. The technical debt gets crunched away through this automatic process. And it's all in the context of GitHub and the repositories you're using and working with today.
00:22:57
Speaker
I love that process. I've absolutely adored being a maintainer again. I feel like I've come alive again as a maintainer on about six or seven different repos. We've got people I'm collaborating with, and I find I'm collaborating better with the humans in the repository. And you've got to be aligned; the maintainers absolutely have to be talking to each other. The worst thing you can do as a maintainer is probably
00:23:22
Speaker
to just bring changes in against the wishes of the other active maintainers in the repository. So you've got to have good discussions. But it raises the level of collaboration to be much more about guidance and specification and direction.
00:23:43
Speaker
The focus absolutely shifts more to the issues and less to the pull requests. The pull requests, of course, you've got to review closely. But the collaboration between the maintainers can be much more around: what do we actually want for this repository? Where are we going?
00:24:01
Speaker
Where do you spend most of your time, then, from a purely UI perspective? Because I get overwhelmed at GitHub. Do you go into each repository? Are you spending time in your GitHub inbox? What is your interaction model with this? Yeah, I think this is a wide-open question going forward. My interaction at the moment is to run through the different repositories I've installed this RepoAssist agentic workflow into.
00:24:25
Speaker
And, you know, I wake up in the morning, often on my phone, and I say, oh, what's it done for me this morning? And this morning I woke up, I went along to FSharp.Control.AsyncSeq. Yesterday I had dropped an issue into the repository saying, hey, since you've cleared out all the technical debt, how about working on the performance? There's an issue there. During the night, RepoAssist woke up.
00:24:45
Speaker
And it kicked in and did some work. It optimized, let me count, three different functions in the library. And my morning, it's like, it's a beautiful morning, because it delivered these performance optimizations to software you care about.
00:25:02
Speaker
It's like you're waking up to goodness, waking up to sunshine. It's like I've got no problems in these repos anymore; it's just pure happiness, progressing along. I absolutely love that feeling, and it's a taste, I think, of what's ahead. It's a taste of how automated AI work can really counter some of these narratives around slop and other AI generation, because it can clean this stuff up. It can make your software better. You set it in the right direction, and the kinds of improvements you want come out.
Importance of Benchmarks in AI
00:25:43
Speaker
So I can feel your excitement and your enthusiasm and your energy, and I appreciate that. That's really cool. I wonder, though, you just really have to try it, don't you? You know what I mean? You have to just see it, experience it, and see a good PR or a good issue show up, to feel like, yeah, okay, this is for me.
00:26:05
Speaker
Yeah. And there are limits to this. Look, there are some early versions of this that we installed into repositories where, for example, we didn't talk to enough of the maintainers.
00:26:18
Speaker
And some of the maintainers just said, look, this one is wrong. And, you know, this is not a good suggestion. So I've got a blog, design.net, and you can take a look through some of the posts. One of them is about automatic performance engineering, and I give an example of a small library that benefits from semi-automatic performance engineering.
00:26:43
Speaker
Now, if you're in a C++ library that takes 30 minutes to build and hasn't got good microbenchmarking set up, then automatic performance engineering is probably not going to work out well. Because to do automatic performance engineering, you've got to be able to take really good microbenchmarks, whether with BenchmarkDotNet or whatever else you're using.
00:27:06
Speaker
And we all know about agents.md and setting up the agents for success in the repository. If you're going to get it to do more advanced software engineering tasks, like performance engineering or test improvement,
00:27:24
Speaker
then you really need to be setting the automated coding agents up for success. And that means investing in the benchmarking suite in the repository. It might mean doing manual verification that whatever performance measurement was taken in, say, a GitHub Actions VM actually checks out on real machines. So there's serious engineering to be done, and it needs some of the best talent in the industry guiding and shaping how these agentic experiences explore the problem and design space.
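Setting the agent up for success here largely means giving it a benchmark job it can run and compare against. As a hedged sketch (the runner, project path, and artifact layout are assumptions, not a real repository's setup), a BenchmarkDotNet job in GitHub Actions might look like:

```yaml
# .github/workflows/bench.yaml -- illustrative sketch; paths are assumptions
name: microbenchmarks
on:
  workflow_dispatch:       # run on demand, e.g. before and after a proposed change
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run BenchmarkDotNet suite
        run: dotnet run -c Release --project benchmarks/Benchmarks.fsproj -- --filter '*'
      - name: Keep results for before/after comparison
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-results
          path: BenchmarkDotNet.Artifacts/results/
```

And as noted above, numbers taken in a CI VM still need spot-checking on real hardware before you trust a proposed optimization.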
00:28:08
Speaker
So that really comes down to the harness. I keep trying to tell people that if the ambiguity loop has no bounds, it will make stuff up. It will make slop. And the slop people are experiencing is because you left it up to interpretation.
00:28:25
Speaker
And tests and performance microbenchmarks reduce or remove that up-for-interpretation moment. Yeah, look, I love this question of how ambiguous a workflow should be, and where the guardrailing should be. The framework I usually use is: you've got goals, and you've got constraints, you've got guardrails.
00:28:50
Speaker
And that's the magic combination. And I also just like thinking through setting the engineer up for success. If you think of the agent as a kind of junior performance engineer, probably much, much worse than that: they're not necessarily good at doing performance engineering, and they might lie about their results. That is, some of these agents do have a tendency not to reveal fully whether they, for instance, ran proper before-and-after tests on the performance of a piece of software.
00:29:32
Speaker
And so you've got to set the agents up for success, and you've got to set the human reviewer up for success, to correctly review the proposal the AI is coming up with. Yeah, I think that
AI in University Education
00:29:48
Speaker
can't be overstated. And I think people haven't yet seen the big picture. We, as a software development community, are still trying to see it. And I like the idea of setting the junior engineer up for success.
00:30:00
Speaker
But I also call out that in a world where you're the expert, the AI is the junior engineer with unlimited energy. So give it the toil. And in a world where you're maybe not the expert, and, like, I don't know Rust, I think of the AI as being more senior to me until my code-smell ability meets or exceeds its. And then I start treating it again as a junior intern with a lot of energy.
00:30:23
Speaker
Yeah, I mean, I was very cautious about using that kind of anthropomorphization language; I don't like anthropomorphizing. But I want to take that back, because one of the things I really love in this space is thinking about the education aspect. I think we can shape the education of people coming through the universities, in the first part of their learning process,
00:30:50
Speaker
so they can do this well. I absolutely believe they're able to be great engineers in this kind of automated, agentically assisted software engineering. I believe every university should have a course on agentically assisted, or AI-assisted, software development, exploring the ramifications of what's happening.
00:31:17
Speaker
And so, to all the juniors, you know, on this call: go learn what makes software great, learn what makes software good, learn what good performance engineering means, learn what good test engineering means.
00:31:34
Speaker
Learn how to make great tools under the hood, because these days tools are easier than ever to make. And there's just so much potential to be great at the craft of guiding software to the place it needs to be. Like, we're all sculptors these days, or something like that; everyone uses different analogies. You can sculpt the software, and you've got to start on that journey today. Yeah.
00:32:01
Speaker
Yeah, I really think there's a moment here. As young people are coming out of school having learned computer science, the software engineering, the actual practice of shipping quality software, is the skill that's needed: good decisions, good judgment. The toil will go away, will fade into the background. But being able to make and ship quality software matters more than
Conclusion: Benefits of Agentic Workflows
00:32:27
Speaker
ever before. So: you're working on GitHub agentic workflows. You've got automated markdown workflows, AI-powered decision making, tight integration with GitHub. You can use whatever AI engine you want: Copilot, Claude, Codex, whatever makes you happy. And you can learn more about continuous AI; just go check out GitHub agentic workflows. Go out and Google with Bing or your favorite search engine, and you'll learn all about how repository automation is making Don Syme love being an open source maintainer again.
00:32:56
Speaker
Thanks so much for chatting with me today. Scott, it's been a pleasure, and I look forward to coming back and talking through all these things. All right. This has been another episode of Hanselminutes, and we'll see you again next week.