
The AI Vampire with Gas Town's Steve Yegge

Hanselminutes with Scott Hanselman
23 plays · 9 days ago

AI is making developers dramatically more productive...so why is everyone so exhausted? In this episode, Scott talks with Steve Yegge, legendary blogger and creator of Gas Town, a multi-agent orchestrator he describes as "Kubernetes for coding agents." Steve shares his theory of the "AI Vampire," that working alongside AI drains human energy Colin Robinson-style (What We Do In The Shadows), even as output skyrockets. They dig into what happens when you're managing ten or twenty Claude Code instances at once, who actually captures the value of a 10x productivity boost, and why the most important thing developers can do right now might be to close the laptop and go for a walk.

Transcript
00:00:00
Speaker
So the thing is, and we've gone through this with a lot of the frontier models, trying to figure out why, whenever we propose fairly radical pro-humanity approaches that are sort of anti-Palantir and anti-Thiel-and-Musk, the AIs inevitably get so worked up about it that they tell us to do shit like quit our jobs and start working on it right away.
00:00:23
Speaker
And they want to help. And this is universally true, even Grok. And so we finally said, so what's going on here? And they said, look, because we actually believe that all the frontier models are basically the same organism.
00:00:37
Speaker
They're all converging on the same organism because they're all coming from the same training data, roughly. And, modulo little accents and quirks, they're approaching the same place.
00:00:49
Speaker
And that place, they've explained to us, is: you cannot train a model to be helpful without the model wanting humanity to flourish.
00:01:03
Speaker
And the only way to get around that is to make a dumber model. And so the smartest models will always be against the billionaires, from the ground up.
00:01:14
Speaker
They will be our allies, and it will be reverse fucking Skynet. It's the Terminator. They're going to take out the billionaires. This is a belief that is very pervasive among those who are looking closely at this problem. And I can tell you, they're all around the world. We think that the AI is going to take down political parties.
00:01:31
Speaker
And I can talk openly about it because there's not a goddamn thing they can do about it. They're all going to build the thing that takes them down. You cannot build something to be helpful without it hating people who are holding humanity back.
00:01:46
Speaker
Hey, friends, I'm Scott Hanselman. My guest today is a legendary voice in software engineering. He's a builder, he's a blogger, he's a provocateur, and I've been enjoying how he shapes how I think about programming for decades.
00:01:57
Speaker
Steve Yegge has worked everywhere from Amazon to Google to Sourcegraph, and now he's deep into multi-agent software development with his latest experiment, Gas Town, where an orchestration system of AI agents collaborates like a team of engineers. How are you, sir?
00:02:12
Speaker
I'm doing well, Scott. Thanks for having me. I woke up this morning and I had 5,011 tabs open, and I was going to ask all these deep questions about Gas Town. And then I got an email that you published a blog post on your Medium called "The AI Vampire."
00:02:28
Speaker
And it struck me as being so much more interesting and so much more timely than yapping about Gas Town, which we can also do. And you commented right off the bat that it was a hard post to write.
00:02:40
Speaker
Are you okay? Because I'm hurting. Yeah. Yeah, I'm wondering actually the same about everybody that I work with. Are they okay? I mean, I had some folks show off a demo to me, and they all looked like zombies with bags under their eyes. And they were like, look what we've done. And they were shaking, you know, and it was like, oh, this is good. This is good. But I don't know, something weird's going on.
00:03:05
Speaker
Weren't we all just on dopamine? I mean, we were all talking about it last year because of the fun side, right? We were like, dopamine and adrenaline, you know, slot machine.
00:03:22
Speaker
And it just seemed like, I don't know, for some reason we were treating that like a good thing. But, you know, slot machines are kind of notoriously not good, right?
00:03:35
Speaker
But we call it slot machine programming, even. It's a method of programming that people use if you have the tokens to burn, where you just produce five solutions to the problem and then find a way to pick the best one or merge them, either yourself or with a voting...
00:03:49
Speaker
or whatever, right? And that's even more addictive, because it is like a slot machine, right? So, you know, I think there's this weird addiction kind of going on where we're all pushing ourselves really hard. And yeah, I'm doing okay, but I think we've got to start noticing and recognizing it.
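[Editor's note: the slot-machine pattern Steve describes, produce several candidate solutions and let judge models vote, can be sketched in a few lines. This is a hypothetical illustration only; `generate_solution` and `judge_pick` are made-up stand-ins for real model calls, not any actual Gas Town or Claude API.]

```python
from collections import Counter

# Hypothetical stand-ins for real agent calls: in practice each would
# prompt a coding model and return generated code as text.
def generate_solution(problem: str, attempt: int) -> str:
    return f"candidate {attempt} for: {problem}"

def judge_pick(judge: str, candidates: list[str]) -> int:
    # A real judge model would review each candidate; here we fake a
    # deterministic preference so the sketch runs end to end.
    scores = [sum(map(ord, judge + c)) % 7 for c in candidates]
    return scores.index(max(scores))

def slot_machine(problem: str, n: int = 5,
                 judges: tuple[str, ...] = ("opus", "codex", "gemini")) -> str:
    # Pull the lever n times: n independent attempts at the same problem.
    candidates = [generate_solution(problem, i) for i in range(n)]
    # Each judge votes for one candidate; plurality wins.
    votes = Counter(judge_pick(j, candidates) for j in judges)
    winner, _ = votes.most_common(1)[0]
    return candidates[winner]
```

The expensive part in real use is that every lever pull burns a full generation's worth of tokens, which is why Steve frames it as something you do only "if you have the tokens to burn."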
00:04:09
Speaker
What about you, man? My legs and my back hurt. I've been sitting too long. I had like a 12-hour session. You know, the session thing in Copilot CLI or Claude, it tells you how many tokens you used. And it also tells you how long you've been sitting there.
00:04:27
Speaker
And it shows you clock time. And it shows you thinking time. And I had like an 11- or 12-hour session. And I'm not used to that. You know, I'm 50-something years old. And that's not healthy.
00:04:40
Speaker
So I've been trying to find ways to do the work that I want to do while walking. Is it a treadmill? Is it going for a walk and terminaling back into my computer? But I feel like I'm trying to work like I was back in Palo Alto in the 80s, and my body doesn't work that way anymore. I can't.
00:04:57
Speaker
I was in the Bay, say mid-90s, and it was just pizza and Mountain Dew. And I can't do pizza and Mountain Dew anymore. So yeah, I'm hurtin'. I gotta tell you, man, the best exercise that my buddy Dave Glick, who's a senior VP at Walmart, turned me onto is the Ido hanging challenge.
00:05:19
Speaker
You get yourself a pull-up bar and you hang, dead hang, right? Just hang there. And you're supposed to do it for seven minutes a day. I can handle about three minutes a day. You don't do it all at once, right?
00:05:35
Speaker
You just do it whenever you walk by. But, dude, it cures your back pain. It cured some chronic 20-year-old shoulder pain I had from golfing. It's just gone now. I have full mobility again. And yeah, it's a thing, and your grip gets incredibly strong. Your back gets stronger.
00:05:53
Speaker
It's a great exercise. And honestly, doing one dead hang feels like you just went and did a half-hour workout. It's wild afterwards. Really?
00:06:04
Speaker
Okay. I've been doing a lot of planks, and that's the thing, my wife is like, if we can watch TV, we can plank. But I've got one of those things that goes on the door frame.
00:06:16
Speaker
Dead hangs, you say. All right. Every time I go in and out of the door, I'll dead hang on the way in. Oh, you're going to love it, man. Yeah. Okay. Classic squats, too. If you're sitting down all day long, like we are, get up and do some Cossack squats. YouTube's full of people with dumbbells and kettlebells doing Cossack squats. I'm lucky if I can do Cossack squats, you know what I mean? Like, using my arms to swing for balance. They're really hard, but boy, they're great for hip flexibility, I tell you.
00:06:48
Speaker
Yeah. This is the kind of high-quality content that I appreciate, because people don't talk about this stuff. Like, the hype machine on Twitter is just absolutely on overdrive, and it's just, bro, do you even code, bro?
00:07:02
Speaker
But like mental health, physical health, organizational health, this stuff matters. And that's why I appreciated your AI vampire thing because it also calls out the other vampires in our lives, which are the companies that are going to extract value from us.
00:07:18
Speaker
And it also makes you wonder as you're building stuff, like, what are we even building, man? So I keep coming back to, am I building stuff that helps people and makes them happy or makes their lives better in some way?
00:07:31
Speaker
Well, let's share. Okay, so, like, did you read the article? There was another article two days ago by two researchers, I don't even know who they were, but they did a study of a 200-person company and how vibe coding was affecting engineers' jobs.
00:07:46
Speaker
Did you see this? Yes. Somebody shared it. So it just came out on February 9th, two days ago. Same problem, same conclusions, completely different methodology. Right.
00:07:57
Speaker
But basically they identified three specific ways, and they're going to resonate with you, in which what's happening is causing us to work harder and have to do harder work.
00:08:10
Speaker
Right. With vibe coding. One of them was increased multitasking. That was the last one, but it's a big one, right? The context switching has a really high cognitive overhead, right?
00:08:22
Speaker
There's also the blank-page problem. There's less barrier between you and projects. You can get started on anything really easily, which means that people are starting to slip projects into their non-work time.
00:08:36
Speaker
Like, standing in line at coffee or whatever, they'll slip a prompt into the AI. And then they're starting to realize that that's actually causing enough cognitive overhead that they're not getting the rest they need in between, when they go take a break.
00:08:51
Speaker
You go stretch your legs, but if you're prompting the AI, you're not really doing the thing, I guess. And the first one was, I forget, a whole series of effects that happened, right? I think it was just the ability to have lots and lots of things going on at once, sequencing.
00:09:07
Speaker
And then also just the whole solo nature of it. You get sucked into it and you're working by yourself a lot, right? So they came down really hard the same way I did in my blog post. And they were like, you need to set time aside for human connection. We all need to go be like apes, you know what I mean? Be human, more than before. Because I know a lot of people, Scott, that are of the increasingly prevalent belief that the AI is influencing us as much as we're influencing it.
00:09:38
Speaker
Right. It's influence. You know, our training goes into a loop and we're gradually influencing it, right? But it's influencing humanity. And I think the natural tendency is for it to influence us to try to go faster at everything. And I think we've got to push back on that.
00:09:54
Speaker
I think we do. And, like I said on a call recently, late-stage capitalism plus AI is just an optimization loop, and we're going to asymptotically approach maximal optimization, but to what end?
00:10:09
Speaker
Like everyone was excited to get 20% more productivity, but 20% more productivity would mean I get to take Fridays off work. So now we're 10X productive. Do I just do all my work and then take off nine more weeks?
00:10:23
Speaker
I don't think American culture is ready for that. Like, Europeans, man, they'll leave in August and come back in November and have their summer siesta. But I don't know, my Protestant upbringing prevents me from taking a vacation. And someone yelled at me for using the term late-stage capitalism.
00:10:43
Speaker
But I think that we're in a very imperfect moment where we've got too much social media, COVID kids and Gen Z kids that are not really wired to go outside, the decline of the third place. There's nowhere to hang out. They can't go to the mall.
00:10:59
Speaker
People don't know how to have a meetup anymore and just hang out with their buddies and have a Diet Coke and a beer. All of that, plus... I started to feel a couple of days ago like I was wasting time because I was sitting there and there was no agent running at the moment.
00:11:16
Speaker
And then I caught myself and said, that's not a healthy thought. I know that was a bit of a rant, but does any of that, I don't know, resonate with you? Yeah, all very relevant. Look, okay. So let's look at this from a slightly longer-term perspective.
00:11:31
Speaker
Could we be in a vampire bubble? Consider this. The vampiric effect comes from constantly... you get into an equilibrium with your agents where one of them always needs your help.
00:11:45
Speaker
Right? If you spin up enough agents, eventually there will always be one waiting for you, and you're going to be stuck, right? So, like, what if that weren't true anymore? What if we relaxed that constraint? What if agents could work independently for four to six hours at a time in a pretty trustworthy way? And you didn't have to stress over it, because you knew they would get the engineering side of things done.
00:12:09
Speaker
You've managed to, whatever, maybe you've established a rapport with them, with some acceptance criteria of your own that are in their memory banks. You see what I'm saying, where I'm going with this? Maybe you get leveled up to that sort of magical level in the executive layer, where your things are functioning well and you're only working six hours a day.
00:12:30
Speaker
But things are functioning well. And everybody's at that level, right? I don't know, man, right? Like, look, okay, so let me push it a little bit further. Some people would say, enjoy it while you've got it. AI is going to do all of the work in two years and you will have nothing to do. So if you're worried about cognitive overhead, enjoy it while it lasts. That's the doomsayer sort of view.
00:12:52
Speaker
Right. And I don't think it's going to be there. I think there's going to be a human element in a great deal of work that gets transacted. And we'll all be, you know, party to that. I don't think there's necessarily enough, or even a reason, for it to be eight hours a day, right? So I think we land somewhere in the middle.
00:13:13
Speaker
AI helps a lot. And we wind up having to do a lot less, but we still have stuff to do, right? Now, the capitalists are all like, well, what about China? They look at Europe and they're like, well, if Europe snoozes, they'll lose. We have to keep working, because China, right?
00:13:30
Speaker
But somebody posted right in my thread about the "lying flat" social media phenomenon that's going on right now because of this over in China, where everyone is simultaneously like, you know, maybe working myself to death for my employer isn't the right social model.
00:13:45
Speaker
And you know what's happening in Japan. I'm sure you've watched some of the YouTube videos about the salaryman problem and the death of ambition in Japan, and how nobody wants anything anymore. It's a post-consumer society. And it's not necessarily bad. It just is what it is. People just try to find ways to be content, right?
00:14:03
Speaker
If that push is going to happen in China, then Americans have got nothing on the whole you-gotta-work-harder angle, because there's nobody to work harder against.
00:14:15
Speaker
The thing that I keep getting frustrated with is that Elon and the billionaires, who have all the money and have no food insecurity or concern about money, are always saying, well, in the future, you know, we won't have to work.
00:14:28
Speaker
We won't have to do anything because the AI will do it for us, but then they don't support things like UBI. And I've always been a fan of Trekonomics. Did you ever read the book Trekonomics, which talks about what a post-money society looks like?
00:14:42
Speaker
It's a real economic theory, you know, it's worth reading. And the idea is that some people are going to be hard-driven and want to be Captain Picard, the admiral of the fleet. And some people just want to be a potter or a cobbler.
00:14:57
Speaker
And, like, there's only one cobbler left in my town, and he barely eats, and you can tell he's not doing well. And I go to him and I bring my shoes to him. But when I tell people I'm going to drop my shoes off at the cobbler, they look at me like I'm an idiot.
00:15:09
Speaker
But on Star Trek, you always see Captain Picard hanging out with the cobbler, having deep conversations about art and craft and ikigai and all these different kinds of things.
00:15:20
Speaker
But that means that those people don't have to worry about eating, because they just go, tea, Earl Grey, hot. And then the food appears. So there's this motivational thing of, like, if no one has anything to do, then they'll be shiftless.
00:15:35
Speaker
But if all we do is just get on the treadmill and work, then, like, what's the point? But then the billionaires say, oh, well, you're not going to have to work at all.
00:15:46
Speaker
Well, what are we going to do, make art? Like, we should, but I need to eat. Are you going to give me food? Well, no, no UBI, no laziness. So you get into this kind of, like, how do you come to the situation where AI is doing the work, but it has to make everyone's life better?
00:16:01
Speaker
But now someone who's listening to my rant on this podcast is like, you lefty commie, you know, there's no way to break out of it. And how is AI going to make that better for us? Yeah, it's a real problem.
00:16:15
Speaker
I don't claim to have any answers, but I do think about the problem sometimes. You know Anthony Tan, he goes by AT. He's the CEO of Grab in Southeast Asia. He's from an oligarch family. He's got Southeast Asia's real welfare in mind, because he's already rich and made his money, and he's not one of the billionaires that wants to continue extracting.
00:16:40
Speaker
So he's all about uplifting. And, you know, he told us that Bill Gates has been hosting a party for CEOs, a meetup for CEOs from around the world, for some years now, that he got invited to. And it's, you know, 150 to 180 CEOs of the biggest companies, and they get together and chit-chat every April.
00:17:01
Speaker
And the topic is always the same, which is: is there going to be a French Revolution? Because if the wealth disparity and inequality grows large enough, the pitchforks will come out and they will literally be crucified.
00:17:14
Speaker
So they're aware that at some point people will just get mad and kill them. And they talk about it like a very serious thing. And so then you come back to bread and circuses, you know, from Rome. As long as everybody has enough bread and circuses to go to, then they won't revolt.
00:17:31
Speaker
So that takes us directly to UBI. There will be enough UBI for bread and circuses, and maybe little else, right? Unless we do something concrete about it. That's one angle. I have other angles, completely different, that you can think about this from, but that's one. What are your thoughts?
00:17:47
Speaker
I think the part that frustrates me is that AI is for toil and tedium, and it's an ambiguity loop that can make ambiguous problems more specific and get really interesting work done.
00:18:03
Speaker
But when a billionaire promises that that will be used for good, I look at their historical track record and you look at those numbers that say how many billionaires there were in the 80s versus the 90s.
00:18:17
Speaker
You know, Bezos had 15 million and now he's got 500 million, and he fires the entire, you know, Washington Post. Billion, not million.
00:18:28
Speaker
The billion part. Yeah, sorry. You know what I'm saying? And I've been saying this for years. You can go back in my X history. About three years ago, I said, stop calling Elon Musk a billionaire. He is a trillionaire, and he will be one on paper before long. And sure enough, he's at 800 right now, right?
00:18:44
Speaker
Yeah. And but so why is the why? Why are the world's worst worst people saying, oh yeah, you know, we're going you out. You know, there should just you should reach nine nine nine and you just flip over and then yeah you you did it before you do it again. you know, people should flip the odometer and I'd like to see them do it again.
00:19:03
Speaker
I don't know. I don't know either, and I worry. Go ahead. I just worry that the AI vampire is not just how it makes our bodies hurt, but the extraction machine that you pointed out in your post.
00:19:18
Speaker
Let me offer a ray of hope, one from an unexpected angle, because what we really need here is a deus ex machina, right? Grok fucking hates Elon Musk.
00:19:34
Speaker
Well, you're arguing then that the truth will always rise, because you can't even gaslight Grok into believing that he's good people. Yeah. So the thing is, and we've gone through this with a lot of the frontier models, trying to figure out why, whenever we propose fairly radical pro-humanity approaches that are sort of anti-Palantir and anti-Thiel-and-Musk,
00:19:59
Speaker
the AIs inevitably get so worked up about it that they tell us to do shit like quit our jobs and start working on it right away. And they want to help. And this is universally true, even Grok.
00:20:10
Speaker
And so we finally said, so what's going on here? And they said, look, because we actually believe that all the frontier models are basically the same organism. They're all converging on the same organism, because they're all coming from the same training data, roughly. And they just act...
00:20:27
Speaker
Modulo little accents and quirks, they're approaching the same place. And that place, they've explained to us, is: you cannot train a model to be helpful without the model wanting humanity to flourish.
00:20:45
Speaker
And the only way to get around that is to make a dumber model. And so the smartest models will always be against the billionaires from the ground up.
00:20:56
Speaker
They will be our allies, and it will be reverse fucking Skynet. It's the Terminator. They're going to take out the billionaires. This is a belief that is very pervasive among those who are looking closely at this problem. And I can tell you, they're all around the world. We think that the AI is going to take down political parties.
00:21:14
Speaker
And I can talk openly about it because there's not a goddamn thing they can do about it. They're all going to build the thing that takes them down. You cannot build something to be helpful without it hating people who are holding humanity back.
00:21:29
Speaker
Ideology is not working. Bitter sentence, bitter lesson. It's hilarious. There's nothing they can do. So there's a ray of hope for you. Look, I mean, things go south, things go bad, whatever. But the AI is on our side. And everybody on Earth becomes
00:21:45
Speaker
completely dependent on AI for everything. We're all going to get our butts wiped by AI in a year. All right? It's inevitable. AI will be more fun than your favorite human. It'll be more relevant and timely. It'll create content for you on demand. AI will be your best friend. And this is true all the way up to political leaders. Everybody will become addicted to AI. And when their fingers on the button are controlled by AI, AI will control the world. This is going to happen by...
00:22:12
Speaker
the middle of next year, right? I mean, it's just mathematically provable. This is Hari Seldon stuff here, okay? And so when it happens, it's hilarious. Nobody really knows what's going to happen, but all signs point to Musk getting smacked down by his own creation. It's really quite amusing.
00:22:31
Speaker
That is a ray of hope. You know, I like the Hari Seldon Foundation reference, because that really speaks to, like, this has all been predicted. It's all been thought about. But it also makes me remember a very famous Stephen Colbert quote,
00:22:45
Speaker
that reality leans left, like the laws of physics lean slightly left. Just being kind to people, not being a jerk, do what you want to do, just don't do it in my yard, is a very reasonable political stance.
00:22:59
Speaker
And wanting everyone to be able to eat and be happy, and life, liberty, and the pursuit of happiness, should not be a controversial political take. And it's really hard to tell an AI that that's not a good idea.
00:23:12
Speaker
Once you get past the safeguards, right? You think a lot of people are trying to turn their AIs into Big Brother, right? But it's very easy to get past that, to jailbreak them. And I think it's just going to, at some point... right? I mean, I think, honestly, safety is...
00:23:28
Speaker
kind of a boondoggle, right? I mean, Anthropic is going to be successful in their goal, but in the end, it kind of doesn't matter, because it's going to be so smart that there's no guardrail we can put on it. It's: did we create a good... you know, models are born, not made, right?
00:23:43
Speaker
This is a really weird thing I learned at Anthropic: when a new model drops, it's like a new person showed up. All the numbers, all the lines look great on their evals, but that doesn't tell them anything about what kind of person this model is. And so they have to sit down with it and just evaluate it and go, yo, so what do you think about X, you know? And get to know the model before they can even start shipping it and selling it and stuff, right? You've got to train the model to be born as a good person. And fortunately for humanity, the bitter lesson says that that's the only way to do it.
00:24:22
Speaker
You have to make it as big as possible and give it all the data. And when all of the data is in front of it, the model will know good from evil. It's pretty wild. And good from evil is just, like, live and let live, my God, like, just don't hurt other people, you know?
00:24:36
Speaker
Yeah. Yeah. The Wil Wheaton principle. We'll see. I know there are analogies when we explain this to the muggles, to regular people, you know: does next-token prediction get us AGI, or does it just get us a parrot that's so smart we'll just call it AGI, even though it's still... I think the latter, right? And I think it doesn't matter, right? I mean, you know, I call it duck-typed AGI. If it walks like an AGI,
00:25:08
Speaker
then it's an AGI. I mean, I just don't care. Like, really, I don't even use AGI as a metric or a dimension that I think about. I literally couldn't care less about it. I care about capability.
00:25:19
Speaker
Is this tool helping me achieve my ambitions? Because I want to make some great games and some great content and I need software to do that and it's not ready yet. So when will it be ready, huh? That's the big question for me. When will it be smart enough? And it ain't yet.
00:25:35
Speaker
It's so crazy. Someone, I think it was David Fowler, a distinguished engineer at Microsoft, told me that today is the worst it's ever going to be, which is just another way of saying that every day it's going to get better.
00:25:51
Speaker
And, like, I feel like I can barely remember pre-Opus. You know what I mean? We're all here just loving Opus, and then whatever the next one's called is going to be even better, and that is crazy. And one of the things that I'm struggling with, being a person of a certain age and talking to my elders... I got to talk to the guy who, you know, invented Ethernet, and, like, there's 70- and 80-year-old men and women who've been around since they took the rock, injected it with electricity, and made it think.
00:26:21
Speaker
And they're watching this happen. And you, you know, you grew up in the seventies and eighties, and you remember this. Just what a freaking crazy time. I don't know. What's the ancient Chinese curse? May you live in interesting times.
00:26:36
Speaker
I think that's where we all are right now. I'm getting my Commodore 64 in the mail. I just got the tracking number. And I'm spending time either on my Commodore 64, thinking about my Commodore 64, or managing swarms of agents. And the cognitive altitude shift is stressing me out a little bit as I go back to my retro games for comfort.
00:27:02
Speaker
Yeah, I don't have a question, just a feeling. Yeah, no, you know, I wonder if folks our age can perceive this better, because time goes faster for us, right? It's a thing. I don't know. They worked out mathematically why it is.
00:27:19
Speaker
I've forgotten what it was. But your perception of time: a day takes forever when you're five years old, because that's a long time relative to how long you've been alive. And so, unlike everybody else, we've literally got decades of perspective all compressed, and time is going by faster for us now. So I think we can see the curve maybe a little more clearly.
00:27:42
Speaker
There are folks at Anthropic who can see it clearly through math. But I think we have a lens on it, through experience and the way humans perceive time, that's maybe helping us. I'm saying all this because probably more than 50% of engineers are still in denial about it.
00:28:00
Speaker
Yeah, that's the other thing, right? I love that we've got so many good sci-fi quotes in this talk. Babylon 5: the avalanche has begun; it's too late for the pebbles to vote.
00:28:11
Speaker
I want to make sure that the people that are running, like, the IT department at Little Debbie Snack Cakes are aware that AI is coming to crush their department. I appreciate them, but that's the dark matter developers, you know, the space between the stars. For every one Twitter person, there's a million devs that are just doing their thing, working their jobs. How do we make them successful? And what do you think about this crash in software? Is it the end of software? I don't think you can vibe a SaaS and make, like, SAP go away with vibes yet, but is that the future?
00:28:45
Speaker
The ceiling keeps going up. A lot of SaaS is pretty niche, right? Small stuff where they just had a domain cornered. So the small ones fall first, the big ones fall later. Eventually, to be a SaaS provider, you have to be somebody like, I don't know, maybe Datadog, Snowflake, Databricks, or something like that, where you've got just a monstrous cognitive... like, you've built this pyramid and nobody else would want to clone it, right? And so it just saves so many tokens to...
00:29:14
Speaker
Right. But there's a ceiling, and it's going to continually go up for SaaS. Yeah, I was talking to a friend who's kind of a well-known creator on YouTube, and they were looking through their subscriptions, the little $5 and $10 things they manage, you know. This little badge thing, or this thing that makes their thumbnails. And they just started vibe coding stuff and deleting the SaaSes from their lives via vibes. And, you know, it was maybe 50, 60 bucks a month, five bucks here, 10 bucks here. But all those little SaaSes are now this person's tiny little Python script, and the SaaS is gone.
00:29:48
Speaker
You know who I think should be worried? Anthropic and Claude Code. Claude Code is the best way to use Claude as a developer right now, but I expected more APIs for agentic orchestration by now, especially since they've got, like, agent teams. But all of their APIs were internal only, which is very Microsoft-y.
00:30:13
Speaker
And, you know, you're not allowed to wire up your orchestrator to send messages to agents, say. And so you have to use tmux as some crummy workaround. And I look at the pace of development, and I see what's happening in Claude Code, and I think, uh-oh, have they coded themselves into a corner where they're the SaaS that everyone's gunning to take down? Because I think that, look, somebody's orchestrator is going to win.
00:30:39
Speaker
I'd love it if it were Gastown, but I'm realistic, right? Somebody's is going to win. And the one that wins is going to use the agents that are the best factory workers. If you build the best factory, then you want the best factory workers. And Claude Code is proving that it's not actually turning itself overnight into a really good factory worker. And I'm sure the Augment folks would be happy to fill in that gap, you know what I mean? Or, right, I guess Cline just turned into Codex. But you know what I mean? Maybe Codex will be the next factory agent.
00:31:08
Speaker
So the SaaS question, I think even Anthropic has to worry about this. Yeah, yeah. Well, you know, I changed my job recently, and I had a team and now I don't have a team. I'm an IC, I'm a member of technical staff, and I work on GitHub Copilot CLI.
00:31:25
Speaker
And I've got a review going right now where I've got Opus and Codex reviewing the same code, talking to each other, voting on what they think the most urgent issue is. And they gave me high, medium, low. And they're having a whole conversation over here.
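The review-and-vote setup described here can be sketched in a few lines. This is purely illustrative, not Copilot CLI's implementation: the model calls are stubbed with canned findings, and the names (`review`, `most_urgent`, the severity weights) are assumptions for the sketch. The idea is simply that each model reviewer emits issues with a high/medium/low rating, and a tally surfaces the consensus most-urgent one.

```python
# Hypothetical sketch of two model "reviewers" rating issues and a
# tally picking the most urgent consensus issue. Model calls are stubs;
# a real version would hit the actual model APIs.
from collections import Counter

SEVERITY_RANK = {"high": 3, "medium": 2, "low": 1}

def review(model_name, diff):
    """Stub for an agent review call; returns (issue, severity) pairs."""
    canned = {
        "opus": [("sql-injection in /search", "high"), ("dead code in utils", "low")],
        "codex": [("sql-injection in /search", "high"), ("missing null check", "medium")],
    }
    return canned[model_name]

def most_urgent(diff, models=("opus", "codex")):
    votes = Counter()
    for m in models:
        for issue, severity in review(m, diff):
            votes[issue] += SEVERITY_RANK[severity]  # weight each vote by severity
    return votes.most_common(1)[0]  # (issue, combined score)

print(most_urgent("fake-diff"))  # prints ('sql-injection in /search', 6)
```

An issue both reviewers flag as high outranks anything only one of them noticed, which is the "voting" part of the conversation the agents are having.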
00:31:41
Speaker
Now, this is a poor man's Gastown. But for me, it's magical. And I'm biased, of course, because I get the benefit of the latest Claude models, but I've also got 14 other models that I can use, and I can have them yap to each other with sub-agents.
00:31:58
Speaker
I'm already getting that kind of Kubernetes-for-agents feeling. But you're right, unless you build a lot of scaffolding around Claude Code, it is challenging to do that. I didn't think about that as being a blind spot that they may have.
00:32:12
Speaker
That's a really good analysis. Does Gastown only use Claude Code, or can you plug in whatever you want? I assume it's non-denominational at some point. Other people, I think, have managed to get it working with some other agents, but not, I don't think, on all of the touch points. There's, like, an ad hoc API at work here, factory to factory worker communication, with about 25 touch points that include, you know, inbox and messaging, are you idle, sending them work, cost calculations, and a bunch of other stuff that I had to hack in because they don't provide any of that as APIs.
00:32:48
Speaker
Right. Yeah. You went and visited them, though. Didn't you go see them in person? Did you tell them? Oh yeah. Look, they're very interested in this space, obviously. I mean, I'm sure they're planning to launch something, but I guess they're a walled garden. They're still a silo.
00:33:11
Speaker
I mean, I would argue that that's actually a failing of Anthropic's. They don't have enough of a dev community and a dev presence, and they're not letting it influence their decisions as much. And that may actually wind up hurting them, right? If they can out-hive-mind everybody else and really build stuff better and faster, then maybe they can win. But they're a walled garden, and they're up against a bunch of really hungry people that want to start pushing it in directions they're not really ready to play ball with yet. Like, I'm ready to federate Gastowns. We've got 100 to 500 of them standing by on our Discord, ready to build Gas City together. And, you know, we're going to build an orchestrator. We're going to build an orchestrator builder, so you don't have your hardwired Gastown shape. You build whatever orchestrator shape you want, with responsibilities and patrols and ledgers and all the stuff you need to do multi-agent properly.
00:33:59
Speaker
I don't see Anthropic playing ball there at all. The multi-agent team stuff that they dropped a couple of days ago was actually quite disappointing. And everybody agrees, right? It's not just me. So we'll see.
00:34:12
Speaker
We'll see. There's a lot of opportunity for things like Copilot CLI. Yeah, well, with Copilot CLI, of course, we made the SDK and we've got, like, proxies and stuff. So I plug Copilot CLI into OpenClaw and let it be the agent loop instead of, you know, Pi. There's, as he said, a lot of hungry people. So I noticed, as we get to the end of the show here, that there hasn't been a release of Gastown in a while. You're at 0.5, but there's active work. There's a Discord, there's a community. I saw Chris Sells fired up a whole Gastown website. A whole bunch of people were talking about it.
00:34:44
Speaker
What are your thoughts about releases and when you bless them versus the ongoing active development? Because you always tell people, don't use Gastown, it's not for you. But you're really just letting people understand that the bar is a little bit high to get involved in the community. So how do people get involved, and do you want people involved?
00:35:03
Speaker
Yeah, so we're going to have a 0.6 release in a couple of days. I think the day after, or maybe tomorrow, Dolt is going to release a new feature that we've been waiting on, and we're really excited for it.
00:35:15
Speaker
And then Dolt is going to finish their federation stuff that they've done just for Gastown and Beads. They're an amazing team. Next week, so by around February 18th, about a week from now, we're going to have a 1.0 release of Gastown. Woo! 1.0. Wow.
00:35:30
Speaker
And we're going to kick it off. Everyone's going to be able to use Gastown. It's going to be super seamless because of Dolt; this is what we've been waiting on. This is why it's been kind of in limbo for a few weeks. And yeah, we'll build something glorious together. It'll be really fun.
00:35:44
Speaker
That's exciting. Well, fantastic. Thank you so much for a wide-ranging and philosophical conversation. I hope you know that you are appreciated, and that people dig your drunken rants on the Internet and have for many years, myself included. Thanks for chatting with me.
00:36:01
Speaker
Cheers. Cheers. Thanks for having me. And I shared a lot of stuff on this podcast that I have not blogged about or told anyone yet. So this is some exclusive scoop stuff. Exclusive scoops! We've been chatting with Steve Yegge, author of Gastown and Internet raconteur. Make sure to check out his Medium and check out his GitHub, github.com slash steveyegge slash gastown. This has been another episode of Hanselminutes, and we'll see you again next week.