
Why Every Code Migration Feels Different (and What to Do About It) | Ep. 8

Tern Stories

If you’ve worked on more than one code migration, you already know the punchline: none of them are the same.  

Sure, it sounds like a React 17 → 18 upgrade should follow the same plan for everyone.   

But a real-world code migration always finds a way to be uniquely painful. Your app uses different corners of the library.

Your plugin stack has its own fingerprint.  

Your engineers wrote some “just for now” code that’s still running five years later.  

Whatever you thought the code migration would be, it won’t be.  

This was one of the biggest takeaways in a recent episode of Tern Stories, where we looked inward for once.  

My cofounder Ryan and I talked through what we’ve seen in our own migrations and the ones we’ve supported across companies.  

Despite the variety, a few consistent truths emerged.

---

Get Tern Stories in your inbox: https://tern.sh/youtube

Transcript

Introduction: Tolstoy's Analogy on Migrations

00:00:00
Speaker
In Anna Karenina, happy families are all alike; unhappy families are all unhappy in their own unique way. If Leo Tolstoy were a vibe coder: every successful migration is alike.
00:00:11
Speaker
Every unsuccessful migration is unsuccessful in its own way. In today's episode of Tern Stories, I sit down with my co-founder, Ryan Greenberg. Between the two of us, we've done dozens of migrations across Twitter, Slack, and other at-scale companies.
00:00:24
Speaker
But we've also talked to hundreds of engineers who have done migrations at both far bigger and far smaller companies. Today, we're not going to talk about a single migration.

Why Migrations Differ from Feature Work

00:00:33
Speaker
We're going to talk about all of them.
00:00:35
Speaker
What are the patterns? What are the lessons? What are the surprises that we see that are common to every one of these big changes? Why do migrations feel different than feature work? How can you build confidence by doing the migration and not just planning it?
00:00:49
Speaker
And will AI just do all this for us? Well, sure, but not with the tools we're currently using. Enjoy. All right. On today's Tern Stories, I've got with me Ryan Greenberg, who was an engineer at Twitter, an engineer at Slack, and who I now have the wonderful privilege of working with as my co-founder at Tern.
00:01:07
Speaker
So, Ryan, welcome to the show.

Ryan Greenberg's Migration Journey

00:01:09
Speaker
Thanks so much for having me on our podcast, T.R.
00:01:13
Speaker
I know the scheduling was brutal to get you here. Okay. I know the answer to this question, but because our audience hasn't met you yet, tell me a little bit about why you decided to spend 40 or 80 hours a week working on my creations.
00:01:30
Speaker
Yeah. I mean... I think it was initially a tough sell, when we were like, do we really want to do this? And then I was thinking about it and I realized, wait, I'm kind of already doing this in some way, and I've been doing it for many years.
00:01:44
Speaker
And so really the idea of, well, let's figure out how we can make this easier for other people and give them tools to approach and think about it. That kind of mindset shift made me think, yeah, let's do this together.
00:01:57
Speaker
You already know this, of course, but we met because we were working at Slack. I was working on a team where we did a lot of migrations or migration-shaped projects. And you had one of those projects that my team ended up helping with.
00:02:10
Speaker
Right, and we actually talked about that last week, with a little bit of the end of the memcache story with Glenn. Yeah, exactly. Cool. So I think today is going to be a little bit of a different format from our usual: find someone and talk about an external migration.

Tolstoy Analogy Applied to Migrations

00:02:29
Speaker
Ryan and I have talked about a ton of migrations with a ton of different people, in addition to our own experience. And I wanted to share some of the lessons we've learned and some of the bits and pieces of those migrations that feel common or uncommon across each of the stories.
00:02:45
Speaker
Yeah, for sure. Yeah.
00:02:48
Speaker
So one place I think it makes sense to start is: I don't know about you, but I thought that migrations would all kind of have the same pattern. We would help folks, especially folks doing the same migration, and we would be able to say, well, we're the migration experts.
00:03:07
Speaker
We can help you upgrade from React 17 to 18 like we have for 15 people before. That didn't really happen, did it? No, not really. I mean, it definitely reminds you of... what's that line from Anna Karenina? Happy families are all alike.
00:03:23
Speaker
Unhappy families are all unhappy in their own unique way. And I think we do really see that with migrations, just because they end up being uniquely difficult. And you're right, it definitely seems a little counterintuitive, because it's like, hey, you're upgrading from Vue 2 to 3. They're upgrading from Vue 2 to 3. You basically run the same playbook.
00:03:44
Speaker
You know, you would think, OK, but what happens? Well, it maybe doesn't seem intuitive at first, but if you go one level down, I think it does make a lot of sense.
00:03:55
Speaker
A lot of these, first of all, are libraries with a significant surface area. So it's expected that some people are going to use one part of the library and other companies are going to use different parts, and so they're going to have to focus on different parts of the upgrade.
00:04:09
Speaker
One example I was thinking of was a Rails upgrade I worked on many years ago, which is still quite sharp in my mind. It was an upgrade from Rails 2 to 3.
00:04:20
Speaker
And we had hooked into ActiveRecord, the Rails ORM, in this kind of unholy way to accomplish something interesting. And in that jump from 2 to 3, we had just a bear of a time making that change in a way that other people doing the Rails 2 to 3 upgrade did not, because ActiveRecord and Arel and a couple of other things changed.
00:04:39
Speaker
And so we had to do a lot of different work that other people didn't have to do in that upgrade. And it's in part about what parts of the library you're using. You're going to have to adjust to different changes.
00:04:50
Speaker
And that's just the thing itself. So if you're going from version 2 to 3, you're going to be using some stuff, and other people are going to be using other stuff. The applications are shaped differently. You're going to have to make different types of changes.

Challenges in Using Internal APIs

00:05:02
Speaker
It makes sense. This was at Twitter, right? Yeah, that's right. We had a couple of Rails monoliths and this was one of them. So tell me a little bit more about that. ActiveRecord is a pretty core part of Rails. Why was Twitter using it in a different way that made that upgrade particularly hard?
00:05:25
Speaker
Yeah, you know, I'm trying to remember the specifics of it, but it was largely that people were just writing code to get something done. And so somebody had an idea for how we would hook into the DB layer in a way that ActiveRecord didn't expose in its public API.
00:05:40
Speaker
And so we were like, OK, we're just going to write the code to do this thing. And you're playing with fire when you do this, whenever you touch these internal APIs, right?
00:05:51
Speaker
It worked. It did the thing we wanted, and we were happy until this moment came. And then all of a sudden the debt came due and we were like, oh, shoot. There's no migration path, because nobody wanted us to do this. No one expected us to do this. We just happened to do it in order to accomplish something. I think it was to subscribe to some type of change in the database. But the main point was that we were doing something that worked but that you weren't really supposed to do. And so there was no paved path for us to continue doing it in the future. And it took somebody weeks to unravel and figure out basically how to do it from scratch.
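To make that failure mode concrete, here's a minimal sketch, in Python rather than Ruby; the ORM classes and method names are all invented stand-ins, but the shape of the problem is the same: a hook on an internal method simply stops firing when the internal gets renamed in the next major version.

```python
# Hypothetical sketch: why hooking a library's internals breaks on upgrade.
# OrmV1/OrmV2 stand in for any ORM; every name here is invented.

class OrmV1:
    """Pretend ORM, version 1: writes flow through an internal method."""
    def _dispatch_write(self, row):
        return f"wrote {row}"

    def save(self, row):
        return self._dispatch_write(row)

# The app monkey-patches the *internal* method to subscribe to DB changes.
subscribers = []

_original = OrmV1._dispatch_write
def _patched(self, row):
    subscribers.append(row)          # our unsupported hook
    return _original(self, row)
OrmV1._dispatch_write = _patched

OrmV1().save("a")
assert subscribers == ["a"]          # works great... until the upgrade

class OrmV2:
    """Version 2 renamed the internal; the patch no longer applies."""
    def _execute_write(self, row):   # _dispatch_write is gone
        return f"wrote {row}"

    def save(self, row):
        return self._execute_write(row)

OrmV2().save("b")
# The hook silently stopped firing: nothing was ever promised about
# _dispatch_write, so there is no migration path to follow.
assert subscribers == ["a"]
```

Nothing in the public API changed, so the upgrade guide has nothing to say about it; that's exactly the "no paved path" problem.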
00:06:25
Speaker
Yeah, absolutely. And you'd figure this is a story from a decade ago, and that was Twitter's migration, and we've all learned our lessons. But no, I talked to somebody two weeks ago who decided they were going to integrate a GraphQL API with a Python ORM, hooking into the internals of it, and now they're stuck three versions behind.
00:06:43
Speaker
So, yeah, of course. That's pretty common. And so that's one aspect of it: these libraries are relatively large, and people are using them in different ways. But on top of that, often the challenging upgrades happen in a larger ecosystem.
00:06:58
Speaker
You know, whether it's Rails or Django or any of these larger frameworks, people rarely use just that library off the shelf. Usually they're using it with an ecosystem of plugins
00:07:10
Speaker
that are part of their upgrade story. So in bumping a Django version, or going from React 17 to 18, you are presumably using a lot of things that hook into that library.
00:07:23
Speaker
And you need to figure out how the things that you're using are going to be upgraded along with everything else. And it's unlikely that your fingerprint of all of those dependencies and the plugins you're using is exactly like somebody else's.
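One way to picture that "fingerprint" idea: two apps on the same headline version can still face completely different upgrades, because the upgrade is really against the full resolved dependency set. A toy sketch (package names and versions are invented):

```python
# Sketch: no two apps share the same upgrade, because no two share the
# same dependency "fingerprint". All package names/versions are made up.
import hashlib
import json

def dependency_fingerprint(lockfile: dict) -> str:
    """Hash the full resolved dependency set, not just the headline library."""
    canonical = json.dumps(lockfile, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

app_a = {"react": "17.0.2", "react-router": "5.3.0", "enzyme": "3.11.0"}
app_b = {"react": "17.0.2", "react-router": "6.2.0", "react-query": "3.34.0"}

# Same React version, entirely different upgrade problem.
assert dependency_fingerprint(app_a) != dependency_fingerprint(app_b)
```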

Strategies for Tackling Migration Complexity

00:07:34
Speaker
Yeah, that makes sense. I mean, there's a migration guide to get from Rails 2 to 3 to 4 to whatever version it is now. And there's a guide to get from React 17 to 18. But there's not a guide to get from React 17 to 18 if you're using these 15 different libraries, because you're probably the only company in the world doing that.
00:07:54
Speaker
Right. And, you know, each of those plugins is going to have its own story. Like, did they think about: is this person or this platform going to use my plugin to get from here to there?
00:08:05
Speaker
You know, maybe there's a team of people developing it, and maybe it's just a solo person doing it as a side project, and you've taken a dependency on that. And so figuring out how you're going to get not just from Rails 7 to 14 or whatever the current version is, but how all of your plugins are going to go along with it, adds a lot of complexity that's unique to different companies.
00:08:27
Speaker
Yeah, absolutely. So tell me, have you seen or heard patterns of how people tackle that complexity? Like, you can't follow the guide. You can't just hit the button and do it. And we'll talk about why; at this point, no one's had success chucking it into ChatGPT and having it rewrite it for them. What are the different styles of plan that people come up with?
00:08:51
Speaker
Yeah, I mean, the most popular way for people to tackle this is a research and discovery phase that's coupled with just doing it. You know, where you're like, okay, let's rip the bandaid off. Let's see what's involved.
00:09:06
Speaker
And so you just change the number in your dependency, and then you go through this series of hurdles. You're like, all right, can I find a set of dependencies that will even install once I've done this bump?
00:09:17
Speaker
You know, do I need to sort something out with one of my packages? And then you ask, can I even get my application started? You'll go through a set of hurdles and try and figure some stuff out there. Some things you'll fix.
00:09:28
Speaker
Some things you'll just hack around and make a note of as part of your discovery. Like, okay, we're going to have to sort this through; I disabled it or I commented it out. Then you move on to getting your tests passing. Maybe there's a phase of: how do I even get my tests running?
00:09:43
Speaker
And then you start to chunk through it: all right, the tests are failing in this way, or, depending on the language you're using, I'm getting compile errors here or there, or type-checker errors. And you'll sort through those until you finally
00:09:55
Speaker
emerge either with, if you were fortunate, the completed project, or basically a plan for how you're going to complete it. You know, it depends a little bit on what attitude you're going in with.
00:10:07
Speaker
You might find out, oh, this particular one wasn't that bad; we're pretty close to doing it. Right, yeah. But in some cases you emerge with, OK, this is the Google Doc, the Paper doc, whatever it is, and it's 20 pages long, where you've ingested information from the migration guide, blog posts that you've read, and specific errors in your code base. And that's kind of the proto-plan for how we're going to tackle this thing.

AI's Role in Migration Challenges

00:10:34
Speaker
Yeah, there's this ambiguous outcome to it, where you can go through and do the work. And if you're lucky, you will have done the work by the time you've understood what the work is. You're like, oh, it was actually just 17 super easy steps.
00:10:51
Speaker
We cranked it out in the course of two weeks. And then there's the unhappy path, which I feel like is much more common, where you end up uncovering some giant piece. Like you peel the onion and the third layer is just totally rotten, and it's going to take four months of work before you can get to the fifth. A semi-common place this crops up, especially with these larger upgrades, is changes to your tests.
00:11:12
Speaker
I've seen this happen a bunch of times. That Rails upgrade I mentioned: not only did we go from Rails 2 to 3, but, and I'm trying to remember the details, it's like you could use RSpec 1 with Rails 2 and you could use RSpec 2 with Rails 3, but there was no other combination of them. So in addition to doing this pretty beefy upgrade, we also had to update all of our tests.
00:11:38
Speaker
And that's a story you see with, say, a React upgrade. We had this issue at Slack with the Vue upgrade from 2 to 3: multiple people looking at how the way they do testing with Vue Test Utils changes. And so there's this thing where the tests often will provide you with some ability to do the migration and figure out what needs to change.
00:11:58
Speaker
And sometimes they're also a weight on it, where, okay, I have 5,000 of this thing and they all need to change in some way. Right. It almost goes from being, like you said, an asset to a liability, because even if the tests pass, you don't know if those tests are actually testing anything valid anymore, because you just had to rewrite 5,000 of them.
00:12:21
Speaker
Right. Exactly. How does tackling the testing changes happen? Coming back to this theme of every migration being painful in its own way: are there any patterns you've seen around whether testing changes have their own nuances, or are those a little bit more standardized?
00:12:43
Speaker
You know, I think the way a lot of people try to make progress on these things is they try to figure out: is there some intermediate state where I can bridge between these two worlds?
00:12:55
Speaker
And that's a pretty common technique for this, and really for any large-scale change, because in most cases you really don't want to be in a place where you're like, OK, I've got this long-running branch.
00:13:07
Speaker
We're fixing all 5,000 tests, and once we get there, then we're going to merge it along with the upgrade. Occasionally you do that, but that's a really uncomfortable place to be.
00:13:19
Speaker
And so I think often people are trying to figure out: is there some compat mode, or is there a shim that I can create myself, to allow us to exist between these two behaviors? Or can I depend on both of these things simultaneously so that I can move some tests to the new thing?
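Here's a minimal sketch of that shim idea, with invented stand-ins for the old and new testing APIs; the point is that every test calls one in-house function, so each test can flip to the new world independently instead of all 5,000 moving in one branch.

```python
# Sketch of a compatibility shim: one in-house interface over both the old
# and new API so tests can migrate one at a time. OldTestLib/NewTestLib
# and their call shapes are invented stand-ins, not a real library.

class OldTestLib:
    def mount_component(self, component):
        return {"wrapper": component, "api": "old"}

class NewTestLib:
    def render(self, component, options=None):
        return {"wrapper": component, "api": "new"}

def mount(component, *, use_new=False):
    """Every test calls this shim; flipping use_new is the whole migration."""
    if use_new:
        return NewTestLib().render(component)
    return OldTestLib().mount_component(component)

# Tests migrate incrementally: each one flips its own flag when ready.
assert mount("Button")["api"] == "old"
assert mount("Button", use_new=True)["api"] == "new"
```

When every test is on `use_new=True`, the shim and the old dependency can both be deleted.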
00:13:35
Speaker
But those are all variations on: how can I do the work incrementally? Yeah. One of the patterns I've seen is getting the tests running with the new testing library, RSpec 2, or, you know, Vue 2 to 3 has exactly that same pattern: Vue Test Utils goes from 1 to 2 and it's only compatible with the old version or the new version. You can get the new tests running, and they will absolutely fail, because
00:14:06
Speaker
they're not compatible with your existing libraries, and that's okay. Now at least you have a test suite that you can try to run, and you're back in this mode of: well, when the tests pass, I'm probably one guaranteed turn of the ratchet closer to finishing the migration.
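The ratchet can be made mechanical. A sketch of the kind of check a CI job might run; the failure counts here are invented, and a real setup would read them from the test runner and a stored baseline file:

```python
# Sketch of the "ratchet" idea: record a failure baseline, then only allow
# it to go down. In real CI the counts would come from the test runner.

def ratchet_check(current_failures: int, baseline: int) -> int:
    """Fail if things got worse; tighten the baseline when they improve."""
    if current_failures > baseline:
        raise RuntimeError(
            f"regression: {current_failures} failures > baseline {baseline}"
        )
    return min(current_failures, baseline)  # new, tighter baseline

baseline = 5000          # all new-framework tests fail on day one: fine
baseline = ratchet_check(4980, baseline)   # someone fixed 20, ratchet turns
assert baseline == 4980

try:
    ratchet_check(4990, baseline)          # backsliding is blocked
except RuntimeError:
    pass
```

Each green turn of the ratchet is guaranteed progress, which is exactly what the long-running-branch approach can't give you.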
00:14:23
Speaker
So, it's 2025.
00:14:25
Speaker
Why isn't AI just solving this for folks?

The Importance of Context in Migrations

00:14:30
Speaker
Why isn't AI just solving this for us? Yeah, I mean, I think some of it has to do with the expected complexity and size of these migrations, right?
00:14:45
Speaker
Like, if you're upgrading a small dependency, the kind of dependency where you might today have success: okay, I bumped a dependency. There were some errors.
00:14:56
Speaker
I went through and fixed the errors. Maybe I consulted a guide. You know, I think that if you tried to use AI to do that style of change today, there's a decent chance you'd have success, or at least it would make the process a bit faster.
00:15:10
Speaker
Those are not generally the changes, though, that I think people really stumble on. That's kind of like, hey, we need to floss. Maybe not as often as our dentist wants us to, but we're going to bump these things.
00:15:20
Speaker
And so I think people are seeing success with that kind of simpler case. But then you come back to what we were talking about: okay, what do you have to do to figure out what even needs to be done?
00:15:32
Speaker
You're synthesizing from a lot of sources, whether it's: how do I get the application to start? How do I resolve this issue? How do I get my tests to run? How am I supposed to interpret the output from them?
00:15:45
Speaker
And it does end up requiring a lot of different tools and modalities, and in some cases just a massive amount of context. And so we've had this discussion a lot, in terms of swinging back and forth between being really impressed by what models can do, and then on the other hand being like, you're just not...
00:16:06
Speaker
...that smart. They're just such idiots sometimes. It's really shocking how dumb they are. Yeah, I mean, they have these idiot-savant moments, where at one moment I'll be like, wow, I can't believe it produced that. And on the other hand, it'll churn trying to edit a file four or five times and you're like, I can't believe you're getting stuck trying to change line 30 of this file. That's the easy part. This is something I can absolutely do.
00:16:35
Speaker
I think that's right: one of the hardest migrations, where you see people really struggle not to do the migration, but to even figure out how to do the migration, is if you're trying to move a database, especially if there's no context around the database itself and how it's used. When we talked to Nanad a little while back,
00:17:00
Speaker
he pointed out that they were moving or consolidating a number of Postgres databases, and understanding what the production state is, but then understanding how people expected that production state to be used, is the question when it comes to doing this migration. And you can see that at every level of the stack, right?
00:17:20
Speaker
"I need to get this application running locally" is a question that can't be answered in code alone. It might be a process that's also on your laptop. It's almost local context, but the migrations need that context in order to succeed. And in some cases you need to go all the way out to production, and you need to do user interviews with people who've been at the company for a long time in order to get that context. And, you know, Cursor doesn't do that yet.
00:17:51
Speaker
Yeah. I mean, I think there are two perspectives here, right? One is that if you get enough context, and in some of these cases you're talking about a huge amount of context, you're approaching the theoretical maximum: can it ingest everything that could be knowable about the world of my software application?
00:18:10
Speaker
Right. If we did all of that, would that be enough to just make it work? And I think people are coming down on different sides of that question, where the maximalists are like, oh, yeah, that'll be enough,
00:18:27
Speaker
kind of pushing aside the question of how easy and how tractable it is to get all of that context. Yeah. And then the other side is like, well, it's a little hard to predict all of the things that you're going to need to know and need to do in some of these cases.
00:18:42
Speaker
So what do we do when we get to those hard things that are just not working? You know, the more complex version of: I tried editing this file four or five times and I'm just not able to do it.
00:18:55
Speaker
So I need a person to figure this out. Editing a file is, of course, a simple case, but you still see plenty of other things where it's like, I don't know what the heck's going on with this JavaScript dependency in this editor component; something really weird is going on.
00:19:11
Speaker
And I think often the required context is just something that you're not going to anticipate being able to provide ahead of time. It might be that you have to go talk to Kelly and figure out: what does she, as an expert, know about this thing that you need to make forward progress?
00:19:28
Speaker
Yeah, absolutely. And I think that's where, if I summarize this set of patterns we've seen: the way that people do migrations doesn't so much fall into different broad patterns. Everyone is starting with the question of: how do I gather as much context as possible?
00:19:53
Speaker
And then as they peel the onion, they get different answers, because their setups are different in every case, and that's why every migration feels different. The core loop of how do I get this thing to a better state, how do I ratchet progress, and what do I have to do to get there is the same. But the answer varies

When to Avoid Migrations

00:20:13
Speaker
wildly.
00:20:13
Speaker
Sometimes it's, oh, I just have to make these seven changes. And other times it's, oh, I guess I've got to go talk to the seven people we laid off last quarter. Woof. That's a hard one. Hypothetical, I will note.
00:20:27
Speaker
I'm sure it's happened out there. We'll find it eventually. So you said earlier that one of the key strategies for learning how to do a migration is simply to do the migration.
00:20:42
Speaker
I was a PM in a past life, and I saw another strategy, which is: just don't do the migration. Which, I think, is a common conversation. You have some pile of upgrades or some work that you know would be better for the code base, for stability, for whatever.
00:21:03
Speaker
And you end up having this tough conversation of, why do we have to do this migration right now? What do those conversations feel like, and what kind of patterns have you noticed among the conversations you've had, and the ones you've had with our customers, about that?
00:21:20
Speaker
Yeah, I mean, to be frank, I also agree that not doing a migration is a great option. Very easy to implement, very time-efficient. And I remember, maybe eight or ten years ago, I was writing this piece on migrations and I had this whole segment, which was,
00:21:39
Speaker
particularly for technical stakeholders: are you sure you have to do it? Because you may be signing up for something that's really hard and really time-consuming. It costs not just time, but also usually political and social capital.
00:21:53
Speaker
And so you want to be pretty confident that you have to do it. If you don't, or if you do it and it turns out you didn't really have to, it's hard to get those resources back, you know?
00:22:05
Speaker
So absolutely there is this moment where, on the tactical side, you have to ask: do we actually have to? Because sometimes you just get driven by desire around these things. Like, it would be better this way, or it would be a little bit safer. So the first thing is you have to look in the mirror and ask: are we sure that we want to embark on this journey?
00:22:21
Speaker
Right. Engineering is a multiplayer sport, right? You have to know that you want to do this and that the organization wants to do it. Our old SVP, Fuzzy, who's now at Notion, would always talk about the tokens you get when you're hired: as a senior staff engineer, you get a pretty big pile of tokens; as a junior engineer, you get a small pile; but everyone gets

Technical Debt vs. Product Needs

00:22:46
Speaker
it.
00:22:46
Speaker
And a big part of what happens at a company is that you need to make good decisions, and you will earn more tokens and grow your pile by making good decisions and, you know,
00:22:59
Speaker
pushing the company in the right direction. But you can also spend those tokens. You can spend them on bad migrations. For sure. And some of these migrations are an all-in move, where you're pushing them all in, because at the extreme you could easily spend an engineering decade of effort on some of these changes, once you fan it out to all these teams and say: here's what needs to be done.
00:23:20
Speaker
Read this guide, make these changes, verify. It's very easy to turn around and find that you've had a hundred people working on something for a month or two. Yeah. So, okay. Don't do the migration. That's one thing.
00:23:33
Speaker
The other thing is: okay, so you have come to the belief that you have to do this thing. And one of the key things to do is to figure out which business value, or which capability, you're tying the success of this migration to.
00:23:49
Speaker
You know, there are some that are really easy, because there are really concrete numbers, like money. We're going to do this thing and it's going to save us X. And if people like X and they believe your plan, then fine.
00:24:03
Speaker
There are some that are harder to talk about as we go down that spectrum. Things like performance: those are harder numbers. So you're moving away
00:24:17
Speaker
from really certain migrations, and then you start to get to things like, all right, well, what about safety, or developer experience? Those start to become a little bit harder to justify. On another dimension, though, you're looking at capabilities: does the business need some capability?
00:24:37
Speaker
Sometimes it's something more ambitious, like we need to do X in order to ship a product. Sometimes you see those related to security or legal compliance, where the capability is that you want to continue to be SOC 2 compliant, or to comply with whatever you've promised your customers or vendors. Yeah.
00:24:55
Speaker
Hey, we will comply with this standard. And that involves not using any end-of-life libraries, platforms, OSs, et cetera, you know? Yeah. This gets really sharpened up in infrastructure, because a lot of infrastructure work is, frankly, downside protection. Infrastructure has to work, and if it's down, incidents hurt a lot more than a 10% bump in performance helps you. We would talk a lot about, if we lost...
00:25:27
Speaker
you know, if we lost the ability to promise code review at Slack, if we lost the ability to promise CVE fixing on a certain timeline, the downsides of that were just so much worse than almost any other upside could deliver. It's not going to net out: if we're going to lose half the customer base, it doesn't matter how good the product is at that point.
00:25:51
Speaker
For sure. Yeah. And that was, I think, in some ways... I don't want to say it's unique to infrastructure, but it is a classic of infrastructure and platform, and almost not purely backend, but the infrastructure side of backend teams.
00:26:09
Speaker
You were in the product org, though. Doesn't product, with the infinite upside of the revenue potential of this new line of business, always win the prioritization fight?
00:26:23
Speaker
Yeah, I mean... That is a pretty familiar tension for engineering organizations, right? It depends on the inclinations of the organization, but generally speaking, you're right: it often feels like product stuff will just win.
00:26:40
Speaker
But that's the role of software engineers and engineering managers: to help navigate those tradeoffs, right? Because sometimes, left to our own devices, engineers will just spin our wheels on, insert your favorite word, refactor, polish, cleanup, et cetera, without necessarily creating the new capabilities that are going to drive business or revenue or customers or user happiness or growth and so on.
00:27:05
Speaker
So there is that real tension. And I think the key part of why you see success in infrastructure is that you need to be able to communicate clearly about what you're going to get and why that matters.
00:27:21
Speaker
And then there are also some secondary issues around: do people believe that your plan is going to get you to that place? Right. Yeah. I would often talk to engineers on my team when we were embarking on a bunch of refactors in Slack's notification stack.
00:27:36
Speaker
And this may be obvious to some of our listeners, but I'd try to explain: look, we are asking for what you can think of as a pretty significant chunk of money.

Communicating Migration Benefits to Stakeholders

00:27:45
Speaker
If you think of our salaries, times the number of people on the team, times the fraction of the year that we're planning to work on this.
00:27:52
Speaker
And people just very reasonably want to understand: why do we need to spend this money, and what are we going to get for it? I think that if you can bring the conversation to those terms, you're going to have a lot more success getting the buy-in that you want.
00:28:05
Speaker
Absolutely. I think in some ways the Slack notification stack made that easy, as just a perpetual source of confusion or complaints.
00:28:15
Speaker
I remember there's a diagram of how Slack notifications work floating around the internet. Yeah, I think that's somewhat infamous. And some of it is, look, there's only so much you can technically solve in a product that complex. But, you know, I think we did good work.
00:28:31
Speaker
The other thing that I will say, in terms of how to not set yourself up for success here, because I've done this and I've definitely seen it happen, is: you want to not talk too much about a thing that you're not doing.
00:28:44
Speaker
Yes. Because I think people have a limited appetite for hearing about a thing. I'll take a React upgrade as an example: you've got to do this big honking migration from 17 to 18.
00:28:57
Speaker
Yeah. You don't want to be talking about it for months while you're not doing it, because we can only pay attention to so many details. By the time you actually get started, you've already burned three or four months of attention, and people are asking, why isn't this done yet? And you're like, well, because we haven't started.
00:29:19
Speaker
Or we haven't been working on it. I think you really want to be tight about it: hey, this is something we need to do, and we're doing some research about it, rather than beating the drum about something you're not actually doing.
00:29:31
Speaker
Right. Yeah. Because it's really easy to turn around and find that you've spent capital you didn't realize you were spending. You know, like the equivalent of the Netflix subscription that you weren't using.
00:29:44
Speaker
Although, I mean, who doesn't use their Netflix subscription? It was just leaking out the door. And by the time you really need people to focus and commit and contribute resources, they're like, wait, isn't that done yet?
00:29:57
Speaker
And that's definitely something to avoid. Yeah, absolutely. Similarly, there was a team at Slack, relatively undersized, who owned the Kubernetes stack. And honestly, at one point it took almost all of their time to keep up with the Kubernetes pace of release on top of AWS. There's a relatively tight window you have to hit.
00:30:25
Speaker
But as the team grew and as they added automation, they got it down to the point where they were really efficient at it. They could just knock it out, no big deal. And they still had to do it every quarter, so it showed up on OKRs.
00:30:35
Speaker
But there was this perception that persisted: that's just the team that upgrades Kubernetes all the time. No, that was less than 5% of their work. But because most of what they had done when they talked to other teams was telling them about the Kubernetes upgrades that were happening and making sure nobody had any breaking changes, that became the theme, the mission of the team as seen by other people. It took a long time to get rid of that perception, which harmed their ability to make other changes as well.
00:31:08
Speaker
Yeah. I mean, I think that plays into what is a common behavior you see in engineers. I've certainly seen it in myself at times: you're optimistic. So when you're planning what you're going to do this sprint or this quarter or for this chunk of time,
00:31:25
Speaker
you say, oh, well, maybe we're going to make progress on this, or if we can squeeze it in, we're going to work on such-and-such upgrade or migration or, you know, insert your technical improvement project there.
00:31:37
Speaker
And that optimism can be self-sabotaging, because you didn't actually have the time to do it. So you're talking about it, but you're not really making progress. A quarter goes by and people don't really want to hear about it anymore.

Platform vs. Product Teams in Migration

00:31:49
Speaker
Absolutely. Isn't the dream, though, to be able to do all those things? Or, said a different way, isn't the point of a platform team to absorb all the work so nobody else has to think about it?
00:32:02
Speaker
Yeah, I mean, that's a pretty common way to set this up, right? You have the scenario where product is competing with these technical things that we need to do, and it's always winning.
00:32:13
Speaker
So maybe they should compete in different leagues, right? We'll have the product team work on this, and we'll have a platform team focus on this other style of change. Perfect. Ship it. That's the advice we should give everyone, right?
00:32:25
Speaker
Right. And I mean, there's definitely something to that, right? These platform teams are able to focus on things in a way that means those things are no longer competing.
00:32:36
Speaker
And if you can get a smaller group of people to focus on a thing, it's going to cost you less in terms of the time it takes them to pay attention to it and fix it, you know? So there's a lot to recommend that approach.
00:32:49
Speaker
Of course, you again run into issues where the amount of work that needs to be done can be quite overwhelming. Even with that kind of concentrated expertise and focus and attention, if you have 150 product engineers
00:33:07
Speaker
growing and building a thing, it's going to be pretty difficult for a platform team of five or ten people to keep up with everything that's being done, assuming they have many in-flight migrations and changes, basically stuff they need to adopt across that whole code base.
00:33:26
Speaker
Is it just a numbers game, though? I remember Stripe put out a report at one point saying engineers spend around 30% of their time dealing with tech debt and maintenance. So is it just a numbers game? Shouldn't you allocate like 30 of your 100 engineers?
00:33:40
Speaker
Yeah, I mean, I don't have the best answer to that. I think you should probably allocate slightly more. But the thing that generates value for your business is building stuff, and so it's hard to say, oh, you should put most of your money and your resources into this thing that doesn't do that.
00:33:58
Speaker
It's a real cost center. Yeah. The other thing you run into is that the platform team also doesn't have the context. There's the rest of the organization building new stuff, and I love the idea that software is fully abstracted and you can create APIs across which nobody has to understand anything. That's just not at all my experience of running software.
00:34:26
Speaker
No, sure. And look, I was on one of these platform-style teams, and a common pattern you will see is that a platform team encounters a migration or other large-scale change that they want to make,
00:34:40
Speaker
and they're basically unable to do it across the whole code base. It could be that they don't have the capability to do it, or it could be that they don't have the context or the confidence to do it. If you're making a change everywhere, how will you know whether you've broken some other piece of the code base? And of course, a typical answer here is: well, the tests will fail.
00:35:01
Speaker
They might. That's a possibility, right? They could fail, if they exist. But do you have that much confidence in the setup you have? If the answer is yes, great, then you're going to be more successful in this kind of maneuver.
00:35:16
Speaker
But a pretty common story is that you encounter this as a platform team and you're like, whew, we just don't have either the capability or the context. So we're going to do the fan-out, where you say, okay, we're going to get every team to do their part.
00:35:29
Speaker
That's super common. And in a lot of cases it doesn't work super well, because first you have to run this org-chart Jenga-Tetris cage match where you're like, okay, I need to find our closest common VP and get them to buy in and then push it back down.
00:35:49
Speaker
But also, when you're talking to your peers on those teams, let's say they agree to do it: you've got to train them up. You've got to explain, here's this thing that you really don't care about, but I've got to get you to care about it for a short period of time to make this change and then validate it, and combine my desire to get this thing done with your expertise in this area.
00:36:09
Speaker
And you can do that sometimes, of course, but it's difficult to run that play every time, just because of the way the incentives align. It's also difficult to communicate that context. The best-case scenario is me on a platform team having a conversation with an engineer on a product team about how we're going to tackle all the changes in their area of the code base.
00:36:29
Speaker
And that's usually not the way these things happen. Usually it's: mandate goes up, mandate comes down. Okay, I guess I've signed up to make these changes at some point in the future. I'll put them off to the last week of the quarter, and then I'll try to figure out what I have to do in order to get this change done.
00:36:44
Speaker
And maybe I'll be successful. Yeah, it's so expensive to put context into another human being's head and have them then decide what to do with that context.
00:36:55
Speaker
I remember talking a lot at Slack about the difference between the management communication plane and the engineering communication plane. I had a foot in each world as a PM on infrastructure. Management happened in OKR sessions and in meetings and in a lot of one-on-one, rich communication building, because you're trying to figure out what alignment you're trying to hit.
00:37:19
Speaker
And then the engineering communication plane is: what's working for all of us? And

Empowering Teams for Successful Migrations

00:37:25
Speaker
there's a lot more of it. It happened a lot more on Slack, and it happened a lot more in tools; people stare at their editors all day.
00:37:31
Speaker
And to the extent that information is in an editor, or in a tool, or is a thing I can run that does something for me, that's much more powerful than a document or a presentation or a meeting.
00:37:45
Speaker
Yeah. So I was on one of these teams, having been on the receiving end of a bunch of these kinds of requests: hey, we need you to migrate to v2 of this thing, or use the next library, or the safer approach here, or, you know, you name it.
00:38:00
Speaker
We need you to adopt types in this gradually typed code base in your chunk of the code. Right. I've been on the receiving end of a lot of those. And the reality is it's pretty hard, because it's rarely the case that you're getting just one incoming request.
00:38:11
Speaker
You don't have context. And fundamentally, in many cases, you don't want to do the work, relative to the other things you could be doing with that time. So as a platform team, one thing that can make you more successful, and one thing we tried when I was running one of these teams, is: can we actually make the change everywhere ourselves?
00:38:30
Speaker
Right. So if there's anything you can do to give yourself that capability, it can really pay off, because you can then go to people and say much more positive things, like: hey, we made this change for you.
00:38:42
Speaker
You don't have to think about it. You just get something that's better. Or, depending on the way ownership works in your environment, you can say: hey, we made this change and it's all done; we would appreciate your code review.
00:38:54
Speaker
Right. That's an easier conversation, although not always easy, compared with, hey, we need you to make this change out of whole cloth, because you're able to do a lot more of it yourself.
00:39:08
Speaker
If you're not able to do that, if you aren't able to quite tool up, or maybe you don't have the confidence in that change, then you're shifting more into the mode you described, which is trying to make it as easy as possible for other people to make the change.
00:39:23
Speaker
Maybe you have tooling that they're going to run and then fix issues or validate in their own way. Maybe you have sample PRs and documents. But you need to go to them with something, because the dynamics of the situation are such that
00:39:37
Speaker
you're usually approaching somebody with work that is challenging, and that the person you're going to probably doesn't want to do. So those are some of the things you can do. Best-case scenario, you give yourself the capability to actually live the dream and do all the work yourself on that platform team.
00:39:54
Speaker
But then there are a couple of steps down from that which are not quite as good, but are better than just saying, hey, my VP told your VP that you have to do this, you know?
00:40:05
Speaker
Right. You don't want to live in a world where the most effective engineers are the ones with the winningest smiles, who can convince the most people to do their work. Yeah. And I can't say that was me, as much as I would like to believe I've got a decent smile.
00:40:21
Speaker
Okay, so we've talked a bunch about this tension of prioritization around migrations. I think the other thing I wanted to touch on is that there is a wild variety in the confidence of teams to do migrations. On one hand,
00:40:37
Speaker
you know, we talked to Lyft about their story of Python migrations, where they would just ship stuff. If no one looked at the PR, it should be fine.
00:40:48
Speaker
And then we've talked to other customers, who I won't name, who are struggling with migrations, where they feel like they've got ten-year-old code that they can't touch, and they're specifically steering around that kind of thing. If you're listening to this and you think it's you: it's not just you. I've had that conversation so many times.
00:41:06
Speaker
Yeah, it's definitely been me, I guess. Absolutely. I think it wasn't me at Slack only because Slack hadn't been around ten years by the time I got there. So let's talk a little bit about what makes teams confident in making these big changes. What have you seen? What do you think are some of the key factors there?
00:41:29
Speaker
Yeah. I mean, there are a couple of things here right off the bat that are maybe obvious to some of our listeners. Tests, if you already have them, can often be a huge asset, right?
00:41:41
Speaker
At the right level, tests will give you a huge amount of confidence that something is still working, or at least hasn't broken catastrophically. It can be tough if you're in a trough where you don't have them, and they're pretty hard to get at exactly the time when you want to do one of these changes. You hate to insert another dependency into a change that's going to take a while, where the first step is: just write tests for everything that doesn't have them. It's hard to do.
00:42:06
Speaker
One thing you were talking about with Lyft, which is quite common, is: can you make it easier to fail back? This is everything from feature flags to gradual rollouts, in the sense that if you have a migration or large-scale change where you have to do something 200, 500, a thousand times,
00:42:28
Speaker
sometimes a slower velocity can be an asset. We did this with a whole bunch of different Slack API endpoints, where we would roll things out and only do several a week. But that meant that by the time we got to the end of the process, we had a lot more confidence that, yep, this is pretty rote, we understand how it's going to be done.
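The endpoint-by-endpoint rollout described here amounts to an allowlist-style feature flag. A minimal sketch in Python, with all names invented for illustration (this is not Slack's actual mechanism):

```python
# Sketch of an allowlist-based gradual rollout: endpoints are migrated a few
# at a time, and anything not yet migrated falls back to the old code path.
# All names here are invented for illustration.

MIGRATED_ENDPOINTS = set()  # grows by a handful of entries per week


def register_migrated(endpoint: str) -> None:
    """Flip one endpoint onto the new implementation."""
    MIGRATED_ENDPOINTS.add(endpoint)


def handle_old(endpoint: str, request: dict) -> str:
    return f"old:{endpoint}"


def handle_new(endpoint: str, request: dict) -> str:
    return f"new:{endpoint}"


def handle(endpoint: str, request: dict) -> str:
    """Route to the new code path only for endpoints on the allowlist."""
    if endpoint in MIGRATED_ENDPOINTS:
        return handle_new(endpoint, request)
    return handle_old(endpoint, request)
```

Rolling one endpoint back is just removing it from the allowlist, which is part of what makes the slower velocity an asset.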
00:42:49
Speaker
And so doing things gradually can be another way to develop confidence: start with something that is less significant in your application, that's not as important. Tell me how you thought about error rates during that time. What's an acceptable quantity of errors to return to users? Yeah.
00:43:07
Speaker
Man, that's a tough question, because the ideal answer is: well, we just don't. And then you really start to get into error classification, because the errors I care about are the ones introduced by my change, not the baseline level of errors caused by, oh, we weren't able to talk to some dependency or some service. The world is messy, and those are harder to hear about. The scariest is when you're like, I just merged this thing,
00:43:36
Speaker
error rates are spiking, it looks like the timing lines up, and then, oh, actually, some AWS service was less available, or this database shard went down.
00:43:49
Speaker
I still get a cold sweat when I think about those moments. You're like, oh no, we need to roll this back now, or we need to flip this off. But the ability to tie your change to errors can give you a huge advantage.
00:44:06
Speaker
One way we did this, when Slack was migrating from PHP to Meta's Hack programming language, which is a gradually typed language, similar in spirit to moving from JavaScript to TypeScript, was that we were able to monitor basically recoverable errors, because Hack has the idea of a soft type annotation.
00:44:30
Speaker
Once it's hardened, it's going to be a fatal error, but in the soft state you're getting warnings. So I think there's a pattern where you say: okay, can we roll out and log or warn in some way, so that we have a preview of what the errors would be like?
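Hack's soft annotations warn instead of failing. A rough Python analogue of that warn-then-harden pattern, with invented names and no claim to match Hack's actual semantics, might look like:

```python
import warnings


def soft_typed(expected_type, hard: bool = False):
    """Check a function's return type. In soft mode, mismatches are surfaced
    as warnings so you can preview error volume before enforcing; in hard
    mode they raise, like a hardened annotation. Names invented for
    illustration; this is a sketch, not Hack's implementation."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not isinstance(result, expected_type):
                msg = (f"{fn.__name__} returned {type(result).__name__}, "
                       f"expected {expected_type.__name__}")
                if hard:
                    raise TypeError(msg)
                # Soft state: record the violation, let the value through.
                warnings.warn(msg)
            return result
        return wrapper
    return decorator
```

The soft period gives you exactly the preview described above: you watch the warning volume, fix the offenders, and only then flip `hard=True`.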
00:44:46
Speaker
Yeah, absolutely. I've always loved the pattern for stateful services, or when you have a microservice ball-of-mud kind of thing: if you're moving a call from one service to another, or from one database table to another, just do them both, send the new one into the void, and see what comes back.
00:45:06
Speaker
Don't ever return that to a user, but compare it at the point of return. Yeah, there are a bunch of different names for that pattern. Traffic diffing. At Twitter, we called it Tap Compare.
00:45:17
Speaker
And I wrote throwaway, bespoke versions of this for a bunch of different, more sensitive changes at Slack. They're super useful when you're reading from two sources.
00:45:28
Speaker
They can be trickier when you're trying to figure out whether a write is going to be successful in a new service. And you're particularly sensitive to change there, because screwing up writes is a lot worse than screwing up reads, most of the time.
00:45:42
Speaker
So you'd write this machinery around: okay, this is what I was going to write to the old thing, this is what I'm going to write to the new thing; let's make sure those look a certain way, or that we're pretty confident.
00:45:54
Speaker
And so that's another way you can build confidence. I think one trap you can get stuck in is just doing everything you can to build your confidence, trying really hard.
00:46:09
Speaker
You should, of course, do something to build your confidence. But I think it's easy to get stuck in this mode of: all right, we've got to think about it more, we've got to think about all the scenarios. We've all been in cases, I certainly have many times, where you think as hard as you can and it still doesn't go right.
00:46:27
Speaker
And that proves that thinking about it for an extended period of time is not, by itself, enough to get it right. So it's totally possible that you could spend... This happened when we were doing a big cache format change for tweets at Twitter.
00:46:44
Speaker
We were really concerned about overwhelming our memcache cluster, which was huge. I think it was 600 or 700 of the biggest machines you could imagine in each of our data centers, just holding this stuff in cache. So we were worried about overwhelming it.
00:46:57
Speaker
And we thought for weeks and weeks about, all right, what will happen in this scenario, what will happen in that scenario. As it turned out, we didn't need to think about it, because we accidentally rolled it out and it just worked.
00:47:10
Speaker
Oh, yeah. I mean, I can imagine worse outcomes. But I've also been involved with cases where we thought about it for weeks and weeks, we were super careful, and then it still didn't work. And I think the antidote to that is you have to be willing to try things and give yourself a way to roll back faster.
00:47:30
Speaker
You know, that's the other thing: you think about it for a certain amount of time, but then you need to be willing to say, all right, we're going to ship this thing, and we have a way to roll back when we detect that things aren't working properly.
00:47:43
Speaker
Absolutely. Every time you reduce the risk of errors by another 50%, it costs about the same amount of time, and that exponential is just less and less worthwhile the further you go.
00:47:57
Speaker
And I think the other thing is that you can accept some error rate, but there are also ways to set up your systems, like soft typing, where the error rate doesn't matter: it rounds to zero and the user never sees a problem. Yes, you're introducing bugs into production, but that's fine, because they're not user-facing bugs. They're just operational headaches.
00:48:16
Speaker
Yeah. And that starts to get at this question of: does the platform team know enough to understand which of the things that are changing are going to affect which user behaviors?
00:48:28
Speaker
Because it could be an endpoint where you've reduced the success rate, but the client is automatically retrying a couple of times anyway, so it doesn't really matter that the success rate has changed.
00:48:39
Speaker
The perceived success rate is still going to be acceptable. Absolutely. And then you can get into this mode where, if you look at the people involved, it's the platform team

User Impact and Communication

00:48:50
Speaker
who wants to make the change, and it would be great if they could just make it. But if you have to gain enough confidence that you know exactly what's going to work, which is a pipe dream, you have to go to a product team or a domain expert. Now you have a conversation, now you have to swap context, and now it's going to be really hard to get to the point where you can say: yeah, we think this is going to be really good. Yeah.
00:49:12
Speaker
It's so much cheaper just to say: let's make sure we're in a mode where we're not going to break the application in some fundamental way, and let's try it. That's fine.
00:49:23
Speaker
And if we need to go bother the domain experts because we can't figure it out after two or three tries, fine. That's good. But more than likely, the platform team will figure it out. And those are the teams that are really motivated to get the migration done.
00:49:37
Speaker
And I use the term platform team here super broadly, because everyone's a platform team at some point. Yeah. I wish I could point to the name and say, oh yeah, this is such-and-such's platform team, but they have a half dozen to a dozen unique names at different companies.
00:49:53
Speaker
Right. We were talking to somebody the other day and they used a different name. It wasn't just efficiency or effectiveness; it was developer delight.
00:50:05
Speaker
And it was like, okay, so you're doing this thing, you're just using a name that nobody else is using, but you're still doing it. It's the same work. It's the same work. I talked to an AppSec team who was doing the same thing, but nominally they are responsible for security and making sure the application is not doing anything dumb.
00:50:26
Speaker
And they were hung up on exactly this problem of needing to make sure that the change they were making was right, because they were making changes to authorization. And I don't disagree with that. I think you need to make sure that changes to authorization are right.
00:50:39
Speaker
But on the other hand, the long pole in the tent had become a bunch of engineers staring at 5,000 changes before they went out. You've got to get context faster. You've got to get that information faster, bring it back in, and make progress forward. Otherwise you're going to be working on this thing for three years.
00:50:56
Speaker
For sure. You know, another failure mode you see on some of these teams is that we on the platform team get so focused on our change that we're not really thinking much about what is happening at the consumer of that change, whether that's a team inside the same company or how the behavior will actually manifest for users.
00:51:18
Speaker
That's one of the most important parts of the context to surface early on.

AI's Future Role in Migrations

00:51:22
Speaker
Because I think we often think about, okay, I need to tell you how to make this change, or what to do, or maybe these are common problems you're going to have to resolve.
00:51:31
Speaker
But the most important thing to make sure you understand and can communicate is: how will the behavior change for me or for users? I can't tell you the number of times I would ask questions like, how will the user perceive this?
00:51:43
Speaker
And it was like I was asking a question that had not been asked before, even though that should really be a central thing you're thinking about whenever you're changing a piece of software. Why is that the last question? It should be the first question. But yeah, we've all been there, right?
00:51:57
Speaker
For sure. Absolutely. Cool. So let's go to our wrap-up. We've learned a bunch of different things. What do these patterns tell us about the future of migrations?
00:52:10
Speaker
Yeah, I mean, it really seems today that the speed of software and software development is not slowing down. Quite the contrary, it's getting much faster.
00:52:21
Speaker
People are having a lot of success using AI to write software, right? We're seeing this at a local level. You and I have both had a lot of success, whether it's with Copilot or Claude Code or insert your favorite tool here.
00:52:35
Speaker
It's easier to write software. But it doesn't appear to be the case that it's easier to maintain software at this moment, right? Absolutely. We're seeing some success. Like we were talking about earlier: certain small enough version bumps, or certain kinds of changes,
00:52:53
Speaker
fine, maybe we can handle those as well. But once the context window becomes the thousand call sites in your application, we're seeing this kind of asymmetry where we're getting more code. Great.
00:53:04
Speaker
Except that from the perspective of platform teams, that code ends up being a liability. Right. Absolutely. And so there's definitely this tension building, where I think more and more teams are going to find themselves in this uncomfortable place of: we've got to do this big change,
00:53:21
Speaker
and it's going to be in some ways harder than it was in the past, simply because you have more code. Yeah, absolutely. And I think you look at the rate of
00:53:32
Speaker
generation of code, which is certainly going up, but also the rate of churn. In a company today, if you turn over one or two percent of your code base, that's really disruptive. And if you've got Cursor or Windsurf or Claude Code and 500 engineers, you could turn over 10 or 15 or 20 percent of your code base and add another 50 percent pretty quickly.
00:53:53
Speaker
And that just destroys a lot of the context building that we do as engineers today. That is where tech debt shows up. That's where you're going to force yourself to do migrations.
00:54:06
Speaker
Yeah. And if you're using one of these tools, you see it in the small now. I was doing a change just the other day where I was like: no, I don't want you to pass all of this state everywhere; I want you to make decisions about where to pass it based on how big that state is. And I want this to be testable; I don't want to have to construct this whole thing to test this little function. It doesn't need it, you know? Once you blow that up to a code base with 100-plus contributors and a million lines of code, the problem can start to feel intractable.
00:54:38
Speaker
Absolutely. And this is where I'm most excited about what we're building at Tern: if you get to the state where you can centralize what you want done to a code base, then you can gradually, intelligently, but automatically push it out.
00:54:56
Speaker
You say: I want to upgrade these libraries; I don't want to use this pattern, I want you to use this one; I want this refactored in this way. You can push those rules out. And today, other people can consume them, they can run them, and they should be software engineering tools.
00:55:11
Speaker
But eventually there's this agentic future where software agents consume those rules as well, and they just write the code right the first time.

Tern's Vision for Automating Migrations

00:55:20
Speaker
One of the hardest things to do is to drop into a large code base, pre-AI, and determine what the right pattern is.
00:55:26
Speaker
Which one of these seven ORMs we're using is the correct one? Real story, not made up. For sure, I believe it. And we talked a lot about whether your pattern, the preferred approach, actually gets adopted depends mostly on luck: which old code did somebody choose to copy-paste, right?
00:55:48
Speaker
And I'm not talking about copy-pasting derisively here. That's just how a lot of software gets written. You look for an example of the thing you want to do and say, I'm going to adapt this.
00:55:59
Speaker
And so if it's the old thing you're starting from, maybe because it's familiar, maybe because there's more of it, then you're very likely to just increase the number of examples of that.
00:56:13
Speaker
And so we talked a lot about how you make it more obvious what the right thing to do is, especially when there are two, three, maybe even seven options for how to do it. Yeah, absolutely. And AI gives us the power to just do it right.
00:56:29
Speaker
You look into a file, and as you're writing and editing it, a common pattern is to flag or lint or notify people: hey, this is the old thing, and if you're in the file, please go change it. But you layer on top of that: let's add batch workflows, and let's just flip the switch on 80% of them, because they aren't going to cause user-facing errors if we get it wrong. We'll leave the last 20%, because it's in the Slack message-send path, until a human comes along and touches it.
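A minimal sketch of that 80/20 split (hypothetical types, paths, and risk rule; not Tern's actual implementation): classify each call site of the old pattern, batch-migrate the ones outside known risky paths, and leave the rest flagged for the next human who edits the file.

```typescript
interface CallSite {
  file: string;
  line: number;
}

// Hypothetical risk rule: anything under the message-send path is
// user-facing enough that we want a human to touch it.
const RISKY_PATHS = ["src/messaging/send"];

function isRisky(site: CallSite): boolean {
  return RISKY_PATHS.some((prefix) => site.file.startsWith(prefix));
}

// Partition call sites: the safe majority gets batch-migrated
// mechanically; the risky remainder only gets a lint flag.
function planMigration(sites: CallSite[]): {
  autoMigrate: CallSite[];
  flagForHuman: CallSite[];
} {
  return {
    autoMigrate: sites.filter((s) => !isRisky(s)),
    flagForHuman: sites.filter((s) => isRisky(s)),
  };
}
```

The point of the split is that the cost of a mistake, not the cost of the edit, decides what gets automated.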
00:57:00
Speaker
But you can come at it from a lot of different angles: getting your teammates to help, getting software agents to help, batching the changes, and just driving the cost of all of that down. That's what really gets me excited about this, and I think it fits with what we're hearing from people. It's not the ability to generate code.
00:57:19
Speaker
It's the ability to share context and really shape large code bases with power tools, not hand tools.

Conclusion: Listener Participation Invitation

00:57:25
Speaker
Yeah, absolutely. I mean, we talked about the fan-out of changes as a potential anti-pattern.
00:57:33
Speaker
But if you look at the reason for that, it's because you're asking somebody to do a thing and not really giving them enough to do it, as opposed to: hey, I'm giving you information about why the old thing is not preferred and what the new thing is.
00:57:51
Speaker
And ideally an easy way to get from old to new, which doesn't exist when it's just: hey, I asked your boss's boss's boss to tell you to do this thing. Good luck.
00:58:02
Speaker
Maybe there's a README somewhere that'll help you. I hope you can find it. Absolutely. Do you want a DM from your SVP, or do you want a linter warning in VS Code? I know the answer to that.
00:58:14
Speaker
I mean, if it's not in my editor, I question whether it really exists. That's where a lot of us are spending a lot of our time. And so if you can bring some of that stuff into the editor, that's just such a huge win.
00:58:28
Speaker
Absolutely. Well, we're just about out of time. If any of this sounds familiar, or painful, we'd love to talk more about it. If you've got a migration story of your own to tell, we'd love to have you on the show, so reach out. And Ryan, thanks for being here and for spending this time today.
00:58:46
Speaker
My pleasure, TR. Thanks for chatting with me.