
What's new in Serverless 2023 with James Eastham

The Basement Programmer Podcast
9 months ago

In this episode, I welcome back James Eastham to talk about recent developments in serverless technology. 

Transcript

Introduction and Disclaimers

00:00:12
Speaker
Hello basement programmers and welcome. This is the basement programmer podcast. I'm your host, Tom Moore. The opinions expressed in the basement programmer podcast are those of myself and any guests that I may have and are not necessarily those of our employers or organizations we may be associated with.

Listener Engagement and Guest Invitations

00:00:29
Speaker
Feedback on the Basement Programmer podcast, including suggestions on things you'd like to hear about, can be emailed to me at tom at basementprogrammer.com. And I'm always on the lookout for people who would like to come on the podcast and talk about anything technology-related. So drop me a line. And now for this episode.

Introducing James Eastham and Catching Up

00:00:49
Speaker
Hello everybody and welcome to the podcast. Let's see, how to introduce my guest for today.
00:00:54
Speaker
He's been a long-time ally of .NET on AWS and that crowd. This is his second appearance on the podcast. He's also the only person I know to run a race powered only by broccoli. It's James Eastham. Possibly the most interesting of the intros I think I've ever had, Tom. Powered by broccoli, wow. I might have to try that at some point. How are you doing?
00:01:20
Speaker
I'm doing well. How have you been doing? Yeah, it's been a long time. It feels like it's been a while since we last caught up. But yeah, things are still going along. The serverless.net world is still there. It's still powering forward. So yeah, it's exciting to be back.
00:01:34
Speaker
It's actually been nine months since we recorded the last podcast episode. Yes. We'll have a lot to talk about. Definitely. I was thinking, once you've been on the podcast five times, I think you get an official Basement Programmer t-shirt. Oh, okay. Can you line me up for the next four episodes? Or I just chunk this one down into five smaller episodes. Does that count? Do I get a t-shirt? Don't get too excited. I have absolutely no artistic talent whatsoever. If you watch my YouTube videos, you can see the Basement Programmer shirt.
00:02:06
Speaker
Anyway, so yeah, it's been nine months since we caught up. What have you been up to?

James' Work on .NET and AWS

00:02:12
Speaker
Mostly still trying to...
00:02:16
Speaker
wax lyrical about the benefits of .NET and Lambda, mostly. And actually I've turned a little bit to the dark side and started doing a lot of Java content, because actually there's a lot of similarities between .NET and Java, particularly when you're talking serverless. And even with the programming models themselves, they're very similar languages. So yeah, just a lot of
00:02:37
Speaker
YouTube content, and I've created a couple of courses. So I've got a course, shameless plug, on solutions architecture and one on building microservices, and they're on the Dometrain platform, which is the new learning platform from Nick Chapsas, who any of the .NET listeners will know very well, I'm sure. So yeah, it's been a pretty exciting last nine months, actually. The London Summit, re:Invent again, all the joys of Las Vegas. I think I've just about recovered
00:03:03
Speaker
from re:Invent and Las Vegas. So yeah, it's been a pretty wild year, actually, when I look back retrospectively. Did you manage to avoid COVID? I did. I actually came back completely well. I was really jet-lagged, though. Normally I have a couple of days of jet lag; it was like a week this time where I was just up at 3am, 4am, because coming back from the US to Europe is much worse than going out.
00:03:23
Speaker
But yeah, I avoided COVID, I avoided illness of almost any kind. I still do not know how, because many people I know came back from Vegas with COVID. So yeah, for some miraculous reason I've managed to get away with it somehow. Nice. I did too. But I wasn't actually in the heart of re:Invent, so that probably accounts for some of my success in avoiding it.

AWS Announcements: Focus on Step Functions

00:03:53
Speaker
So serverless, we both love it. In fact, you've done a lot of work around the documentation and enabling people in the area of serverless, especially when it comes to .NET and, you know, recently Java. I'll forgive you for that one. In fact, I was actually a Java programmer in a former lifetime. Obviously, things don't stay the same for nine months. So what's new and exciting?
00:04:19
Speaker
Ah, new and exciting. You know, nine months in AWS land is about 10 years at the pace the rest of the world seems to walk. So there's been a lot of stuff that's changed in the last nine months. And in preparation for this chat, I've tried to
00:04:38
Speaker
put together what I think are some of the more interesting announcements that have come both out of re:Invent and from around the re:Invent period, whether that's just before or just after, because there's been bits on both sides. And I thought we could probably work through it on a service-by-service basis. So obviously Lambda is a big component of serverless. I talk about Lambda a lot with .NET and Java, but that is not all there is to serverless. There's a whole bunch of other things. So I thought we could go through service by service, talk about some of the new announcements, and yeah, go from there.
00:05:08
Speaker
And I think the coolest place to start is actually with my favorite service, which is Step Functions. I just love it. It's just amazing. It's just such a good service. It gives you so much. It takes away so much. And fundamentally, when I think about serverless, I think about reducing operational overhead. And think about all the things that Step Functions can do: it's a workflow engine, it can manage retries, it can manage exponential back-off, parallel execution, massive scale with distributed map.
00:05:36
Speaker
It's a brilliant service. And there's a whole bunch of things that came out of re:Invent for Step Functions, one of my favorites being the restart-from-failed-state feature. And what that means is, well, prior to this announcement, if you had a Step Function, a workflow that had 10 steps, and if you're not familiar with Step Functions, anyone listening,
00:06:01
Speaker
picture it as just a workflow engine. You have a collection of steps, and it passes from step to step to step, hence the name Step Functions. And if you had a failure at any point in your workflow, you could configure the step to retry a number of times. But if it retried so many times that it still couldn't work, the whole execution would just fail. And if you wanted to rerun that same workflow again, you would have to start it from the start of the workflow. So you'd pass in the exact same input,
00:06:31
Speaker
and the whole workflow would have to run again, which in some cases might be fine. But in some cases, imagine an order processing workflow where you're taking payments from people. You probably wouldn't want to run that again if it failed after the point at which you took the payment. So this is a really cool feature, because it allows you to restart your workflow execution from the point at which it failed. So you've got a 10-step workflow, it fails at step 8, you restart it from step 8, and steps 8, 9, and 10 will run
00:06:57
Speaker
independently of the first seven steps. Have I done the maths right there? Yeah, I think I have. So yeah, really, really, really cool feature for step functions. That sounds cool, but feel free to do multiple charges and put them in my bank account. Yeah, the opposite way around. Oh no, my step function failed. I don't think customers would appreciate that feedback so much. Probably not, no.
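For anyone who wants to see what the retry behaviour on a single step looks like, here's a hedged sketch of an Amazon States Language task (the function and state names are hypothetical). When the retries are exhausted and the execution fails, the redrive capability described above lets you restart from the failed state instead of re-running the earlier steps.

```python
import json

# A sketch of an ASL task with retry behaviour. If every retry is
# exhausted, the whole execution fails; since this announcement a
# failed execution can be redriven from the failed state rather than
# rerun from step 1.
take_payment_state = {
    "TakePayment": {
        "Type": "Task",
        "Resource": "arn:aws:states:::lambda:invoke",
        "Parameters": {
            "FunctionName": "take-payment",  # hypothetical function name
            "Payload.$": "$",
        },
        "Retry": [
            {
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,  # exponential back-off between attempts
            }
        ],
        "Next": "ShipOrder",  # hypothetical next state
    }
}

print(json.dumps(take_payment_state, indent=2))
```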
00:07:21
Speaker
And then there's a couple of other things with step functions. So there was a couple of new announcements around the different things you can call from

Step Functions Enhancements

00:07:29
Speaker
Step Functions. So historically, like way, way back, more than two years ago now, I think there was a finite set of things you could do within a step of a Step Function. So: invoke a Lambda function,
00:07:41
Speaker
write a record to DynamoDB, publish an event to EventBridge. And then I think it was two years ago now that they released the SDK integrations, which opened up almost every single AWS SDK call directly from Step Functions. So now you've got two kinds of integration. You've got what are called optimized integrations, which are the first set, and then you've got the SDK integrations, which is like everything else.
00:08:04
Speaker
And that was really powerful, because it gives you almost every single AWS API that you can call directly from a workflow. It got rid of a whole bunch of Lambda functions completely. So at re:Invent, there were the Bedrock integrations. For anyone who's not familiar with Amazon Bedrock, this is the AWS offering in the ever-growing generative AI space that gives you access, via a single API, to multiple different foundational models, so multiple different GenAI models.
00:08:32
Speaker
It's basically serverless... Sorry, go on. As I say, it's basically serverless GenAI. Absolutely, yes. Pay per use, you make a standard API call that has different parameters you pass into it based on the model that you're calling, and you get a standard-ish response back. So yeah, absolutely. Serverless GenAI, I like that.
00:08:55
Speaker
But the two integrations that Step Functions released are in the optimized integrations camp. So there's one for invoking a model, so actually making a generative AI call and getting a response back, and then one for creating a model customization job. And what that means is that these foundational models are, as described, foundational. You can fine-tune these models to meet a specific use case that you might have, maybe with some custom training data, for example.
00:09:23
Speaker
So these are now optimized integrations with Step Functions, and it just makes building simpler. If you're using the SDK integrations, you need to know the actual AWS SDK. You need to know the parameters for the AWS SDK. You need to know the format of the call. Optimized integrations just simplify a lot of that.
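As a hedged sketch of what the optimized Bedrock integration looks like in a state definition: the resource ARN follows the optimized-integration convention, and the model ID and body shape are illustrative, so check the Step Functions documentation before relying on them.

```python
import json

# A sketch of a Step Functions state using the optimized Bedrock
# integration to invoke a model. Model ID and request body are
# examples only; each foundational model expects its own body shape.
invoke_model_state = {
    "SummariseOrder": {
        "Type": "Task",
        "Resource": "arn:aws:states:::bedrock:invokeModel",
        "Parameters": {
            "ModelId": "anthropic.claude-v2",  # example model ID
            "Body": {
                "prompt": "\n\nHuman: Summarise this order.\n\nAssistant:",
                "max_tokens_to_sample": 200,
            },
        },
        "End": True,
    }
}

print(json.dumps(invoke_model_state, indent=2))
```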
00:09:40
Speaker
So that was one of the announcements. And there's also the HTTPS endpoint support, which now means that directly from your Step Function, you can call any HTTPS endpoint that exists.
00:09:55
Speaker
And obviously that opens up a whole load of different use cases: calling GitHub, calling Stripe for handling payments. There's a whole bunch of different use cases that fits, things you can now do directly from your workflow, as opposed to needing to have your workflow invoke a Lambda function where that Lambda function simply makes an API call and then does something else. So yeah, two really cool announcements there, from an integration perspective and a service capability perspective.
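A hedged sketch of what the HTTPS endpoint task might look like: Stripe is used purely as an illustration, the connection ARN is hypothetical, and authentication goes through an EventBridge connection, so verify the exact parameter names against the docs.

```python
import json

# A sketch of the new HTTPS endpoint task type: calling a third-party
# API straight from a workflow instead of via a glue Lambda function.
call_stripe_state = {
    "CreateCharge": {
        "Type": "Task",
        "Resource": "arn:aws:states:::http:invoke",
        "Parameters": {
            "ApiEndpoint": "https://api.stripe.com/v1/charges",
            "Method": "POST",
            "Authentication": {
                # Hypothetical EventBridge connection holding the API key
                "ConnectionArn": "arn:aws:events:eu-west-1:123456789012:connection/stripe/abc"
            },
        },
        "End": True,
    }
}

print(json.dumps(call_stripe_state, indent=2))
```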
00:10:21
Speaker
That's really cool. I can remember when Step Functions first came out and it was like, you can do this or you can do this and that's it. But now it's a really, really powerful toolset. Yeah, there's just so many things you can do with it. You can build.
00:10:36
Speaker
entire APIs. If you've got a simple CRUD API that's storing data in DynamoDB, you can build all of that without a single line of code, as in a line of your code running in production. There's nothing in there: your API Gateway goes direct to Step Functions, direct to Dynamo. There's just so much you can do now. And when you think about, as I said at the start, serverless being
00:11:00
Speaker
a reduction in operational overhead. Serverless is a spectrum, isn't it? It's not that you are serverless or you aren't; you are more or less serverless. And the way I think about it is: how much operational responsibility can you reduce? And with something like that API I've just described, AWS takes on almost all of your operational responsibility, if not all of it, because you're not managing code, you're not operating code, you're not deploying changes to your code. Step Functions does all of this for you. So it's a super powerful service.
00:11:31
Speaker
And then the last one, at least on the step functions side of the world, the last feature I wanted to call out, because this is another really cool one, is task state testing. And what that means is,
00:11:45
Speaker
you can invoke an individual step of a workflow, passing in some custom input to that single step, and you will get some output back. So it allows you to test an individual step of your workflow. Previously, you would have had to invoke your entire workflow. There was a customer I worked for where we had a workflow that deployed a Kubernetes cluster. Yeah, I'm not deploying Kubernetes with serverless. It was kind of wild. And it was like a 25-step workflow.
00:12:11
Speaker
And if you wanted to add a 26th step and you wanted to test the 26th step, you would have to run all 25 steps prior to get to the 26th step to then test it. And then it fails. And you're like, ah, it took 15 minutes to deploy that cluster. I've got to go again. Exactly. So this is just really cool. You're building and developing workflows. You can invoke a single step. OK, that's cool. That works. And it allows you to iterate a lot quicker.
00:12:36
Speaker
to test different scenarios, different cases of how the input to that single step might work.
00:12:44
Speaker
That sounds really cool. It also helps with things like authentication. One of the common issues for anyone who's been working with AWS for any amount of time is IAM. IAM can be complex. It can make things fail in weird and wonderful ways. This would also test IAM. If you've got a step of your workflow that is calling a Lambda function and you haven't set IAM up, you could run this test against that state and it would surface the IAM authentication failure, and you don't need to go through this whole process of
00:13:09
Speaker
waiting for the entire workflow to run again. It's all about speeding up the development feedback loop that is so useful, as we all know as developers. Nice. Well, I just run everything as admin, so I don't have a problem. AdministratorAccess, which is just a star. No, I don't. Please don't do that. Yes. Yeah, fine-grained access control, please, everybody.
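A sketch of what exercising a single state via the TestState API might look like: the state definition, role ARN, and table name below are all hypothetical, and the actual AWS call is left commented out so the snippet runs without credentials.

```python
import json

# Hypothetical single-state definition: write an order to DynamoDB.
state_definition = {
    "Type": "Task",
    "Resource": "arn:aws:states:::dynamodb:putItem",
    "Parameters": {
        "TableName": "orders",  # hypothetical table name
        "Item": {"pk": {"S.$": "$.orderId"}},
    },
    "End": True,
}

# Parameters one might pass to the Step Functions TestState action.
test_state_request = {
    "definition": json.dumps(state_definition),
    "roleArn": "arn:aws:iam::123456789012:role/workflow-role",  # hypothetical
    "input": json.dumps({"orderId": "1234"}),
}

# import boto3
# sfn = boto3.client("stepfunctions")
# result = sfn.test_state(**test_state_request)
# A broken IAM setup surfaces here in seconds, without running the
# other 25 steps of the workflow.
print(test_state_request["roleArn"])
```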
00:13:35
Speaker
So yeah, that kind of rounds out my favorite AWS service, Step Functions. And obviously there's a whole bunch more than what I'm going to talk about, even in the serverless world. These are just the picks of the bunch, if you will, of the things that I've seen. James's top 10, or top whatever.
00:13:55
Speaker
So, let's move on to Lambda, the OG of the serverless world.

Lambda Improvements

00:14:03
Speaker
Actually, I don't know about that; SQS and S3 are probably the OGs of the serverless world, but the serverless compute world, we'll call it, so we'll go to Lambda. Because there's a whole bunch of new things that came to Lambda, and actually one of the
00:14:18
Speaker
One of the coolest, I think, from a technology perspective, and this may not be applicable to a large number of customers, is the scaling behavior of Lambda. So Lambda now scales 12 times faster. So what that means is prior to this announcement,
00:14:38
Speaker
Lambda functions could scale by between 500 and 3,000 concurrent executions, depending on the region, in the first minute. So you get a whole bunch of load into a Lambda function, and in the first minute, because I'm speaking backwards now, you could only scale up between 500 and 3,000 concurrent executions. Anything more than that would then start to be throttled.
00:15:04
Speaker
And then after that, after that first minute, you then get an additional 500 execution environments per minute until your total account concurrency is reached. So to break some of that down a little bit, just in case anyone listening isn't familiar with Lambda.
00:15:19
Speaker
Concurrent executions: what that means is that every time a request comes into Lambda, each of those requests is handled in a completely unique execution environment. It's completely isolated. So one request comes in, you get one execution environment. Two requests come in at the exact same time, you're likely to get two execution environments. So between 500 and 3,000 concurrent executions means that you've got that many concurrent execution environments, so you can process that many concurrent requests at the same time.
00:15:48
Speaker
This, of course, differs from the more traditional computing model, where you've got one server and all of your 3,000 requests are handled by the same execution environment, which is your server. Very different model. So that was prior to the update; that's how things used to be. Now, as of re:Invent, Lambda allows you to add 1,000 concurrent execution environments every 10 seconds until the point you reach your account concurrency limit.
00:16:18
Speaker
So that means you can add an additional 1,000 environments every 10 seconds, where previously it was between 500 and 3,000 every minute. Think about that. And when I say this is like a technology thing, just think about the engineering behind that. 1,000 concurrent executions every 10 seconds. It's crazy. So yeah, what that means is that if you get a burst of requests into your function, say you get 999 requests into your Lambda function
00:16:49
Speaker
in a 10-second period, Lambda will scale up to handle that. In the next 10-second period, you keep that same capacity, and you can add 1,000 additional execution environments every 10 seconds, I should say. So in your first 10 seconds, you get 999 requests: happy days.
00:17:06
Speaker
In the second 10 seconds you get another 999: happy days. In your third 10 seconds you receive 1,500 requests, and 500 of them will of course be throttled, because you can't add more than a thousand every 10 seconds. So yeah, and when I said this might not be applicable to all people listening, that's because it's obviously a scale thing; it matters if you're operating at scale with a high request throughput. But it's just a really cool thing to think about.
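A back-of-the-envelope sketch of what the new rate means. This is a toy calculation, not an official formula; the 1,000-per-10-seconds figure comes from the announcement discussed above, and real scaling is also capped by your account concurrency limit.

```python
SCALE_PER_WINDOW = 1_000   # new execution environments per 10-second window
WINDOW_SECONDS = 10

def windows_to_scale(target_concurrency: int) -> int:
    """10-second windows needed to reach a target concurrency from cold."""
    return -(-target_concurrency // SCALE_PER_WINDOW)  # ceiling division

# Reaching 12,000 concurrent executions now takes about two minutes.
# Under the old model (an initial burst of 500-3,000 depending on
# region, then +500 per minute) the same climb could take well over
# 15 minutes.
print(windows_to_scale(12_000) * WINDOW_SECONDS, "seconds")
```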
00:17:32
Speaker
Sometimes I sit at night and I'm just like, the engineering behind that is just crazy. It's just insane.
00:17:43
Speaker
So, other Lambda things, sticking with Lambda. Staying on the scaling theme, Lambda will now scale up five times faster when you're using SQS as an event source. The way the Lambda and SQS integration works is that there's a set of pollers.
00:18:04
Speaker
Lambda is polling the SQS queue on your behalf. If you were to write code to interact with SQS without Lambda, you would make a ReceiveMessage request, you would have some kind of for-each loop, and you would delete the messages from the queue depending on whether they failed. What Lambda is doing for you is taking that away, so you just get the set of records into your Lambda function.
00:18:26
Speaker
And that works using a fleet of pollers. So there's a fleet of pollers polling SQS on your behalf. And the number of pollers will be roughly equivalent, and I can't remember exactly what the math is, but roughly equivalent to the number of messages you've got in the queue. So if you've got a sudden influx of 1,000 messages into your SQS queue, the speed at which the pollers can scale up will directly impact how quickly your functions start to receive those messages off the queue.
00:18:52
Speaker
So these pollers now scale up five times faster. So if you get a sudden burst of load into your queues, you roughly will be able to work through that backlog five times faster because your pollers are going to scale up five times faster. Nice, nice little boost in integration there.
00:19:10
Speaker
Yeah, absolutely. And the other really cool thing about all this is that all these things are free. It's one of the beauties of serverless. This scale-up just happens. You can process your queue faster. There's no additional cost to you as a customer, and there's no additional engineering effort. It just happens, and suddenly everything processes faster. Quick sidebar, but there was a really cool
00:19:33
Speaker
story about Luc van Donkersgoed, who's an AWS Serverless Hero. He won Werner's Now Go Build Award this year at re:Invent. He works for PostNL, the Dutch postal service. And he did a lot of things with EventBridge. And earlier this year, EventBridge released
00:19:48
Speaker
something new in the back end that massively decreased the end-to-end latency of EventBridge. And I distinctly remember him putting a screenshot of a graph from Honeycomb, one of the observability tools, on Twitter. And the graph goes like this, and then, well, I'm doing this on a podcast, so it's useless me using my hands, isn't it?
00:20:07
Speaker
The graph's really high, and then all of a sudden it drops off a cliff, and suddenly the processing time is through the floor. So all these things just happen for free. It's one of the things I just love about serverless: you just get these sudden free upgrades to how fast things work or how fast things scale. It's just wonderful. Oh, it's great when you get something and you don't have to actually do the work for it. Yeah. There's literally no effort, and suddenly things are faster. Amazing.
00:20:35
Speaker
And then the last Lambda-based thing I wanted to touch on quickly was the logging control, so the new logging controls that were released. Historically with Lambda, if you were to write a log line from your code... you know, we've all been there, trying to work out an issue. So you've got Console.WriteLine one, Console.WriteLine two, Console.WriteLine three. Ah, I got one and two in my log and I didn't get three; that's where the issue is. Please don't do that in production.
00:21:05
Speaker
Don't rely on that as your observability strategy, but we've all been there. You know, you write console write-lines, you print log statements. Everybody really should be using structured logging. If you're really operating at scale, building something in production, structured logging is the way to go. And what I mean by structured logging is that all of your logs across your entire system have the same structure. That structure then makes them queryable, which means you can actually say, give me all logs that have got x, y, z in them, and actually query your log messages.
00:21:31
Speaker
And if you were doing that previously, you would have to use some kind of structured logging framework in your application code. Most languages have got some library: in .NET you've got Serilog, in Java you've got Log4j, and things like that. The logging controls for Lambda now allow you to turn this on at the Lambda level. So you can turn on native JSON logging in Lambda. And what that means is that if you're writing log messages in your code, and you're just using Console.WriteLine, print, whatever,
00:22:01
Speaker
console.log for you JavaScript enthusiasts out there, you will get JSON-formatted logs out of Lambda, because Lambda will do that transformation, if you will, into something that's structured. So it's a really nice way to start structuring your logs without needing to adopt a
00:22:20
Speaker
potentially quite heavyweight framework that's then going to start to affect the performance of your Lambda function. That also includes native support for log levels. So you can say, at an individual function level: I only want to see error logs, or info logs, warning logs, debug logs, trace logs, et cetera, et cetera. You can set that now at a function level, so that what hits CloudWatch is filtered before it hits CloudWatch, which is going to save you money in a lot of cases.
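As a rough illustration of why structured logs are queryable and why level filtering saves money, here's a minimal Python sketch of the idea. Lambda's native JSON log format has more fields than this; the formatter below only imitates the concept, not the exact shape.

```python
import json
import logging

# A toy structured-logging setup: every log line is a JSON object with
# the same fields, so it can be queried, and a level threshold drops
# noisy lines before they are ever stored.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
        })

logger = logging.getLogger("orders")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.WARNING)  # like setting the function's log level

logger.debug("noisy detail")      # filtered out, never stored
logger.error("payment failed")    # emitted as queryable JSON
```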
00:22:50
Speaker
And those are the three Lambda hits I had. I gotta say, I love the Lambda scaling. I saw that and I was like, wow, that's really impressive. Yeah.
00:23:03
Speaker
Yeah, it's just crazy. And then, of course, with Lambda, you've got a whole bunch of new Lambda runtimes that arrived. So you've got Java 21, Node 20, or whatever the latest version of Node is, and there was a new Python version. And then, of course, there's .NET 8 support coming in
00:23:21
Speaker
January. So it will probably be January by about the time this podcast gets published. Maybe. Maybe it will be. Maybe it won't. I can neither confirm nor deny, Mr. Moore. It is coming, though.
00:23:36
Speaker
But you can do native AOT with .NET 8. Absolutely. Something you and I have both worked on a fair bit, and it's really exciting. Yeah. I mean, if you really wanted to get ahead of the curve, you could even run .NET 8 in a custom runtime. There are options to run .NET 8 today, of course. It's just the managed runtime support for .NET 8 that's coming in January.
00:24:00
Speaker
So that's Lambda. And then the last service I wanted to touch on, at least, was EventBridge, because there were a couple of things that came from EventBridge that are really cool. The first is wildcard event filtering for EventBridge.

EventBridge Updates

00:24:21
Speaker
So when you work with EventBridge, again, for anyone who's unfamiliar: most people think of EventBridge as an event bus, but the event bus is just one of the features of EventBridge. You publish events onto the event bus, and then you define rules on the consumer side to say which events you want to pull off the bus, and then you can send those events to different targets. And when you define these rules, there's a certain pattern syntax you can use. So you could do things like
00:24:51
Speaker
matching on a prefix. Does a property in the event start with Tom? Or does it start with James, for example? And let's imagine a scenario where you have folders in an S3 bucket: you have a folder for Tom and you have a folder for James, who are two customers, and in subfolders inside those two top-level folders, you have the order data. So you've got /Tom/orders and /James/orders.
00:25:20
Speaker
And you want to do some work whenever Tom or James uploads an order. Previously, you would have had to add two separate rules on your event bus: one saying, I want all the S3 events for /Tom/orders, and one for /James/orders. What wildcard event filtering now allows you to do, as described, is use wildcards. So you could say, I want to define a rule that is */orders on this S3 bucket.
00:25:46
Speaker
And you would then get the events both for Tom and for James. And then, obviously, in your application code, you might then filter for Tom and James as part of a Step Function, a Lambda function, or whatever. And I actually missed this completely. I only noticed it when I was looking through some of the recent announcements earlier today, and I was like, oh, that's cool. I didn't know that happened.
00:26:05
Speaker
So this has been a really useful exercise for me, preparing for this podcast, because I've covered things I missed. And it's obviously really useful because one of the service limits within EventBridge is the number of rules you can define on a given bus. So anything that minimizes the number of rules you have to define stops you hitting the limits of the actual service as quickly. And I think every programmer in the world has written a lot of rules like that, where it's 90% the same code and just one little bit changes.
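A sketch of the idea: the rule pattern below uses EventBridge's wildcard operator (the exact syntax is worth double-checking against the docs), and the little matcher underneath just imitates the behaviour locally with shell-style globbing, purely for illustration.

```python
from fnmatch import fnmatchcase

# One rule with a wildcard replaces the two per-customer rules
# described above (/Tom/orders and /James/orders).
rule_pattern = {
    "source": ["aws.s3"],
    "detail": {"object": {"key": [{"wildcard": "*/orders/*"}]}},
}

def key_matches(key: str) -> bool:
    """Local stand-in for the wildcard match on the S3 object key."""
    return fnmatchcase(key, "*/orders/*")

print(key_matches("Tom/orders/2024-01.json"))    # matches
print(key_matches("James/orders/2024-01.json"))  # matches
print(key_matches("James/invoices/x.json"))      # does not match
```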
00:26:35
Speaker
Yeah, absolutely. A good time saving. Definitely. And then the other EventBridge-based one was around EventBridge Pipes. So like I said, when you think about EventBridge, most people think of event buses. EventBridge event buses were kind of the original feature of EventBridge, but EventBridge has got much more to it than that.
00:26:58
Speaker
There's the Scheduler, which allows you to schedule things to happen across time zones. You've got EventBridge Pipes, which is quickly becoming my favorite AWS service; it's quickly catching Step Functions up. And that's what I want to talk about: EventBridge Pipes. So one of my sessions at re:Invent actually was on EventBridge Pipes, a workshop actually, which will be going public at some point. Maybe I'll be able to give you a link for the show notes before this goes live, depending on when this goes live, and we can put a link for this EventBridge Pipes workshop in there.
00:27:25
Speaker
And what Pipes allows you to do is create point-to-point integrations. So when you think about enterprise integration, if we throw back to Gregor Hohpe's seminal book on the topic, you've got publish-subscribe, which is about buses: you've got a producer and you've typically got many consumers. And then you've got point-to-point, which is, I want to move a message from here to here, point A to point B.
00:27:51
Speaker
And that's what EventBridge Pipes is trying to solve for, the same way EventBridge buses did for publish-subscribe. So Pipes allows you to create these point-to-point integrations in a completely serverless way. So my favourite, favourite, favourite example
00:28:10
Speaker
for this use case is SQS to Step Functions. If you wanted to invoke a Step Function from a message on an SQS queue before Pipes, you'd need a Lambda function to do that. And all that Lambda function is going to do is take data from one place and invoke your Step Function execution. It's code you don't need. It's just glue code that doesn't really do a lot.
00:28:32
Speaker
So Pipes takes a lot of that away. You can set a pipe up with an SQS queue as a source and a Step Function as a target, and Pipes will do all of that for you. It'll take the message off the queue, send it to the Step Function, and execute your Step Function. And along the way, you can do filtering. You can apply filters using the same syntax you use to define EventBridge rules, so you can filter your messages. And then you can also enrich your messages. So as part of this flow,
00:29:01
Speaker
you could make a call out to an HTTP API or a Lambda function. So the example I always like to think of is: you've got a customer ID field in your message data, and you want to get more contextual information about the customer before you pass it on to the final consumer, the final target. You can enrich it: reach out, grab your customer information, and then pass it on. So this is a really cool
00:29:23
Speaker
way of building these point-to-point integrations. It also respects ordering. So if you've got SQS FIFO, first-in first-out, so an ordered source and an ordered target, Pipes will respect the ordering. And also, you don't pay for Pipes until an event gets past the filter. So if you have a hundred messages come off your SQS queue and 99 of them get filtered out by the filter that you define, you only pay for one message, and the rest will just be dropped. They won't go back on the queue; they will just disappear.
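A sketch of that setup, with a local simulation of the filtering economics described here. The ARNs and filter shape are illustrative, not copied from a real pipe, so treat the configuration as a rough outline rather than a working definition.

```python
import json

# A rough outline of a pipe: SQS source -> filter -> Step Functions
# target. The filter uses EventBridge rule-style pattern syntax.
pipe_config = {
    "Source": "arn:aws:sqs:eu-west-1:123456789012:orders",  # hypothetical
    "Target": "arn:aws:states:eu-west-1:123456789012:stateMachine:process-order",
    "SourceParameters": {
        "FilterCriteria": {
            "Filters": [
                {"Pattern": json.dumps({"body": {"type": ["order.created"]}})}
            ]
        }
    },
}

# Local simulation of the billing point: 100 messages arrive, 99 are
# filtered out and simply dropped; only the one that passes is billed.
messages = [{"type": "order.created"}] + [{"type": "heartbeat"}] * 99
passed = [m for m in messages if m["type"] == "order.created"]
dropped = len(messages) - len(passed)

print(f"{len(passed)} billed, {dropped} dropped for good")
```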
00:29:51
Speaker
So it's really, it's such a powerful service. I'm kind of using this now as a bit of a plug for Pipes, because it's such a powerful service. The customers I know who are using it love it, but I think a lot of people I speak to just don't know it's there. They just think of EventBridge as the event bus; they don't look at these other features.
00:30:09
Speaker
I certainly have never heard of it. Yeah, absolutely. So coming back to the actual announcement that I wanted to talk about, though: logging support for Pipes. So one of the challenges with Pipes previously is that it was a little bit of a black box, in that if your message failed to get to the target for whatever reason, you got a metric saying there was a failure.
00:30:33
Speaker
But there was no way to actually work out what happened. And if you were using an ordered source like SQS FIFO, that message that failed would actually block up the pipe; the analogy actually works here, it would be like throwing a rock into the pipe that blocks it up. You can configure dead letter queues, so you could route that off to a dead letter queue and then everything will just carry on. But there was no way to actually know
00:30:55
Speaker
what caused the issue, until re:Invent, where logging support was announced. So you can now log messages out to CloudWatch, I think S3 as well, and I think even Kinesis; I'll have to go and check the docs. And then you can actually get this list of
00:31:11
Speaker
log messages and say, okay, now I actually know what is happening in my pipe, or what is not happening, as is probably the case when something is going wrong. So then you've got that relationship where, okay, you've got messages coming through your pipe, there's a problem, you route it off to your dead letter queue. You could have redrives in there: you redrive three times, okay, it's failed, we send it off to the dead letter queue. Go and look at your logs, look at your log messages, and you've got the details there on why that actually failed. So it's not a particularly,
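The logging support is configured per pipe. A hedged CLI sketch; the pipe name and ARNs are hypothetical, and the exact destination and field names are worth checking against the Pipes documentation:

```shell
# Send ERROR-level records, including event payloads, to a CloudWatch log group
aws pipes update-pipe \
  --name orders-pipe \
  --role-arn arn:aws:iam::123456789012:role/orders-pipe-role \
  --log-configuration '{
    "CloudwatchLogsLogDestination": {
      "LogGroupArn": "arn:aws:logs:eu-west-1:123456789012:log-group:/aws/pipes/orders-pipe"
    },
    "Level": "ERROR",
    "IncludeExecutionData": ["ALL"]
  }'
```

`IncludeExecutionData` is what makes the failed event's payload appear in the log record, which is the "why did this actually fail" detail discussed here.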
00:31:40
Speaker
It's not as exciting, maybe, as 1,000 concurrent execution environments every 10 seconds. What is that? I want to rephrase that again for everybody listening. But it's a really, really useful feature once you actually get to the day-two stuff, where you've got an application deployed and you need to work out why the heck it's broken, because everything fails all the time, as we all know.
00:32:00
Speaker
Well, I think if you're the person that is trying to figure out what went wrong, you probably find that very exciting. Absolutely right. Yeah. Logging. Very true. Very, very true. I actually once worked on some code.
00:32:15
Speaker
If you remember Palm Pilots, when they were a thing, I worked on some code that couldn't log anything while running in the background. The only observability I had was the Palm Pilot making little ticks. It'd be like, tick, tick, tick, tick, tick.
00:32:36
Speaker
So the way you kept track, like one, two, three, was the number of ticks that you had? Exactly, yes. I think sometimes... I've been working in tech for 10 years now, and I don't think I've got a deep enough appreciation for
00:32:55
Speaker
how things were 40 years ago, for example. Since I've started working, you've had log files and Windows and nice OSs and all this stuff. I think I need to go back at some point and try and write some COBOL code or something, and try and really work out why serverless is so amazing when you've got all this other stuff that you don't have to think about. Maybe that's an exercise I need to do at some point.
00:33:19
Speaker
I actually remember my first Windows program: I had to write the event pump for Windows. Very cool.
00:33:34
Speaker
Okay, you said it, not me. On the subject of observability actually, one of the last announcements that I had that kind of is serverless but kind of isn't is the CloudWatch alarms. If you've configured an alarm in CloudWatch, you can now target a Lambda function directly.
00:33:53
Speaker
So one of the things I've built previously, in another pre-AWS life: I worked for a consulting company. We had 10, 15 customers, all with vaguely similar systems. So we wanted some kind of centralized observability. So all these different systems, deployed in all these different customer data centers, all reported
00:34:13
Speaker
data back to a centralized mothership, if you will. And we had alarms going off: OK, customer X has got a problem with process Y. And to actually trigger some custom functionality, in our case sending a message to a Slack channel, you needed to go CloudWatch to SNS to Lambda; the only target for CloudWatch alarms was SNS. Now you can invoke Lambda directly from a CloudWatch alarm switching state from
00:34:39
Speaker
"I'm happy" to "I'm unhappy". Again, it's one of those really useful things. I don't know if you have found this, Tom, but I always really like things where I get to delete stuff.
00:34:51
Speaker
Whether it's changing some code (oh, I can delete that entire method, I can delete that whole class), or deleting that entire component of my CDK code because I don't need that SNS topic anymore. Anything like that, I just find I get really satisfied by getting rid of stuff. It's just fun. One less thing that can go wrong. Absolutely. 100%. I mean, my code's perfect every time, but not everybody's is. It's never the code. It's never my code. What? No.
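The alarm-to-Lambda wiring described above becomes a single alarm action, with no SNS topic in between. A sketch; all names and ARNs are hypothetical:

```shell
# Invoke a Lambda function directly when the alarm fires
aws cloudwatch put-metric-alarm \
  --alarm-name order-processor-errors \
  --namespace AWS/Lambda \
  --metric-name Errors \
  --dimensions Name=FunctionName,Value=process-order \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:lambda:eu-west-1:123456789012:function:notify-slack
```

The target function also needs a resource-based permission allowing CloudWatch alarms to invoke it (an `aws lambda add-permission` call with the `lambda.alarms.cloudwatch.amazonaws.com` principal, per the CloudWatch alarms documentation).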
00:35:22
Speaker
Now, anybody that knows me and has seen my code knows that's absolutely not the case. Yes, likewise. It seems like we're cut from the same cloth, Tom.
00:35:34
Speaker
And yes, in terms of the list of features or announcements that I pulled together, they were the high-level ones. I realise I'm probably going to get a whole bunch of product managers now harassing me because I didn't talk about X or Y, but they were the ones, off the big list of re:Invent... I mean, you know how it is, Tom, you've worked at AWS, the list of
00:35:55
Speaker
re:Invent announcements, even just in the serverless world (Lambda, SNS, SQS, EventBridge, Step Functions), is as long as your arm. So apologies if there's a whole bunch of stuff that I've missed, but they were the ones that I thought were particularly relevant and exciting for people to be aware of. Well, any service owner that wants to come on the podcast and talk about their service is absolutely welcome to. I will have anybody on. I like what you've done there.
00:36:23
Speaker
Ah, yes. I'm trying to think back to when I started at AWS in 2016, and the re:Invent launches now... if we get more than that in what's affectionately referred to as pre:Invent, and then we have all the actual re:Invent launches, it's insane. Yeah, it's crazy. It always massively ups my excitement. I think this year there was a lot of good stuff; last year was particularly exciting. And I think I'd judge my excitement on
00:36:54
Speaker
how excited I get by the pre:Invent announcements. Because I think, if pre:Invent is this good, if this has come out before re:Invent, what is coming out at re:Invent? I think it's quite a good heuristic; at least I use it in my own head. It's one of the cool things about working at AWS. Even working within the serverless space at AWS, there are still announcements that surprise you. It's impossible to know
00:37:16
Speaker
everything that's coming. You're like, what? Step Functions does what now? I didn't know that was happening. So yeah, it's super cool. And a lot of it's kept under wraps too. It's very much a need-to-know basis. Yeah, absolutely.
00:37:33
Speaker
So serverless tends to bleed into this idea of microservices or smaller compute deployments, however much of a purist you are in the idea of microservices there. One of the arguments that comes up, and I actually was writing a blog post on this for
00:37:56
Speaker
for my employer a couple of months ago, is the idea of containers versus serverless. And let's set aside the fact that you can actually do serverless containers for the moment.
00:38:12
Speaker
What's your take on this? Where's a sort of balancing act? And where does this argument kind of explode, I guess? Yeah. I mean, it's something I've been thinking a lot about recently. I used to very much be in the serverless purist approach being that, you know, container? No, never. Put it all in Lambda.
00:38:31
Speaker
But there is a lot of talk, like you say: are you doing containers or are you doing serverless? And, you know, I know you said to set aside the fact that you can do serverless containers, but I think that's really important, if you come back to this idea that serverless isn't binary.
00:38:49
Speaker
You aren't doing serverless or not doing serverless. You are simply more or less serverless. I like to refer to it as thinking serverless. My default is Lambda. I want to be as serverless as possible. I want as little operational overhead as possible.
00:39:04
Speaker
I can step out to other scenarios if and when I need to. There might be a specific use case. I know of a use case, not a customer I've worked with, but I know of another customer, AWS customer, where they had a scenario where they had a data processing job of some kind. And 80% of the files they needed to process with the job completed in under five minutes or two minutes, I can't remember what it was, but it was a finite number of minutes, single digit number of minutes. The other 20% took about two hours.
00:39:32
Speaker
Okay, so what do you do in that situation? Do you go, okay, I'm only going to use Lambda? Then what do you do with the ones that run for two hours, because Lambda can only run for a max of 15 minutes? Or do you go the other way and run everything in containers? And then you've got these long-running containers that are only really needed for the 20%, when the jobs that take under five minutes are the perfect use case for Lambda. So you end up stuck in this middle ground if you have this view of the world that I need to be serverless or I need to be containers.
00:39:58
Speaker
That specific customer ended up using both. So they had a Step Function that was smart enough to know, okay, a file that meets this criteria... maybe let's simplify it and say it's the number of rows in the CSV file. Okay, if I've got less than a hundred rows, I can send it to Lambda. And if I've got more than a hundred rows, I can send it to containers. And you can use that to use both.
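That routing decision is just a Choice state in the state machine. A simplified Amazon States Language sketch; the state names, resources and the `rowCount` field are made up for illustration:

```json
{
  "Comment": "Sketch: route small jobs to Lambda, large jobs to ECS",
  "StartAt": "CheckSize",
  "States": {
    "CheckSize": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.rowCount", "NumericLessThan": 100, "Next": "ProcessInLambda" }
      ],
      "Default": "ProcessInEcs"
    },
    "ProcessInLambda": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": { "FunctionName": "process-file", "Payload.$": "$" },
      "End": true
    },
    "ProcessInEcs": {
      "Type": "Task",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "Cluster": "jobs",
        "TaskDefinition": "process-file-task",
        "LaunchType": "FARGATE"
      },
      "End": true
    }
  }
}
```

The `.sync` suffix on the ECS integration makes the state machine wait for the task to finish, so both branches behave the same way from the caller's point of view.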
00:40:19
Speaker
And if you've written your application code in the right way, you can actually run the same code on Lambda and the same code in a container. My view is that the decision is kind of taken away from
00:40:37
Speaker
what the actual conversation should be, which is about reducing operational overhead.

Serverless vs. Containers Discussion

00:40:41
Speaker
How can you reduce your operational overhead as much as possible, to allow you to focus on your application code? And that might be that you package your application up as a container image and run it on App Runner,
00:40:53
Speaker
or you run it on Lambda. There was a really interesting blog post I read yesterday, or the day before, by a serverless hero: a really deep analysis of running containers on Lambda, because of course you can run containers on Lambda; it's one of the packaging formats. And actually,
00:41:11
Speaker
In certain use cases, with the right application make-up, container images are more performant than zip files, even though they're bigger, because typically you've got the OS packaged in there as well. So it's the wrong conversation to have, in my opinion. I think there's an argument for both. Now, one caveat I will put on that is that
00:41:38
Speaker
When I talk about serverless containers, what I mean is things like ECS Fargate or App Runner.
00:41:46
Speaker
What I don't mean is Kubernetes. And I'm not going to say Kubernetes is rubbish; Kubernetes is great for certain use cases. But Kubernetes has a whole bunch of operational responsibility. There's a whole bunch of things you've got to think about. Even just to deploy a cluster, there's a whole bunch of core services you need to make that cluster functional, and you need to manage that stuff. Even if you're using Fargate with Kubernetes, there is still more operational overhead than something like ECS.
00:42:16
Speaker
And that's the way I like to think about it. The question I've started asking myself recently is: is the ability of the customer, or your company, talking to the audience now, to manage and operate infrastructure a core differentiator of your business? If it is, right,
00:42:40
Speaker
Kubernetes is fantastic, because of the behaviours you can get: the way you can scale up and scale down, you can add GPU instances and take them off, and things like Karpenter allow you to be really smart with how you position pods and compute on the infrastructure that you have. But if that isn't going to be a core differentiator for your business, and that's probably the vast majority of people listening,
00:43:04
Speaker
The operational responsibility of something like Lambda or App Runner or ECS Fargate, in my opinion, is lower than what you will take on with Kubernetes. AWS is your platform, in the same way that Kubernetes might be your platform if you're building with Kubernetes. It's about picking the right tool for the job. It's the age-old "it depends" that we all know and love in architecture: pick the right tool for the right job. Start as serverless as possible.
00:43:30
Speaker
And then, if you have the right use case, step out to something like containers. Okay, you really need something complex, really complex microservices with really specific infrastructure requirements and inter-service communication and service meshes and all this stuff that Kubernetes is fantastic at? Then maybe you step out to Kubernetes. But don't make it something that everybody has to use for every single use case, because it's not going to fit every single use case. I'd be interested to get your thoughts on that same topic, though.
00:44:01
Speaker
Well, I always try to lean towards, and if you were to ask me,
00:44:11
Speaker
As far as running containers, we start with: if you're talking about Kubernetes, do you know how to use Kubernetes? If you do, and you already use Kubernetes, then it's a great idea to continue to use that rather than necessarily adopting something new. Now, I love Lambda, and I think that everything is a good fit for Lambda. I love the idea of pushing it over there.
00:44:35
Speaker
But that said, if you don't already know how to use Kubernetes, then maybe opt for something like ECS, because it is simpler. It's the easy button for... I don't know if the podcast can hear that, but my
00:44:56
Speaker
Alexa is going off telling me to get off the podcast, which is a bit odd. I can hear it. I didn't think we'd been going for 45 minutes.
00:45:10
Speaker
We can edit that part out, but I can continue. Yeah, that's fine. If anybody is listening to this podcast: yes, I actually have an Alexa in my office, and it does go off and give me annoying reminders at certain times. So, yeah. Yeah, I mean, I think just to round out the whole argument, you can make the same argument the opposite way. Like, if you already understand Kubernetes,
00:45:40
Speaker
don't try and adopt Lambda for the sake of adopting Lambda. No, I love Lambda, so everybody should adopt Lambda. I would say the same in a lot of cases. I'm trying to be balanced, Tom. I'm trying to be balanced... but okay, I won't be balanced. Yes, everybody should use Lambda for everything. It's interesting, though: the last YouTube video I released on my YouTube channel before Christmas was
00:46:03
Speaker
about how you can run any web application on Lambda, as long as it's stateless, it can run on Linux, and you can start it up by running an executable. And that executable might be a shell script that starts up an Express.js process, it might be a .NET binary, it might be a Rust binary, whatever it is. There's a project called the Lambda Web Adapter project, which is a Lambda layer, a little Rust application.
00:46:31
Speaker
And it acts like a proxy in front of your Lambda function. So when your Lambda function starts up, your .NET web application starts up, and your web application is running on localhost port 8080, for example. And Lambda Web Adapter will poll the Lambda runtime API, take the request from the Lambda runtime, and make the HTTP request to your web application that's now running on localhost.
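In the container-image flavour, wiring the adapter in is a one-line COPY from its public image into `/opt/extensions`. A sketch for a .NET app; the adapter image tag and the app name are assumptions, so check the awslabs/aws-lambda-web-adapter README:

```dockerfile
# Hypothetical image: the same build runs on Lambda, ECS Fargate or App Runner
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 AS adapter

FROM mcr.microsoft.com/dotnet/aspnet:8.0
# Placed in /opt/extensions, the adapter loads as a Lambda extension;
# outside Lambda it never runs, so the image still works on ECS as-is.
COPY --from=adapter /lambda-adapter /opt/extensions/lambda-adapter
ENV ASPNETCORE_URLS=http://+:8080 PORT=8080
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

The adapter forwards Lambda events to whatever is listening on `PORT`, which is what makes the "same image everywhere" idea below workable.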
00:46:53
Speaker
And this means you can now run any web application on Lambda: Angular, React, ASP.NET, Spring Boot, Rust, Express, Flask, all these different things. You can now pick them up and run them on Lambda. You can also do this with containers. So now you're getting into a place where you could literally take the exact same container image
00:47:13
Speaker
and run it on Lambda and run it on ECS Fargate. And there's a proof of concept I've been meaning to do forever now, which is to have an application load balancer pointing to both Lambda and ECS, the same application, and as you get a certain spike in traffic, shift more traffic to Lambda than to ECS, because Lambda will deal with the spike in traffic better
00:47:36
Speaker
than ECS will, because the cold start of a container is slower than the cold start of a Lambda execution environment. And I think you get, again, into this real blurry line, where if you said, I'm only using containers, I'm only using ECS Fargate... well, yeah, but you might have use cases where Lambda is better. You've got an internal HR application that's accessed three times a month.
00:47:59
Speaker
Still package it as a container. You can use all your container image tooling: fine, do that. Push it to ECR: fine, do that. But deploy it to Lambda. I think the serverless-or-containers framing just drives the wrong conversation. It talks around something that is not the real challenge that you're dealing with. It's an operational responsibility challenge.
00:48:24
Speaker
If we think about it, the whole point of containers was to deliver us one thing that could be run anywhere. It's like: here's your code, now pick the execution environment that makes sense, and be able to shift between them. I said not to shamelessly plug my YouTube channel, and I'm now shamelessly plugging my YouTube channel, but you can make these things work together really well. There's an example I've got, I think from about two or three months ago now, where
00:48:50
Speaker
you've got Step Functions orchestrating ECS. So say you have a .NET console application, or any kind of application that starts up, does some work, and then exits after it's finished doing the work. If you run that application in a container, your container is going to start up, your code is going to run, and then your container is going to shut down, because the process shuts down.
00:49:12
Speaker
So what you can then do is have a message on something like an SQS queue, pull the message off the queue (with a pipe, for example) and invoke a Step Function. And your Step Function can then invoke an ECS task. You can actually start a task on Amazon ECS using an API called the RunTask API.
00:49:35
Speaker
You could do that and pass in a custom environment variable. This is exactly what I did in the YouTube video: you pass in, as an environment variable, the message contents that came from SQS. Your container image starts up, reads the environment variable, parses the SQS message, does the work, and shuts down again. Now, that's not going to be anywhere near as efficient as doing the same thing on Lambda.
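The RunTask call from Step Functions, with the message body passed in as an environment variable override, looks roughly like this in Amazon States Language. Cluster, task and container names are hypothetical:

```json
{
  "Type": "Task",
  "Resource": "arn:aws:states:::ecs:runTask.sync",
  "Parameters": {
    "Cluster": "jobs",
    "TaskDefinition": "worker-task",
    "LaunchType": "FARGATE",
    "Overrides": {
      "ContainerOverrides": [
        {
          "Name": "worker",
          "Environment": [
            { "Name": "MESSAGE_BODY", "Value.$": "$.body" }
          ]
        }
      ]
    }
  },
  "End": true
}
```

A real Fargate task also needs a `NetworkConfiguration` block (subnets and security groups), elided here; the `Value.$` path pulls the SQS message body out of the state input.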
00:49:58
Speaker
But if you're stuck in the serverless-or-containers conversation, you will probably never even look at Step Functions, because you're only using containers, you're not using serverless technologies. And if it happens to take more than 15 minutes, it doesn't matter. It just runs and runs and runs, and then eventually shuts down.
00:50:18
Speaker
Yeah, then you throw in the whole microservices thing. There's a whole module on this in my course (shameless plug again, sorry everybody), a whole module on this serverless-or-containers conversation. And microservices is something completely different, because you can build microservices with serverless and you can build microservices with containers. Microservices is an architectural pattern;
00:50:42
Speaker
serverless and containers are ways of running the code that your microservices define. That's not exactly right, but you know what I mean. Again, it's a different thing. Are you doing serverless or are you doing microservices? Well, I'm doing serverless microservices, Tom.
00:51:01
Speaker
Well, so in that discussion you brought up something that I think five years ago would have had people pulling out the pitchforks: the idea of running a monolithic application inside of Lambda. That was like, oh no, don't do that. You're going to be thrown to the crocodiles here.
00:51:31
Speaker
But it sounds like that's something that is maybe becoming more acceptable. And this is absolutely something I've softened my thinking on massively, even in the last six months, let alone a year. I was absolutely militant about the fact that, you know, if you're building an API on Lambda and you've got five endpoints, that's five separate Lambda functions, it must be five separate Lambda functions. But actually... so there's a customer I've worked with; they've talked about this at re:Invent.
00:52:00
Speaker
I think it's on YouTube now. I can give you a link to put in the show description. I'm not sure if it is on YouTube yet, so I'm not going to mention the customer by name, just in case. But they started off building in this way. It's a .NET customer, and they were building this way where they had a separate Lambda function per API endpoint.
00:52:19
Speaker
And they shifted that. They'd ended up with tens, if not hundreds, of Lambda functions servicing all their APIs. And what they actually did is they took all these Lambda functions and condensed them down into six or seven ASP.NET web applications, entire web applications. And what they found is that it got easier to manage. Your developer experience is better, because you can just spin it up
00:52:45
Speaker
on localhost. One of the common challenges I get when I talk to people about Lambda is, okay, how do I run it locally? How do I debug it locally? Especially for us .NET, and even Java, developers who are used to running your debugger locally and off you go, stepping through the code. Exactly. Breakpoints, all that good stuff. So you get that back: if you're running a web framework on Lambda, you can just run it locally. And one of the more interesting things is this.
00:53:07
Speaker
Whenever I suggest this to people, cold starts is always what people come back to. And for anyone listening who's not familiar with a cold start: when I talked about these execution environments earlier, an execution environment is only created when a request comes in. And when that happens, Lambda needs to download your application code, it needs to spin up the environment, it needs to bootstrap the runtime, it runs through all this stuff, and that's a cold start. It's the period of time before a request actually gets processed.
00:53:32
Speaker
And what you'll typically find is that although your cold starts get slower (and they'll get slower because you need to start up your web framework: ASP.NET, Express, Spring Boot, whatever it might be), they will get less frequent. You will see a lot fewer cold starts. And that's because, if you've got five endpoints on your API,
00:54:00
Speaker
When a request comes in to one of your endpoints and is processed, you've now got a warm execution environment available. And if a request now comes in to another endpoint on that same API, it's going to hit that execution environment that's already available. In the single-Lambda-function-per-endpoint world, if your five endpoints were hit one after the other, you would get five separate cold starts; with the full web framework, you get one cold start. So although they're slower, you will typically get fewer of them. And when you couple that with
00:54:27
Speaker
developer experience, the ability to run it locally if you need to... I think it actually becomes not necessarily a do-this-or-do-that. It's definitely not as simple as that, but it's definitely something to try and benchmark against your actual application use case.
00:54:47
Speaker
So my default now is, if I was to leave AWS and work at a startup and build with Lambda, this is exactly how I would start building my web API. I would build it with a web framework and I would run it on Lambda.
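In .NET, the "whole web framework on Lambda" setup is a one-line addition to an ordinary minimal API, assuming the Amazon.Lambda.AspNetCoreServer.Hosting NuGet package. A sketch; the route is made up for illustration:

```csharp
using Amazon.Lambda.AspNetCoreServer.Hosting;

var builder = WebApplication.CreateBuilder(args);

// The one Lambda-specific line: inside Lambda it bridges API Gateway /
// function URL events into the ASP.NET Core pipeline; run locally, it
// does nothing and Kestrel serves requests as normal.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

var app = builder.Build();

// Ordinary minimal-API endpoints; nothing else changes.
app.MapGet("/orders/{id}", (string id) => Results.Ok(new { id }));

app.Run();
```

Because the same binary runs locally and on Lambda, the local-debugging experience discussed below comes for free.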
00:54:59
Speaker
And I would run something like what I expect my real-world load to be against the application, because that's where you get the interesting stuff. If I just deploy my Lambda function, go into the console, hit test, see a one-and-a-half-second cold start and go, ah, that's too slow, I can't do this, I'm going to go and run this on Kubernetes... that's not really a good test, because that's just a one-off invoke, and that's nothing like what you're going to see in the real world. So the first thing is, whatever you're going to do with Lambda,
00:55:25
Speaker
run it with something like a real world load. That's kind of the first case.

Monolithic Applications in Lambda

00:55:30
Speaker
And what you will typically find, in a lot of use cases, is that if you are running an entire web framework on Lambda, you will see slower cold starts, but you will see fewer of them. And it just gets really, really interesting and really nuanced when you start to look at it. And I think, when you posed the question about monolithic applications, it's
00:55:51
Speaker
maybe not quite as simple as an entire 100-endpoint API monolith in Lambda. There's another really good talk from re:Invent, actually, from Julian Wood and Chris Munns, talking about Lambda and ways of structuring Lambda functions. And I think it's Julian who talks about this, it might be Chris, one of the two of them. Imagine you've got a system and you've got an orders API, you've got a users API and you've got, I don't know, a shipping API.
00:56:18
Speaker
A monolithic application would have all three of those things in one Lambda function. But actually, you probably want to do a users Lambda function, an orders Lambda function, and a shipping Lambda function. So there would still be an element of decomposition. I'm not talking about running an entire, big, million-line-of-code monolith in Lambda. But this idea of running a web framework within Lambda is something that I think is becoming a more and more reasonable thing to do.
00:56:48
Speaker
Very interesting. And I can't believe I'm saying this, because, like I said, six months ago I'd have been like, what? What is James talking about? He's talking absolute rubbish. But I've seen a number of customer use cases now, and I've done this myself, I've run the benchmarks myself. And if you look at some of the numbers we have in the .NET benchmarking repository on GitHub: yes, at cold start, ASP.NET on Lambda is slower than just .NET managed runtime single-purpose handlers.
00:57:13
Speaker
But the number of cold starts that you see is typically... I'm actually going to try and get the numbers up right now; it's not a lot of work to pull up. Because the number of cold starts you actually see is considerably, considerably lower. For a single-purpose handler, it's about 0.4% of requests that are cold starts. And for ASP.NET, I think it's about 0.2%.
00:57:39
Speaker
Yeah, so here we go: .NET 8 on Lambda. This is Native AOT, .NET 8 on Lambda. Running Native AOT on Lambda, single-purpose handlers:
00:57:48
Speaker
There's about 360 cold starts and about 76,000 warm starts. Okay. So that's kind of the ratio between cold and warm for single-purpose handlers. For ASP.NET on Lambda, this idea of running an entire web application on Lambda: you've got 91 cold starts and 79,000 or so warm starts. So what's that, about 75% lower? Something like that, roughly.
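The mental maths above checks out. A quick sanity check of the quoted figures (treating the benchmark numbers as approximate):

```python
# Cold-start counts quoted in the conversation (approximate)
single_cold, single_warm = 360, 76_000   # single-purpose handlers
aspnet_cold, aspnet_warm = 91, 79_000    # ASP.NET on Lambda

single_rate = single_cold / (single_cold + single_warm)  # fraction of requests cold
aspnet_rate = aspnet_cold / (aspnet_cold + aspnet_warm)
reduction = 1 - aspnet_cold / single_cold                # fewer cold starts overall

print(f"{single_rate:.2%} {aspnet_rate:.2%} {reduction:.0%}")  # → 0.47% 0.12% 75%
```

So roughly 0.4% versus 0.1% of requests hitting a cold start, and about 75% fewer cold starts in absolute terms, which matches the figures quoted in the conversation.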
00:58:19
Speaker
My math is not that good. No, mine isn't either. The worst part of maths for me was mental maths; I've got a friend who's really good at it, but I am not. And then the warm start numbers are pretty comparable. So I think it gets... and then when you throw in things like, with .NET 8, Microsoft now has limited support for natively compiled ASP.NET applications.
00:58:40
Speaker
In the Java world, you've got SnapStart for Lambda, where you can pre-run some of this cold start phase, this initialization phase of your Spring Boot application, for example. So when you throw some of these things into the mix as well, it just gets a lot more nuanced than just do this or do that. Like I said, do it in a way that allows you to move quickly, benchmark it with some actual real-world load, and then make an informed decision from there.
00:59:11
Speaker
Sounds good.

Conclusion and Listener Invitation

00:59:12
Speaker
Now, if you're listening to this podcast for the first time, go back and listen to the podcast from nine months ago with James on serverless architecture, and compare and contrast. Yeah, I should have listened to that before this conversation, actually, to check I'm not a hypocrite.
00:59:29
Speaker
Actually, I think from memory, the podcast that we did before was my most-listened-to podcast. Ah, well, there we go. I hope people won't mind listening to a Northern Brit for the best part of an hour. What a way to ruin your commute. Gosh, thank you, everybody.
00:59:49
Speaker
All righty, James. Well, it has been an absolute pleasure, as always, and we're going to look forward to trying to get you up to that number five. Yes, absolutely. I'm looking forward to the t-shirt. The free shirt. I'll have to go and buy it. Thanks, Tom. All righty. Well, thank you very much, and we will see you on the podcast next time. Thanks for listening to this episode of the Basement Programmer Podcast. I really appreciate you tuning in. And if you have any feedback or comments, of course, send me an email.
01:00:18
Speaker
Also, please consider subscribing. It lets me know that you're enjoying this production. I'm looking forward to you joining me for the next episode of the Basement Programmer Podcast. In the meantime, take care, stay safe, and keep learning. Thanks for listening to this