Introduction & Disclaimer
00:00:00
Speaker
Hello basement programmers and welcome to the basementprogrammer.com podcast. My name is Tom Moore and I'm a developer advocate working for Amazon Web Services. The opinions expressed in this podcast are my own and should not be assumed to be the opinions of my employer or any other organization I might be associated with. Also, basementprogrammer.com is not affiliated with Amazon in any way.
Choosing the Right AWS Service for Microservices
00:00:23
Speaker
Today we're going to talk about a question I frequently get asked. How do I choose the right AWS service to host my microservice?
Transitioning from Monolithic to Microservices
00:00:31
Speaker
Many of the organizations I speak with these days are trying to shift the custom development effort away from large monolithic code bases that they may have developed over the course of years or even decades and move to a microservices-based architecture. There's a wide range of reasons that organizations are looking to make this change.
00:00:51
Speaker
A lot of organizations are looking to increase the velocity with which they can make changes to their applications. They're also looking to reduce the impact that a bad code change can have on the stability of those applications.
00:01:05
Speaker
A lot of programmers, myself included, have at one point in their career made a change that they thought was completely innocuous, only to find out the change broke things in ways they could not have imagined. This is often referred to as the blast radius of a change. For other organizations, the driver will be reducing costs. One way they can achieve this is to gradually break up their applications into microservices that can run on modern versions of .NET, such as .NET 6,
00:01:33
Speaker
and host those microservices on Linux. For most customers, the process of migrating code from a large monolithic codebase into microservice codebases is going to be an evolutionary process that involves peeling off functionality
00:01:50
Speaker
and re-hosting it. This is going to be a gradual approach because most organizations can't go through a big bang project that rewrites the entire application in one go. So we take one piece at a time and refactor it into an independent microservice. Then we repeat this process until there's nothing left of the original application.
00:02:10
Speaker
Now we can engage in a long discussion about the pros and cons of moving from monolithic code bases to microservices, what the benefits are, and what the best way is to achieve the end goal. Perhaps we'll get there in subsequent podcast episodes.
Deciding Factors for AWS Microservices Hosting
00:02:24
Speaker
However, one of the questions I frequently get asked by developers is, how do I choose the best place to run my microservice code?
00:02:32
Speaker
Assuming I've already figured out my methodology for extracting code, done the work of actually extracting it, and built my code into a microservice that works, I'm now at the point of trying to host this code. Now, the question about hosting isn't asked for a lack of options. Certainly there are a multitude of different ways that you can potentially run your microservice in AWS. And that's where customers need the most assistance.
00:02:59
Speaker
How do I, as a developer, pick the way that's going to best support my business long term?
00:03:05
Speaker
The first thing I always want to address when I'm posed with a question like this is, what is the end state result you're looking for? This is going to have a lot to do with how viable each particular option is, and what options are best suited for your microservice. Keeping in mind that the answer may be different for each microservice. To start with, I'm going to look at the code that we're trying to run, and I'm going to ask some simple questions. Does the code require the use of the Windows runtime environment?
00:03:33
Speaker
This could be the case if, for example, you're using some libraries that are Windows only. It could also be the case if the code you're running targets .NET Framework 4.8. In both these cases, your code is only going to run in a Microsoft Windows hosted environment, and this will limit the choices available to you.
00:03:51
Speaker
On the other hand, if your code is written in, say, .NET 6, and your code does not require any Windows-specific libraries, it can probably run inside of a Linux environment. If this is the case, then pretty much every hosting option is available to you at this point. So now we look at some of the other areas to make our decision.
00:04:10
Speaker
The next thing you want to consider is what is the runtime profile of this code when it's running in a normal healthy state? What I mean by this is once your code is triggered, how long does the code take to execute? Imagine we're processing a file with a whole bunch of records. Does it take 10 seconds or does it take an hour? Another factor is how large is the deployment package? Is it going to be a few kilobytes as we package up some simple code? Or is it going to be 10 gigabytes?
00:04:40
Speaker
Now I would suggest if your microservice is 10 gig in deployment size, you might want to have a critical look at it and determine if it's truly the size you need it to be, or if you've really just repackaged the entire monolithic code base.
Scaling and Infrastructure Planning for Microservices
00:04:54
Speaker
Another thing you want to consider are the requirements and the ability for your microservice to scale in order to meet demand.
00:05:01
Speaker
Now in a lot of cases, scale may not be a critical factor. We might be happy knowing that a single instance of our application is able to run and churn through a backlog of data as needed. In other cases, scale might simply be a function of keeping one or two instances running at all times.
00:05:19
Speaker
However, in still other cases, we may need to be able to address huge bursts in traffic, handling hundreds or thousands of requests. It's important to understand the reality of your scaling needs so that we can plan appropriately. Now that I've gathered some ideas about the system's needs, I'm able to look at the technologies that can help support the microservice.
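To make these factors concrete, here's a toy decision helper, sketched in Python rather than the episode's .NET. The function name and thresholds are illustrative rules of thumb drawn from this discussion, not an official AWS sizing tool:

```python
def suggest_hosting(needs_windows, max_runtime_minutes, package_size_gb):
    """Toy decision helper mirroring the questions above.

    Thresholds are rules of thumb from this episode, not an
    official tool: Lambda needs Linux, a run time under 15
    minutes, and a deployment package within its size limits.
    """
    if not needs_windows and max_runtime_minutes <= 15 and package_size_gb <= 10:
        return "Lambda"           # serverless, event driven, no idle cost
    if not needs_windows:
        return "Linux container"  # widest range of hosting options
    return "Windows container"    # fewer options, but it works
```

Each microservice gets its own answer: a ten-second file processor lands on Lambda, while an hour-long batch job written for .NET 6 lands on a Linux container.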
AWS Lambda for Event-Driven Microservices
00:05:43
Speaker
One of my favorite services is Lambda. Now for anybody who isn't familiar, Lambda is a completely serverless service designed to encapsulate a single function to do work on demand. Lambda is designed to be entirely event driven, so your Lambda function fires when something happens. This could be a file arriving, or it could be receiving traffic from something like API Gateway.
00:06:06
Speaker
There are countless ways that you can trigger a Lambda function, and some really cool architectures have been designed around this functionality. Even better, when there's nothing happening, Lambda doesn't have resources sitting idle that you're getting billed for. This event-driven architecture makes Lambda ideal for hosting a wide range of microservices.
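For instance, a file-arrival trigger delivers the standard S3 event payload to your function. A minimal handler might look like the following; it's sketched in Python for brevity (a .NET handler follows the same shape), and the processing step is just a placeholder:

```python
def handler(event, context):
    """Minimal Lambda handler fired by an S3 "object created" event.

    Sketch only: a real microservice would fetch and process each
    object instead of just collecting its location.
    """
    processed = []
    for record in event.get("Records", []):
        # Bucket and key come from the standard S3 event format.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```

Because the function only exists to respond to an event, there is no server for you to keep warm between invocations.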
00:06:28
Speaker
However, Lambda does have some limitations. First off, you can't use Lambda if you're dependent on the Windows runtime. Lambda also has a limit of 15 minutes of execution time, and a limit on the size of your deployment package (at the time of writing, roughly 250 MB unzipped for zip archives, or 10 GB for container images).
00:06:43
Speaker
If your microservice fits within these limitations, my first recommendation is usually to look at Lambda as your hosting environment. And why is that? Well, Lambda takes care of all the hard work for you. You don't have to worry about the infrastructure or high availability or anything like that.
00:07:00
Speaker
And if you have a workload where scale isn't a big issue, you may never have to give it any more thought because Lambda scales automatically. Keep in mind that if you're running under a heavy load, you may have to dive into scaling optimization a bit more. And we can dig into that in another session.
Flexibility of Container-Based Approaches
00:07:17
Speaker
If you've determined that you can't use Lambda, the most common option then is a container-based approach. Containers give you a huge amount of flexibility because containers can run pretty much anywhere. Containers give you more flexibility than Lambda as you have the ability to define all of the aspects of the runtime environment, and you can package up your code pretty much any way you want to. Just a side note: Lambda actually does support containers as a deployment option as well.
00:07:45
Speaker
Now once again, you need to understand if you need to run in a Windows environment, or you can run in a Linux environment. That will affect some of your options as to where and how you can deploy that container. Obviously, Linux containers have the greatest level of flexibility, where Windows-based containers have more restrictions and fewer options.
00:08:05
Speaker
So my general advice here would be if you can use a Linux based container, that's what you should do. And if you can't use a Linux based container, that's the time when you should use a Windows container.
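As a sketch of the Linux container route, a multi-stage Dockerfile for a .NET 6 service might look like the following; the project name `MyService` and image tags are placeholders, and your build steps will differ:

```dockerfile
# Build stage: compile the .NET 6 microservice with the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyService.csproj -c Release -o /app

# Runtime stage: slim Linux image with only the ASP.NET runtime
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyService.dll"]
```

The multi-stage build keeps the SDK out of the final image, which keeps the deployment package small, one of the factors we looked at earlier.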
00:08:18
Speaker
So let's assume we've gotten to the point where we've got a deployment package, and this is a container image, and we want to figure out the best technology for us to use to actually stand up the container and run the code in production.
Cost Analysis of Managed vs. Self-Managed Services
00:08:31
Speaker
Figuring out the best method is going to have a number of different dimensions to that decision, and there's no universal answer for every situation. We have to consider a couple of competing factors. One factor that is almost always a consideration for every workload is cost.
00:08:47
Speaker
Costs can be looked at in a couple of different ways. There's a raw dollar cost of using a service, and that cost will be more or less expensive depending on what's included. Typically you're going to find a service that is fully managed, like say AWS Fargate, is going to be more expensive than a service that is less managed, such as standing up an EC2 instance, running Docker on it, and hosting your own containers on that instance.
00:09:15
Speaker
Thankfully, there's a number of stopping points between those two extremes. Often if you're looking at the raw costs of running a service, and you think about it in the scope of a single deployment, it seems like the lower cost option is going to be the best way forward. However, as you start getting hundreds and even thousands of deployments into production, the value of that managed service starts to outweigh the costs. So for example, standing up an individual EC2 instance to run your containers may work for 5 or 10 containers.
00:09:45
Speaker
However, when you start having thousands of containers, you don't want to be the person that's going to manage all of that infrastructure to keep those thousands of containers running. So you're probably going to gradually move away from that idea of standing up independent EC2 instances and move along the continuum to a more managed service.
CI/CD Pipelines & Re-evaluation of Services
00:10:07
Speaker
Now Fargate, that I mentioned before, is completely hands off when it comes to infrastructure. You simply point it at your container image and it does the rest.
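To give a sense of how little you specify with Fargate, here's a minimal ECS task definition sketch; the account ID, region, and names are placeholders:

```json
{
  "family": "my-microservice",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-microservice",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-microservice:latest",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

You declare the image, CPU, and memory, and Fargate takes care of provisioning and patching the hosts underneath.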
00:10:17
Speaker
Another dimension you have to take into consideration is the amount of control you need over the environment. So once again, if you're looking at standing up EC2 instances, you can do anything you want to that instance to shape the environment any way you want. However, as you start moving towards the managed services, once again, you don't have quite as much control over that underlying infrastructure, so you don't have the ability to customize the environment quite as much.
00:10:43
Speaker
Now between those extremes of completely self-managed, like EC2, and completely managed, with Fargate, we have intermediary steps: something like our Elastic Container Service, ECS, or Elastic Kubernetes Service, EKS. These both give you additional control over the underlying infrastructure management,
00:11:03
Speaker
but with corresponding increases in overhead. Now we may get into a situation where either ECS or EKS is not an option for running your microservice. We could then look at using something like Elastic Beanstalk. Beanstalk wraps up a number of other services and provides you with a management layer for managing your applications, be it a container or native code.
00:11:27
Speaker
I should mention at this point, once you've chosen an environment for how you're going to run your microservice, I strongly suggest you build a pipeline to make sure the deployment happens in an automated fashion, and that it can be driven through a complete CI/CD process.
00:11:42
Speaker
You don't want to be deploying these microservices into production manually. We really want to avoid that whole right-click deploy out of Visual Studio and things of that nature. So build your CI/CD pipeline early, as soon as you know what your target environment is going to be, and make sure you use it for all of your deployments: dev, test, and production.
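As one possible sketch of such a pipeline using AWS CodeBuild, a buildspec that builds and pushes the container image on every commit might look like this; the `ECR_REPO` variable is an assumed environment setting, and your stages will vary:

```yaml
# buildspec.yml - build the container image and push it to ECR on each commit
version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_REPO
  build:
    commands:
      - docker build -t $ECR_REPO:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $ECR_REPO:$CODEBUILD_RESOLVED_SOURCE_VERSION
```

Tagging the image with the commit hash means every environment, dev, test, and production, deploys exactly the artifact that was built and tested.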
00:12:05
Speaker
The other important bit of advice that I would always give people is to make sure you're re-evaluating the functionality on a regular basis. If you've chosen to go with something like running your container inside of Elastic Beanstalk because that was the only option that would work, you should check back in six months to a year to make sure that situation hasn't changed. It's very likely that additional functionality will have come out, and it's possible that your ability to take advantage of these more advanced services will have evolved.
00:12:35
Speaker
Taking work off your plate and passing it on to a managed service is always beneficial in the long run, because as time goes by, we never seem to have less work; it's always more. And so any way you can automate and achieve better results through managed services is going to be a benefit.
00:12:51
Speaker
So there you have it, my little discussion on how to pick a hosting environment for your microservice. In summary, start with the most managed option, and then move backwards towards self-managed until you find the spot that works for you. Also, as I said, re-evaluate on a regular basis to make sure the assumptions that guided your decision haven't changed, and to see whether you can take more advantage of some of these services.
Conclusion & Future Episodes
00:13:16
Speaker
Hopefully you enjoyed this podcast. For basementprogrammer.com, my name is Tom Moore, and I look forward to seeing you next time.