Serverless is the wrong tool for you, if …

Eugen Sawitzki
Published in comsystoreply
6 min read · Mar 22, 2021


Photo by Zachary Kadolph on Unsplash

In my previous post “Consider Serverless for your Pet Projects” I wrote about some benefits of writing and hosting Serverless applications compared to the conventional approach of using Virtual Machines in the cloud. Once you find a technology, language or architecture that fits your current challenge perfectly, it is easy to fall into the habit of believing that your new discovery is the way to go for every other problem as well.

As the famous saying attributed to Abraham Maslow goes: if all you have is a hammer, everything looks like a nail.

After working with serverless solutions for some time, I found some limitations and discovered use cases where I would recommend taking a step back and reconsidering whether Serverless is the right tool.

Again I want to mention that I refer to AWS in this article, as it is my personal favourite among the many existing cloud computing providers and the one I have spent the most time working with.

Serverless is the wrong tool for you, if …

… you know the load your application has to be able to handle

What is important to understand about the costs of serverless solutions is that one major cost driver is flexibility. You pay for the ability to scale up and down depending on factors like the amount of traffic and the CPU/memory usage of your application. If you develop software that will only be used by employees of your company, you know quite well how much load to expect. In that case you should calculate the costs for both a classic hosting solution with a Virtual Machine (AWS EC2) and a Serverless solution (AWS Fargate).
AWS provides a very neat and simple Pricing Calculator. Let’s say you calculated a demand of 2 vCPUs and 2 GB of RAM to handle the traffic produced by the users of your application.

The cheapest EC2 option satisfying these requirements is, at the time of writing, the t4g.small instance.

Pricing for the t4g.small instance in the region eu-central-1 (Frankfurt)

The resulting monthly costs are: 1 instance x 0.0192 USD x 730 hours in a month = 14.02 USD

Now let’s compare this to Fargate using the same amount of resources.

Pricing for vCPU and GB of RAM per hour in the region eu-central-1 (Frankfurt)

The resulting monthly costs are: (2 vCPUs x 0.0144173 USD + 2 GB x 0.00158231 USD) x 730 hours in a month = 23.36 USD
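The calculation above is easy to redo for your own numbers. Here is a small Python sketch of it (the prices are the eu-central-1 rates quoted above at the time of writing; check the AWS Pricing Calculator for current ones):

```python
# Monthly cost comparison: EC2 t4g.small vs. Fargate with the same resources
# (eu-central-1 prices at the time of writing -- they will change).
HOURS_PER_MONTH = 730

def ec2_monthly(instances: int, price_per_hour: float) -> float:
    return instances * price_per_hour * HOURS_PER_MONTH

def fargate_monthly(vcpus: float, gb_ram: float,
                    vcpu_price: float = 0.0144173,
                    gb_price: float = 0.00158231) -> float:
    return (vcpus * vcpu_price + gb_ram * gb_price) * HOURS_PER_MONTH

ec2 = ec2_monthly(1, 0.0192)     # t4g.small: 2 vCPUs, 2 GB RAM
fargate = fargate_monthly(2, 2)  # same resources on Fargate

print(f"EC2:      {ec2:.2f} USD")               # 14.02 USD
print(f"Fargate:  {fargate:.2f} USD")           # 23.36 USD
print(f"Overhead: {fargate / ec2 - 1:.0%}")     # 67%
```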

As you can see, the monthly costs for running your application on AWS Fargate are roughly 67 % higher than running it on EC2 (23.36 USD vs. 14.02 USD). These are the mentioned costs for flexibility: if traffic suddenly increases a lot, Fargate will automatically provide the needed resources to your application.

So if your software has to handle a constant load, you should ask yourself whether you are ready to pay extra for flexibility you may never need.

… low response time is a must have requirement

Implementing REST APIs for websites works great with Lambda functions behind an API Gateway. In many cases it will even be free of charge, or at least very cheap. AWS spins Lambdas up for you when a request comes in and keeps them warm for some amount of time. Keeping warm means that a container with your function and the needed runtime is up and ready to be invoked. A cold function, in contrast, first needs to download the container image and your code, start the runtime and only then handle the incoming event. This is called a cold start.

While an invocation of a warm function may take a few milliseconds (depending on your code, the runtime used and the allocated memory, which also determines the number of allocated vCPUs), an invocation of a cold function may take seconds.
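The difference is easy to picture in code: everything at module level runs only during a cold start, while the handler body runs on every invocation. A minimal sketch (the handler signature follows the usual Lambda convention; the returned fields are my own, for illustration):

```python
import time

# Module-level code: executed once per container, i.e. only on a cold start.
# Heavy work here (loading config, opening connections, importing SDKs)
# is what makes cold invocations slow.
INIT_STARTED_AT = time.time()
_invocation_count = 0

def handler(event, context):
    # Handler body: executed on every invocation, warm or cold.
    global _invocation_count
    _invocation_count += 1
    return {
        "cold_start": _invocation_count == 1,
        "container_age_s": time.time() - INIT_STARTED_AT,
    }
```

The first call into a fresh container pays for everything above the handler; every later call into the same container skips it.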

Lambda: The execution lifecycle (https://www.slideshare.net/AmazonWebServices/architetture-serverless-e-pattern-avanzati-per-aws-lambda)

I am sure it is acceptable for a user of a website to wait a second in rare cases, but it might be a no-go for time-critical applications that rely on consistently low response times.
There is the possibility of keeping an instance of a function warm by regularly triggering an execution, for example via a scheduled CloudWatch event. But this keeps only one instance warm. What if you need multiple concurrent executions?
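Such a keep-warm workaround might look like the sketch below: a scheduled CloudWatch/EventBridge rule invokes the function, and the handler short-circuits when it recognizes the ping. Scheduled events do carry `"source": "aws.events"`; the early-return convention and the response bodies are my own:

```python
def handler(event, context):
    # Scheduled CloudWatch/EventBridge events arrive with source "aws.events".
    # Treat them as keep-warm pings and return before doing any real work.
    if event.get("source") == "aws.events":
        return {"warmed": True}

    # ... real request handling would go here ...
    return {"statusCode": 200, "body": "handled"}
```

Even with this in place, only the one container that receives the ping stays warm; concurrent requests beyond that still hit cold starts. (AWS also offers Lambda Provisioned Concurrency, which keeps a configured number of instances initialized, but that comes with extra costs and configuration of its own.)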

The solution to this issue quickly becomes more complicated than it should be. So here again, maybe Serverless is not the right tool.

… you are not ready to manage the complexity

Serverless architectures tend to become complex and hard to manage. You could say that Serverless is microservices on steroids: to keep everything slim and fast, you need to keep your functions small. Each function by itself may be simple, but the orchestration of and communication between them is far from simple. AWS provides services to help you out here, but more technology doesn’t automatically mean less complexity.

Example for a Serverless Architecture on AWS (https://docs.aws.amazon.com/solutions/latest/connected-vehicle-solution/architecture.html)

One Lambda may just fetch data from a third-party API, map it a little and send out a message via SQS. A second one may be subscribed to this queue, do something with the data and write it to DynamoDB. Writing to DynamoDB may trigger the execution of some other Lambda function, maybe even multiple ones. And so on. Each step in this architecture is small and does just one thing. Understanding what happens when, and why, is a totally different story.

Of course such an architecture also has certain benefits, like a clear separation of concerns and a high level of decoupling. But in the end you have to decide whether your current challenge can be solved by introducing a Serverless architecture or whether another tool is more appropriate.

… your application’s current tech stack doesn’t fit well with Serverless

The evaluations mentioned above are much easier to make if you start on a green field. That’s rarely the case. Most of the time we work on existing applications which use certain languages, frameworks and infrastructure. Such applications can’t simply be refactored for a Serverless environment; a full rewrite might be necessary. Or think of a team of Java Spring experts who have to start writing their services as Lambdas because Serverless is the new way to go, and maybe even have to use TypeScript from now on because the Node.js runtime has lower cold-start times than the JVM. You might not get the benefits of Serverless architectures you expect to get.

The tools you use not only have to fit your business requirements, but also the team and the circumstances in which the application is developed.

Conclusion

I am a huge fan of implementing software in a Serverless way, especially for something like a proof of concept which needs to be flexible and easy to evolve. Connecting AWS managed services is often much easier and requires less code if you also use the Serverless tools AWS provides. These are, in my opinion, some of the best use cases for going Serverless. Bigger, more complex applications can also have a solid and maintainable Serverless architecture. But at least as many use cases may not be suited for such an approach.
In the end it is the decision of each developer and every team whether they are tackling the problem with the right tools or not.

I hope this article reminds you to take a step back from time to time, reevaluate your technical decisions and find the right tool for your next challenge.

Passion, friendship, honesty, curiosity. If this appeals to you, Comsysto may well be your future. I am part of our LESS team which focuses on lightweight agile cloud engineering on AWS. Apply now to join us!


This blogpost is published by Comsysto Reply GmbH



Software-Developer at Comsysto Reply GmbH in Munich, Germany