What is “Serverless”?

“Serverless” is a prevalent buzzword. Three syllables promise not only the infrastructural freedom offered by the cloud, but further liberation from the tedium of creating, securing, and maintaining the resources to provide the service you may be dreaming of.

In this first article of Glue Architectures’ series on serverless functions, we’ll introduce you to the concept of Functions as a Service (FaaS). We’ll cover some of the essential questions to consider before taking a product or service serverless, including whether going serverless is appropriate for your use case, and how to determine whether providing your service serverlessly will cost out in comparison to simply spinning up an EC2 instance.

In future installments, Glue Architectures will walk through deploying an advanced function on AWS Lambda to illustrate running complex functions in a cloud-provided FaaS environment, compare Lambda to other cloud FaaS providers, and explore on-prem FaaS solutions such as OpenFaaS and OpenWhisk – just to scratch the surface.

To begin – what do we mean when we talk about a function being “serverless”, or delivering a Function as a Service?

Simply put – we’re talking about providing a tool – whether cloud-hosted, such as AWS Lambda or Microsoft Azure Functions, or self-hosted via a tool like OpenFaaS – with the source code necessary to execute one or more functions. That tool takes care of provisioning the infrastructure needed to run the code it has been given. Users invoke the function via the methods the tool makes available – delivering the function to them on demand, as a service.
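To make that concrete, here is a minimal sketch of what such a function often looks like in practice. This follows the handler shape AWS Lambda uses for Python; the event fields and the greeting logic are purely illustrative.

```python
import json

def handler(event, context):
    # The FaaS platform invokes this entry point on demand; the platform,
    # not the developer, provisions and scales the underlying compute.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The developer supplies only this code; everything beneath it – servers, runtimes, scaling – is the tool’s responsibility.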

The idea of FaaS is tempting from a business perspective, as cloud-provided solutions remove infrastructure acquisition and maintenance costs from the bottom line. Even the on-prem solutions provide wins, allowing developers to worry about developing their code while the tools take care of infrastructure provisioning and runtime scheduling for them. Developers get to focus on delivering their product to customers; almost all of the remaining headaches can be handled by the cloud service provider or your in-house DevOps team.

However, as win-win as this sounds – before committing to taking a serverless approach to implementing a use case, it’s important to consider a few points.

First – consider whether your use case is appropriate for a serverless environment. The best candidates for going serverless are use cases that can be encapsulated as independent function calls that execute efficiently, run regularly, and require no user interaction. Functions that run efficiently minimize both your business cost and the latency the function adds to your pipeline. Functions that run regularly alleviate the concern of cold starts, where infrequently executed functions incur spin-up time on each invocation – exacerbating that latency. Some examples of good use cases are tallying specific keywords within a document when text is uploaded to a specific cloud location, or performing a specific data analysis function when a dataset is posted to a specific RESTful endpoint.
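The keyword-tallying example above fits the profile well: it is a self-contained, stateless computation that a storage-upload event could trigger. A sketch of the core logic might look like this – the watch list is hypothetical, and the surrounding event plumbing (e.g. fetching the uploaded object) is omitted.

```python
import re
from collections import Counter

# Hypothetical watch list; in a real deployment this might come from config.
KEYWORDS = {"serverless", "lambda", "faas"}

def tally_keywords(text, keywords=KEYWORDS):
    # Case-insensitive count of each watched keyword in the document text.
    words = re.findall(r"[a-z]+", text.lower())
    return dict(Counter(w for w in words if w in keywords))
```

Because the function is a pure transformation of its input, it scales naturally with upload volume and holds no state between invocations – exactly the shape FaaS platforms are built for.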

Second – consider the cost of providing your use case serverlessly as compared to your other available options such as providing the service via more traditional cloud resources, or even hosting it yourself. While the various FaaS providers will charge slightly different rates (with a generous amount of monthly credit), the cost of running your use case will be a function of the compute time it requires to complete, the memory it uses during that time, and how frequently it is called upon.

FaaS services will generally consider compute time and memory requirements together in units of GB-s, and then additionally charge for the number of requests made to the service – typically for a number of cents per million requests.

For example, assume that you want to take a function that reserves 256 MB of memory per execution and completes in 5 seconds – or 1.25 GB-s per execution. Also assume that the function is relatively popular and is called 2 million times per month, which means 2.5 million GB-s utilized per month.

For standard deployments, AWS Lambda grants users 400,000 GB-s and 1 million requests free per month, meaning that in the example above you’ll need to pay for 2.1 million GB-s and 1 million requests. AWS charges $0.00001667/GB-s and $0.20 per million requests beyond the free tier, which results in the example costing $35.21 to run for the month. As a comparison, a t3.micro node would cost $7.49 to run 24/7 for that same period, but with the additional time and labor required to keep that t3.micro node up to date and functioning.
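The arithmetic above can be captured in a small helper, handy for trying out your own workload numbers. The default rates and free-tier allowances are those quoted in the example; check your provider’s current pricing before relying on them.

```python
def lambda_monthly_cost(mem_gb, seconds, calls_per_month,
                        free_gbs=400_000, free_calls=1_000_000,
                        price_per_gbs=0.00001667, price_per_m_calls=0.20):
    # Total compute consumed, in GB-seconds.
    gbs = mem_gb * seconds * calls_per_month
    # Only usage beyond the free tier is billed.
    compute_cost = max(gbs - free_gbs, 0) * price_per_gbs
    request_cost = max(calls_per_month - free_calls, 0) / 1_000_000 * price_per_m_calls
    return round(compute_cost + request_cost, 2)

# The article's example: 256 MB (0.25 GB), 5 s, 2 million calls/month.
cost = lambda_monthly_cost(0.25, 5, 2_000_000)  # → 35.21
```

Plugging in your own memory reservation, duration, and call volume makes the Lambda-versus-EC2 comparison a quick calculation rather than guesswork.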

If you’re considering hosting an on-prem cloud solution, the calculus changes: do the costs of hosting your own cloud infrastructure and hiring the staff to configure and maintain it make sense compared to the costs of standard hosting approaches?

The applicability and business justifications for utilizing FaaS are important, but are just the start of this series. Stick with us for the duration, and we’ll take you through deploying a signal analysis tool via AWS Lambda – including spinning up supporting services and security roles to successfully make it available for your users.


If you’ve got a question about utilizing FaaS in your business’ use cases – drop us a line at contact@gluearchitectures.com. Our team of experienced developers will be happy to work with you to analyze, architect, and optimize the solution that’s right for you.