
Seven things you need to know about Serverless computing

Serverless is becoming an increasingly popular way to write applications. Instead of needing to provision and maintain physical or virtual servers, development teams simply send the code (or ‘function’) to their serverless provider, and the provider takes care of running it whenever it’s called upon.

Adopting a serverless approach, or ‘function as a service’ (FaaS) as it’s also known, means those building digital applications and services no longer need to worry about infrastructure availability, or scaling to meet demand. Instead, they can focus more of their time on the product itself – and in many cases, save money as well. As with any technology, the critical thing is to use serverless in the right places at the right times. To help developers, architects and CTOs decide when, where, and how to use it, we’ve put together seven things they need to know.

1. You don’t have to go all-in on serverless

Adopting serverless doesn’t have to be an all-or-nothing choice. It may be perfect for one or more aspects of an application, but not others.

2. Serverless works best for small, independent functions

Where an application consists of multiple, relatively small and independent components, each fulfilling a specific role, these can be good candidates for serverless deployment. The more interdependent the parts of an application are, the more overhead a serverless approach introduces, because of the extra work involved in tying the functions back together and dealing with asynchronous calls or unreliable networks. Where the application will see high throughput, this additional overhead is likely to be a reasonable trade-off; in smaller applications, the complexity often outweighs the benefits. Where the logic is more substantial, needs to run for longer periods, or requires specialist hardware such as a GPU, we generally wouldn’t recommend serverless.
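To make the idea concrete, here’s a minimal sketch of the kind of small, self-contained function that suits serverless well, written against AWS Lambda’s Python handler convention. The event fields and the validation job are hypothetical, purely for illustration:

```python
import json

def lambda_handler(event, context):
    """A small, self-contained function: request validation, say.

    It holds no state between invocations and depends on nothing but
    the incoming event, which makes it a natural fit for serverless.
    """
    # 'image_url' and 'width' are hypothetical fields for illustration.
    body = json.loads(event.get("body") or "{}")
    if "image_url" not in body or "width" not in body:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "image_url and width are required"}),
        }

    # ... do the one small job this function exists for ...
    return {"statusCode": 200, "body": json.dumps({"accepted": True})}
```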

3. Use serverless carefully when replatforming existing applications

If you’re looking to move an existing monolithic application from on-premises to the cloud, serverless can be an effective way of addressing specific issues, such as known performance bottlenecks, or areas that need to scale significantly. However, leveraging serverless for elements of an existing application will generally mean considerable rewriting of the affected code, to separate it from the rest of the monolith. The effort involved means it’s only advisable where the benefits of serverless justify it. The rest of the application can then be lifted-and-shifted as-is to containers or cloud virtual machines – a much more straightforward process.

4. When and how to leverage serverless scalability

One of the biggest strengths of serverless functions is their ability to scale out massively and almost instantly, without notice. This makes them ideal where load spikes are large and potentially unpredictable, as on the UK’s register to vote service, for example, where we used AWS Lambda. Conversely, if load spikes happen at known times, or are more modest in size, there may be lower-cost ways to accommodate them, such as pre-planned capacity or scaling conventional virtual machines. When adopting serverless for scalability, consider how the functionality might be divided. Does the whole application have to scale at the same rate, or will certain elements need to grow at different rates or at different times? Are there parts that would be better suited to a more conventional deployment approach?

Imagine, for example, a ticket booking website. The ‘browse tickets’ function may need to scale quickly and with little notice, as venues release new show dates and large numbers of people log on to have a look. Only a proportion of these initial visitors will proceed to purchase tickets, so while this booking function will also need to scale, it won’t be to the same extent as the ticket browser. From a performance and cost perspective, it could therefore be beneficial to deploy the two as separate functions. Once someone makes a booking, a variety of other things will need to happen behind the scenes, including producing and sending out the tickets. Much of this doesn’t need to occur instantly, so could be deployed in a conventional way, with tasks queued up when loads are high.
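As a sketch of the booking side of that design, the handler below does the latency-sensitive work synchronously and hands the slower fulfilment tasks to a queue. It assumes AWS Lambda with an SQS queue whose URL arrives via a hypothetical FULFILMENT_QUEUE_URL environment variable:

```python
import json
import os

import boto3

# The queue URL would come from configuration; the variable name is
# illustrative, not a real convention.
QUEUE_URL = os.environ.get("FULFILMENT_QUEUE_URL", "")
sqs = boto3.client("sqs")

def book_tickets(event, context):
    """Handle the latency-sensitive part of a booking synchronously,
    then queue the slower back-office work (producing and sending out
    the tickets) so it can be processed at a steadier pace."""
    booking = json.loads(event.get("body") or "{}")

    # ... validate availability and take payment here ...

    # Fulfilment doesn't need to happen instantly, so hand it off.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(booking))

    return {"statusCode": 202, "body": json.dumps({"status": "booked"})}
```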

5. Understand cold starts

While serverless means application teams don’t need to worry about the infrastructure their code runs on, the function still executes on a server somewhere in the cloud. When it launches for the first time – because it hasn’t run for a certain period, because it needs to scale beyond the initial server’s capacity, or because the cloud provider moves the function to another server for operational reasons – the function’s code and dependencies need to be copied across and the server spun up. This is known as a ‘cold start’, and the resulting delay for the end-user can be a few seconds. The larger the function and the more dependencies it has, the longer the cold start takes – as much as 20 seconds in extreme situations.

Cold starts can put people off serverless. However, they’re only really an issue in certain scenarios, such as where a workload experiences long quiet periods. If the performance impact of cold starts genuinely is a problem for a particular workload, there are ways to mitigate it. It may also be worth considering a different approach, such as keeping a server running all the time, or using provisioned serverless capacity.
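One common mitigation is to keep per-invocation work out of the handler: anything created at module load happens once per cold start, and warm invocations then reuse it. A minimal sketch, assuming AWS Lambda in Python with boto3 (the table name and event shape are illustrative):

```python
import json

import boto3

# Module-level initialisation runs once per cold start; every warm
# invocation of this container reuses the client and table handle
# rather than recreating them on each request.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # illustrative table name

def lambda_handler(event, context):
    # The handler itself stays small, so warm invocations are fast.
    order_id = event["pathParameters"]["id"]  # illustrative event shape
    result = table.get_item(Key={"id": order_id})
    return {
        "statusCode": 200,
        "body": json.dumps(result.get("Item", {}), default=str),
    }
```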

6. Dependencies

If an application requires a lot of libraries or third-party plugins, these can present challenges in a serverless environment. Firstly, more dependencies typically mean larger deployment packages, which in turn mean slower cold starts. Ecosystems such as NPM are particularly prone to dependency bloat, though this can be mitigated through proactive monitoring and management. Secondly, because serverless gives limited control over the runtime environment, compiled extensions and tools that depend on that environment – such as binary dependencies in Python – can cause headaches.
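Where a heavy dependency is only needed on some code paths, one simple option is to import it lazily, so the common path doesn’t pay its load cost during a cold start. A sketch, assuming a Python Lambda where PDF rendering – via the reportlab library, standing in here for any large dependency – is the exception rather than the rule:

```python
import json

def lambda_handler(event, context):
    if event.get("format") == "pdf":
        # Heavy dependency, imported lazily so that the common JSON
        # path never pays its load cost. (reportlab stands in for any
        # large library here.)
        from reportlab.pdfgen import canvas

        # ... render a PDF with `canvas` ...
        return {"statusCode": 200, "body": "PDF generated"}

    # The common, lightweight path.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Trimming unused packages out of the deployment entirely goes further still, since it also shrinks the artefact that has to be copied across during a cold start.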

7. You need to test in the cloud

Lastly, testing applications with serverless functions anywhere other than the cloud environment where they’ll ultimately run remains problematic. Unit testing can be done locally, but because control over parts of the runtime is ceded to the cloud provider, certain core behaviours of an application are impossible to replicate locally. Examples include how network requests are handled, or the way other cloud services are triggered.
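Local unit tests can still cover the function’s own logic by invoking the handler directly with a hand-built event. A sketch in pytest style, assuming the validation handler shown earlier lives in a hypothetical module called handler:

```python
import json

from handler import lambda_handler  # hypothetical module under test

def test_rejects_request_with_missing_fields():
    # A hand-built event exercises the function's own logic; how API
    # Gateway actually shapes, routes and retries requests can only
    # be verified against the real cloud environment.
    event = {"body": json.dumps({})}
    response = lambda_handler(event, context=None)
    assert response["statusCode"] == 400
```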

A powerful enabler

Serverless is a valuable part of the enterprise technology toolbox. When applied appropriately, it frees development and operations teams from many of the infrastructure considerations and constraints associated with traditional deployment approaches. In so doing, it enables organisations to direct more energy and resource towards the actual product, rather than the infrastructure it’s running on.
