
Container Orchestration: Lightweight Container Runtimes 

In a previous blog post, How to choose the right Container Orchestrator, we explored a variety of container orchestration systems to handle the lifecycle of your containerised workloads, from scaling and deployment, to self-healing and monitoring. The options we looked at are ideal for large and complex systems, or for running multiple disparate workloads from the same interface. 

However, if you’re just starting out with containerisation, or if you only have modest requirements, then these systems may not be a good fit. You’d still have to set up and maintain the container-orchestration system itself, which involves non-trivial expense and regular training to keep knowledge current. In these instances, there are lighter-weight container runtimes available, which will give similar advantages without the extra cost of managing the orchestration system itself. 

The options presented below are all lightweight ways to run a modest container workload, each with its own advantages and disadvantages. And by ‘modest’, we mean a few applications of varying complexity in an environment that doesn’t have super-strict compliance requirements. These applications would be using standard web protocols, and wouldn’t be expected to do too much scaling during their lifecycle.  

AWS Fargate 

In our previous post on cloud orchestrators, we discussed the three main ways to run containers on AWS: Amazon Elastic Container Service (ECS), (indirectly) Amazon Elastic Kubernetes Service (EKS) and AWS Fargate. AWS Fargate is the lightweight option, described by Amazon as ‘serverless compute for containers.’ 

The big advantage of Fargate is its tight integration with other AWS services, such as CloudWatch for logs and alerting. If you’re already using AWS for other services, this makes management a lot easier. One disadvantage is the task-definition file you need to submit to the Fargate service. While these files are reasonable to work with, writing them is admittedly more awkward than working with some of the alternatives below. AWS has provided some example Fargate task definitions in the user guide. 
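
To give a feel for the format, here is a minimal, hypothetical task definition for a single web container on Fargate. All names, the region and the account ID are placeholders; Fargate tasks must use the `awsvpc` network mode and declare CPU and memory at the task level.

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-web-app",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```

The `logConfiguration` block is what wires the container’s output into CloudWatch Logs, illustrating the AWS integration mentioned above.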

Google Cloud Run 

When you think of Google and containers, Kubernetes is the first thing that comes to mind. However, Google also has Cloud Run, which it describes as a “managed compute platform” for containers. Deploying a Cloud Run ‘service’ can be accomplished by a single gcloud command, though there are a lot of arguments to fill in! 

As you can imagine, the big benefit of Google Cloud Run is its tight integration with the rest of the Google Cloud ecosystem. An additional advantage is its development support: tools such as Cloud Code make creating Cloud Run services quicker and simpler. Furthermore, Google offers two pricing options: request-based and instance-based. If your service is low-traffic or low in computational requirements, then using request-based pricing may result in significant cost-savings, without compromising on the platform. 
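
As a sketch of what that single command looks like (service name, project, region and image path are all placeholders to substitute for your own):

```shell
# Deploy a container image to Cloud Run as a managed service.
gcloud run deploy my-service \
  --image=europe-west1-docker.pkg.dev/my-project/my-repo/my-app:latest \
  --region=europe-west1 \
  --allow-unauthenticated \
  --memory=512Mi \
  --max-instances=3
```

Most of the arguments are optional and have sensible defaults, which is why a first deployment can be this short despite the large number of available flags.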

Google Anthos 

Google Anthos is also worth a mention. While it’s built more as a management service for different types of workload (so is not a lightweight runtime per se), it includes a rare feature: access to GPUs. If you need to run container workloads with GPU-offloading, this is one of the few options available. 

Microsoft Azure 

Microsoft Azure provides a variety of options for lightweight container runtimes. While there is generally a huge amount of overlap in features, each is optimised for a specific use case. Let’s explore each one:

Azure App Service 

Azure App Service was one of the first services provided by Microsoft Azure. While it initially only supported code bundles, it can now natively run containers via its Web App for Containers offering, which runs as part of Azure App Service within an App Service Plan. 

App Service is optimised for web workloads, and includes useful additions around this, such as automatic scaling and high-availability. This makes it an excellent fit for running websites, or where you need a highly stable and robust platform. Additionally, it supports running multiple containers via Docker Compose. Simply upload your docker-compose.yml file, and App Service will do the rest – an unusual feature amongst all the other alternatives! 
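
A hypothetical two-container Compose file for App Service might look like the following (image names are placeholders; note that App Service supports a subset of the Compose specification, so it’s worth checking Microsoft’s documentation for which options are honoured):

```yaml
# docker-compose.yml - a web front end with a Redis cache
version: '3.8'
services:
  web:
    image: myregistry.azurecr.io/my-web-app:latest
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
```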

Azure Container Apps 

Azure Container Apps is relatively new, having been generally available since May 2022. Conceptually, it’s similar to AWS Fargate, providing a serverless container runtime. 

Container Apps is more general-purpose than App Service, and Microsoft recommends it for workloads such as microservices. It supports long-running tasks, and can scale in response to events such as messages arriving on a queue. 

Internally, Container Apps is powered by Kubernetes. While you can’t access the underlying Kubernetes cluster from Container Apps, you can use what Microsoft calls ‘Kubernetes-style’ conventions. This means you can do things such as service-discovery and traffic-splitting, without delving into Kubernetes itself. If you’re working with applications designed for Kubernetes, but don’t want to have to run Kubernetes, this may be a good fit. 
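
Creating a Container App is a single CLI call once you have a resource group and a Container Apps environment. The following is a sketch with placeholder names; the replica bounds show the serverless scaling model, including scale-to-zero:

```shell
# Create a Container App with external HTTP ingress.
az containerapp create \
  --name my-api \
  --resource-group my-rg \
  --environment my-containerapp-env \
  --image myregistry.azurecr.io/my-api:latest \
  --target-port 8080 \
  --ingress external \
  --min-replicas 0 \
  --max-replicas 5
```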

Azure Container Instances 

Azure Container Instances provides a place to run Hyper-V isolated containers – and that’s more-or-less it. Features such as scaling and load-balancing are not provided, and are left for you to implement as required. 

While Container Instances is a much more niche product than the two above, it is useful for cases where you want an un-opinionated place to run containers, and don’t need the usual Azure-included bells and whistles. 
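
Running a container in ACI is correspondingly simple. A hedged example with placeholder names (CPU is in cores, memory in GB):

```shell
# Run a single container group in ACI - no scaling or load
# balancing is provided; you get exactly one running container.
az container create \
  --name my-container \
  --resource-group my-rg \
  --image nginx:latest \
  --cpu 1 \
  --memory 1.5 \
  --ports 80 \
  --dns-name-label my-container-demo
```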

Container Instances can be integrated into an Azure Kubernetes Service (AKS) cluster to provide rapid scale-out, without managing the underlying nodes. Details on how to create and configure an AKS cluster to use virtual nodes are available on Microsoft’s learning portal.  

Azure Spring Apps 

Finally, there is Azure Spring Apps, an extremely niche product aimed at microservices that use Spring Boot. If that describes your workload, then Spring Apps is an excellent choice! 

DigitalOcean App Platform 

DigitalOcean App Platform is a fully managed platform for running applications. The platform handles the underlying infrastructure and provides metrics, monitoring, alerts and even automatic SSL certificates. Like Azure App Service, App Platform can either run application code directly, or use a custom container. 

When configuring an app on App Platform, you have multiple options for the source of your container, including Docker Hub, a private DigitalOcean Container Registry, or even a GitHub/GitLab repository. With the latter, App Platform will automatically build and run the container on pushes to the main branch. DigitalOcean has more details on how App Platform builds Dockerfiles on its website. 
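
Apps can also be defined declaratively via an ‘app spec’. Below is a minimal, hypothetical spec for a single containerised service pulled from a DigitalOcean Container Registry (names and sizes are placeholders):

```yaml
# app.yaml - a single web service from a private DOCR image
name: my-app
services:
  - name: web
    image:
      registry_type: DOCR
      repository: my-web-app
      tag: latest
    http_port: 8080
    instance_count: 1
    instance_size_slug: basic-xxs
```

A spec like this can then be deployed from the command line with `doctl apps create --spec app.yaml`.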


StackPath 

StackPath is a specialised option for latency-sensitive applications. It provides much quicker connections to end users, by deploying your container(s) to StackPath’s global edge network, which it claims means end-user data can reach containers up to 2.6x faster than containers running in the public cloud.  

If your application is sensitive to latency and requires the extra gains that edge deployment provides, then StackPath is an excellent fit. However, this does come at a much steeper price than most other options. In many cases, pairing your workload with an off-the-shelf content delivery network (CDN) may be sufficient to achieve the performance you need. 

OpenStack Zun 

OpenStack is a set of applications that enable you to run your own cloud. While you can run this yourself, a number of hosting providers have OpenStack-enabled cloud offerings (including Rackspace and OVHcloud). Zun is an application within the OpenStack ecosystem that enables you to run and manage containers as first-class OpenStack resources. A simplified API means you can manage the containers without getting bogged down in the actual runtime and other areas. 
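
Zun plugs into the standard OpenStack CLI. A sketch of running and listing containers, assuming Zun is installed and the names and network are placeholders (memory is specified in MB):

```shell
# Run a container as a first-class OpenStack resource via Zun
openstack appcontainer run \
  --name my-container \
  --cpu 1 \
  --memory 512 \
  --net network=my-network \
  nginx:latest

# List containers managed by Zun
openstack appcontainer list
```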

While this is a nice, simple option, it really only makes sense if you already have an OpenStack Cloud available for use, along with readily available OpenStack expertise. 

Summary of lightweight container runtimes 

As is the case with larger workloads, you have a multitude of options when it comes to running containers in a lightweight runtime, each with different features and optimisations. While containerisation ensures your application is portable, it’s still important to think carefully about what platform is the best fit for you, given that migrating platforms and providers can be fraught with complexity! 

If you’re looking for assistance in choosing the right container orchestrator, we’re more than happy to help. Softwire has extensive experience in a variety of container platforms and architecting next-generation tailored systems. Please tell us about your project and let’s have a chat.
