Running serverless containers in AWS Fargate, Google Cloud Run and Azure Container Instances
Cluster-based orchestrators such as Kubernetes are notoriously complex to manage. Using a managed service such as Amazon's EKS or Azure's AKS might take care of the server infrastructure, but it doesn't spare you from the operational overhead involved in configuring, securing and managing your pods. Do we really have to endure this overhead just to run a bunch of containers?
The main cloud providers each have an emerging "serverless" offering for running containers without the hassle of running any orchestration infrastructure. Azure were first out of the gate with Container Instances, followed by AWS Fargate and, more recently, Google's Cloud Run.
Broadly speaking, each of these services offers a similar proposition. You can spin up containers on demand and tear them down again. You are charged for usage in terms of memory, processor time and, in Google's case, the number of requests. You are spared all the gruesome details involved in building and maintaining nodes and clusters.
Is this really “serverless”?
This may be convenient, but it doesn't necessarily count as "serverless". Running containers in these services isn't quite comparable to the PaaS experience provided by function-as-a-service platforms such as AWS Lambda and Azure Functions.
A "true" serverless platform should provide a complete abstraction of the hosting infrastructure along with pay-as-you-go pricing. Provisioning should be simple and immediate. You should also expect a degree of automatic elastic scaling in response to demand. Ideally this should include scaling to zero in the absence of any demand.
None of these platforms fits the description completely. For example, Azure Container Instances is simple to get up and running, with an experience similar to the docker run command. So far so good, but instances are provisioned manually and there is no automatic scaling, which limits its usefulness.
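To give a feel for that simplicity, here is a minimal sketch of starting a container with the azure-mgmt-containerinstance Python SDK. The subscription, resource group, names and sizes are all illustrative assumptions rather than a prescribed setup:

```python
# A minimal sketch of starting a single container in Azure Container
# Instances. All identifiers below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

container = Container(
    name="hello",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5),
    ),
)

group = ContainerGroup(
    location="westeurope",
    containers=[container],
    os_type="Linux",
)

# Provisioning is a single call -- there is no cluster to create first.
client.container_groups.begin_create_or_update(
    "my-resource-group", "hello-group", group
).result()
```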
AWS Fargate is more a simplified way of deploying containers to ECS or EKS than a clean, serverless abstraction. It doesn't conceal the underlying clusters, and provisioning can take up to twenty minutes while AWS spins everything up. You can specify resource consumption limits, but there's no elastic scaling unless you are prepared to configure the underlying cluster yourself.
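The cluster shows through in the API too. A rough boto3 sketch of launching a Fargate task, assuming a task definition and the VPC networking have already been set up (every identifier here is a placeholder):

```python
# Launching a one-off Fargate task with boto3. Note that you still
# address an ECS cluster and a registered task definition.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="my-cluster",          # Fargate tasks still live in a cluster
    launchType="FARGATE",
    taskDefinition="my-task:1",    # references a registered task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```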
Google's Cloud Run comes closest to a genuine "serverless" proposition. It provides a smooth developer experience with auto-scaling out of the box, including scale-to-zero. You can specify the number of concurrent requests that a single instance can accommodate and the maximum number of instances that can be created.
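As a sketch of those two knobs, this is roughly how a deployment might look using the google-cloud-run Python client; the project, region, image and limits are illustrative assumptions:

```python
# A hedged sketch of deploying a Cloud Run service, setting per-instance
# request concurrency and the scaling bounds. Values are illustrative.
from google.cloud import run_v2

client = run_v2.ServicesClient()

service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[run_v2.Container(image="gcr.io/my-project/hello")],
        # Up to 80 concurrent requests per instance...
        max_instance_request_concurrency=80,
        # ...scaling between zero and ten instances on demand.
        scaling=run_v2.RevisionScaling(
            min_instance_count=0, max_instance_count=10,
        ),
    ),
)

operation = client.create_service(
    parent="projects/my-project/locations/europe-west1",
    service=service,
    service_id="hello",
)
operation.result()  # blocks until the deployment completes
```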
Can we abstract away orchestration?
Simplified deployment may be a convenience, but it can come at the cost of some of the essential features of an orchestrator like Kubernetes: health checks and self-healing, replication control, routing and load balancing, automated rollouts and rollbacks, and canary deployments.
In short, you don't get many of the features that you need to manage large numbers of containers in production. It can be tempting to use serverless containers while you are “testing the water”, but it's surprising how quickly these concerns catch up with you.
Serverless containers are also a relatively expensive way of organising processing. Prices have come down in recent months, especially for Azure, but you're still paying over the odds for processing power in comparison with a cluster.
For example, an EC2 instance with 8 vCPUs and 32GB of memory will set you back around $275 a month, while the equivalent resource consumption in AWS Fargate will cost more like $385. This isn't quite comparing like with like, as you don't pay for unused capacity in a serverless model. Whether you can take advantage of this depends on your processing demand and whether your unused services are actually torn down promptly. The rough arithmetic below makes the trade-off concrete.
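The hourly rates in this sketch are hypothetical figures chosen to land near the round numbers above; check current AWS pricing for your region before drawing any conclusions:

```python
# Back-of-envelope comparison of an always-on EC2 instance with the same
# resource consumption billed through Fargate. Rates are assumptions.
HOURS_PER_MONTH = 730

ec2_hourly = 0.376                    # assumed rate for 8 vCPUs / 32GB
ec2_monthly = ec2_hourly * HOURS_PER_MONTH              # ~$275

fargate_vcpu_hourly = 0.0506          # assumed per vCPU-hour rate
fargate_gb_hourly = 0.0038            # assumed per GB-hour rate
fargate_monthly = (8 * fargate_vcpu_hourly
                   + 32 * fargate_gb_hourly) * HOURS_PER_MONTH  # ~$385

print(f"EC2:     ${ec2_monthly:,.0f} per month")
print(f"Fargate: ${fargate_monthly:,.0f} per month")

# Fargate only pays off for intermittent workloads. On these assumed
# rates it breaks even once utilisation drops below roughly 71%:
print(f"Break-even utilisation: {ec2_monthly / fargate_monthly:.0%}")
```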
When would you use a "serverless" container?
Running lots of small services in a distributed environment is complex and you quickly run into challenges around scale, deployment, monitoring and resilience. You can't necessarily abstract this complexity away into a “serverless” environment without limiting the applicable use cases.
Ultimately, serverless containers are a better fit for relatively small workloads with predictable demand or short-lived batch jobs. Resource-intensive or highly variable workloads can become problematic, as can large estates of containerised applications.
Beyond this broad premise, there are some technical differences between the services that can also have a bearing on how you might want to use them.
For example, Google Cloud Run only allows you to spin up single containers, while AWS Fargate and Azure both allow multiple containers to be deployed together. The latter approach allows more freedom in how you organise your deployments and supports patterns such as sidecars, as in the sketch below.
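For instance, a Fargate task definition can declare an application container and a logging sidecar side by side. A hedged boto3 sketch, with placeholder images and sizes (a real task would typically also need an execution role for pulling images and shipping logs):

```python
# Registering a Fargate task definition that pairs a web container with
# a logging sidecar. Image names and resource sizes are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-with-sidecar",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",    # required for Fargate tasks
    cpu="512",               # shared by all containers in the task
    memory="1024",
    containerDefinitions=[
        {
            "name": "web",
            "image": "my-registry/web-app:latest",
            "essential": True,
            "portMappings": [{"containerPort": 8080}],
        },
        {
            # Sidecar that ships logs alongside the main container
            "name": "log-shipper",
            "image": "my-registry/log-shipper:latest",
            "essential": False,
        },
    ],
)
```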
Some of these differences can be quite nuanced. Only Google Cloud Run supports in-place versioning, where traffic is gradually migrated to a new deployment. Unsurprisingly, you can only run Windows containers on Azure. Google Cloud Run and Azure Container Instances both support GPU-based workloads, while AWS Fargate does not. Azure provides native persistence for containers in the form of Azure File Shares, while the other services assume stateless containers.
In general, serverless containers feel like a niche option rather than a potential replacement for Kubernetes. Given the complexity and variety of distributed applications it may be unrealistic to imagine that a "serverless" abstraction can completely remove the operational overheads involved in orchestration.