Do you really need Kubernetes?
Running containers in production is no picnic. You will need to solve problems such as elastic scaling, fault tolerance, rolling deployments and service discovery. This is where an orchestrator like Kubernetes comes in. There are other orchestrators out there, of course, but it’s Kubernetes that has gained traction and, critically, the support of major cloud providers.
The catch with Kubernetes is that you are trading operational flexibility for complexity. Kubernetes has a lot of moving parts and abstractions. The initial abstractions of pods, nodes and deployments are easy enough to master. However, most implementations suffer runaway complexity, getting bogged down in less accessible concepts such as endpoint slices, ingress controllers, lifecycle hooks, and dynamic volume provisioning.
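To make the contrast concrete, here is the gentle end of that learning curve: a minimal Deployment manifest (the image and names are illustrative placeholders, not a recommendation) that declares three replicas of a stateless web container and leaves Kubernetes to keep them running.

```yaml
# A minimal Deployment: declare three identical pods and let the
# cluster maintain that state. Names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```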
Not for the faint-hearted
Kelsey Hightower's "Kubernetes The Hard Way" gives a flavour of the kind of intricacies involved in standing up a Kubernetes stack by hand. It’s not for the faint-hearted. You can protect yourself from much of this underlying complexity by using a managed service such as AWS EKS or Azure AKS to provision a cluster. Still, there’s a lot of non-trivial stuff to grapple with before you can get a viable platform into production.
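For a sense of what the managed route looks like, eksctl, the community CLI for EKS, can stand up a cluster from a short declarative file. The sketch below is illustrative only; the cluster name, region and node sizing are placeholder values:

```yaml
# Hypothetical cluster definition, used as: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster     # placeholder name
  region: eu-west-1      # placeholder region
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
```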
Security is particularly involved, but that’s to be expected in a containerised environment where each new application brings a new set of threats. You’ll need to work through a lengthy checklist that includes scanning images and ensuring their provenance, limiting access to nodes, isolating resources by namespace and network, managing secrets, establishing resource quotas, ensuring everything runs in the correct security context and logging anything that moves.
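Some of that checklist translates directly into manifests. As a rough sketch (the names, image and limits are all placeholders), a restrictive pod security context and a namespace resource quota might look like this:

```yaml
# Illustrative hardening: refuse to run as root, block privilege
# escalation and keep the root filesystem read-only.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app          # placeholder name
  namespace: team-a           # placeholder namespace
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
    - name: app
      image: example/app:1.0  # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
---
# A quota capping what the namespace as a whole can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```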
Kubernetes is also at the heart of a constantly evolving ecosystem. You will need to consider solutions for concerns such as configuration, secrets, application management, logging and monitoring. These tend to require extra application infrastructure to be deployed into your cluster. Helm has emerged as a useful package manager to streamline application deployment, but the chart template format adds another layer of configuration.
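To give a flavour of that extra layer, a Helm template replaces the literal values in a manifest with Go template expressions, resolved at install time from the chart's values.yaml. This fragment assumes the conventional replicaCount and image.repository/image.tag values found in Helm's starter chart scaffolding:

```yaml
# templates/deployment.yaml (fragment): literal values become
# template expressions filled in from values.yaml at install time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```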
Is this complexity worth it?
This complexity can be worth it, as Kubernetes does some impressive stuff. Take a node down and watch the applications automatically spread across the remaining nodes. Autoscaling and rolling updates can feel effortless. Services can go bad and be dealt with before you really notice that anything’s wrong. Deployment models such as canary builds and blue-green releases are possible without losing sanity or hair.
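Much of this behaviour is declared rather than scripted. As an illustrative sketch, the rollout policy below tells Kubernetes to replace pods a couple at a time, keeping the service close to full strength throughout; the numbers and image are placeholders:

```yaml
# Illustrative rolling update policy: swap pods gradually, never
# dropping more than one pod below the desired count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2          # up to 2 extra pods during the rollout
      maxUnavailable: 1    # at most 1 pod below capacity
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:2.0   # placeholder image being rolled out
```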
If you don’t use Kubernetes then you'll need your own solutions for deployment, rollback, health monitoring, elastic scaling and networking, whilst still being beholden to a long and difficult security checklist. In this case, Kubernetes provides solutions to a set of problems that you may have to solve anyway.
Much depends on what you’re trying to run on Kubernetes. It is designed to serve stateless "twelve-factor" services that can be freely distributed between processing nodes. It expects services to start quickly and shut down gracefully, isolate their dependencies, derive configuration from the environment, scale out with extra instances and push logs out as event streams. Many containerised applications just don’t behave like this.
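In manifest terms, a "well-behaved" twelve-factor service looks something like the sketch below: configuration drawn from the environment, a readiness probe so traffic only arrives once it can be served, and a grace period in which to shut down cleanly. The names and endpoints are, again, hypothetical:

```yaml
# Sketch of a twelve-factor-friendly pod (names are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: twelve-factor-app
spec:
  terminationGracePeriodSeconds: 30   # time to drain before SIGKILL
  containers:
    - name: app
      image: example/app:1.0          # placeholder image
      env:
        - name: DATABASE_URL          # config comes from the environment
          valueFrom:
            secretKeyRef:
              name: app-secrets       # hypothetical Secret
              key: database-url
      readinessProbe:                 # receive traffic only when ready
        httpGet:
          path: /healthz              # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5
      lifecycle:
        preStop:                      # let in-flight requests finish;
          exec:                       # assumes the image has a sleep binary
            command: ["sleep", "5"]
```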
Alas, Kubernetes can often end up becoming a "bucket" for any processing workload. A mindset develops where if it can be shoe-horned into a container then it can be shoe-horned onto Kubernetes. Legacy applications that have somehow been conjured into containers won’t necessarily be any easier to manage if they are deployed to Kubernetes.
Just because you can run something in a container on Kubernetes, it doesn’t follow that you should. Some applications might be better off running in a completely different context. For example, databases aren’t always designed to work as transient containers, while administration tasks such as index management and backup can be complicated by Kubernetes’ storage abstractions. You may be better off running your data in a managed PaaS service to cut out the operational overhead.
The complexity in a running cluster can often reflect the arrangement of your services. If you have a bunch of closely coupled applications locked in a complex web of mutual dependency, your production environment is going to be hard to manage. Kubernetes can encourage a gradual proliferation of small services by concealing the marginal overhead associated with each new service. This style of architecture can make complexity inevitable: as it turns out, distributed applications are hard to plan and even harder to operate.
What are the alternatives?
Kubernetes is optimised to serve containers on a grand scale. After all, it is a platform that emerged from the bowels of Google. That means it was designed to solve problems at a scale most people never reach and provide a level of flexibility that most will never need. It also expects a small army of Site Reliability Engineers to be on hand building processes around it. For most environments, Kubernetes could be regarded as overkill.
There are other ways to organise processing in cloud environments. Platforms such as Heroku can make it trivial to run web applications without having to worry about infrastructure. Managed services such as Google Cloud Run and AWS Fargate allow you to spin up container pods without worrying about what lies under the hood. "Serverless" offerings such as AWS Lambda boil the unit of deployment down to the function and can be a cost-effective way of providing APIs.
Cost can be an issue, as you tend to get more bang for your buck with a cluster. A direct comparison can be difficult, as you don’t pay for unused capacity in a managed service. That said, at the time of writing, an entry-level EKS cluster in AWS costs around £8.50 per CPU and GB of RAM every month. The equivalent processing power would cost £37 in the managed Fargate service. This kind of cost difference can really start to matter at scale, which is where Kubernetes comes in, of course.
Given the emergence of managed services and “serverless” container environments, Kubernetes does not feel like an end-state but something that is more likely to live behind an abstraction in years to come. Commodity services will emerge to meet generic use cases, shielding developers from the complexity that often makes Kubernetes feel excessive.