2022-07-22
This article should help you to decide if you are ready to use Kubernetes and whether it’s the right platform to use for your application.
Kubernetes is a portable, extensible, open-source platform for managing containerised workloads and services, which facilitates both declarative configuration and automation. Kubernetes services, support and tools are widely available, and it has a large, rapidly growing ecosystem.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s (as an abbreviation) results from counting the eight letters between the "K" and the "s". Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community. Google open-sourced the Kubernetes project in 2014.
Historically, organisations ran applications on physical servers. Typical challenges of using physical servers are:

- There is no easy way to define resource boundaries, so one application can consume most of a server's resources and starve the others
- Running each application on its own dedicated server leaves hardware underutilised
- Scaling means buying, racking and maintaining more physical servers, which is slow and expensive for organisations
Virtualisation was developed to address many of the cost and performance bottlenecks of physical hardware and introduces efficiencies by allowing you to emulate multiple servers, otherwise known as Virtual Machines (VMs). These isolated VMs are configured to share the same hardware resources (CPU, memory, storage, networking), ensuring that you can effectively manage resource allocation and balance performance utilisation across multiple applications on the same physical hardware.
Virtualisation also addresses the issue of scalability. Adding, removing, and moving workloads becomes a very simple process, as does sharing workloads across disparate hardware. Each VM runs a full Operating System (OS), and a full stack of components, on top of virtualised hardware. Each VM therefore has the same costly lifecycle-management issues as its physical counterpart, e.g. OS upgrades and patching, access control, performance monitoring and logging.
Long before containerisation, server virtualisation was the most efficient way of running applications. However, it always comes with some resource overhead if you need to maintain a lot of servers because you must run a whole OS, including the kernel.
The ancestor of containerisation, chroot, has been around since Unix Version 7 was released in 1979. Isolating a running process from the root filesystem, effectively simulating a root directory, was typically known as a chroot jail. Containerisation enables a lightweight platform that shares the OS kernel with other containers. Like a VM, a container has its own filesystem, share of CPU, memory, process space, and more. Furthermore, it is decoupled from the underlying infrastructure, making it portable across clouds and OS distributions.
Sharing a kernel has its drawbacks. I will attempt to cover this in detail in a separate article focused on Container Security and Platform hardening. I mention it here so you can bear it in mind. The risks are fairly well understood, and there are plenty of resources on the internet that dive into more detail. Having said that, some of the more obscure mitigations are dependent on your environment and can only be identified following assessment by an experienced Security Practitioner.
If you have containerised applications, you are ready to consider using Kubernetes to run your workloads.
If you do not have the buy-in of Senior Stakeholders, stop here!
Building out a containerisation platform is an expensive undertaking. In the short term, many companies find that they are financing both the new AWS infrastructure and the original on-premises infrastructure. In the medium term, and as confidence grows, the original infrastructure can be shut down and decommissioned, and perhaps the datacentre can be exited altogether.
If you do not have a Capacity Plan, stop here!
Before you set about building any new platform, you will need to understand the short-term and medium-term demands at the very least (long-term projections can help at this stage too). This is critical to ensure that you do not have to stop to redesign and redeploy some of your infrastructure. DNS is a typical example of this.
Using a Capacity Plan, you can also begin to put together a financial forecast and begin to understand the Total Cost of Ownership (TCO). The TCO can vary greatly, so it’s a good idea to contrast and compare your container platform options to make the most cost-effective decision.
If you are not familiar with Agile Delivery Practice, or ready to adopt a DevOps culture, stop here!
To make the platform cost effective, you will need to understand how to make the most of what you have, and where and how savings can be driven. This will involve retiring old processes, so if you are not ready to adopt the culture, or are resistant to change, you will not easily realise the potential cost savings and optimisation that the platform can offer.
Kubernetes enables the speedy commissioning of code into a live production environment. With the fast turnaround of development efforts and low operational overhead, you will make the most effective use of its capability. Without this culture shift, you will find yourselves using Kubernetes in much the same way as you managed your previous infrastructure and will never realise any cost saving.
Do not use Kubernetes if you do not have trained engineers who can manage the platform. Tinkering with manifests can result in undesired consequences. Knowing what you are doing here is key to any successful outcome.
If your application can be containerised, and you have addressed the concerns outlined in the section above - "When NOT to use Kubernetes" - then read on. Bear in mind that this is by no means an exhaustive list; depending on your specific use case, there may well be other reasons.
When starting out with Kubernetes it is important to understand its capabilities, as each will require some forethought which can help to shape your configuration. Kubernetes provides you with:

- Service discovery and load balancing
- Storage orchestration
- Automated rollouts and rollbacks
- Automatic bin packing
- Self-healing
- Secret and configuration management
Of course, this all depends entirely on your implementation. Not all Kubernetes platforms are equal and unless you know what to look for, you may select a Kubernetes platform that cannot support secure secrets.
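To illustrate why secrets deserve scrutiny: a stock Kubernetes Secret is only base64-encoded, not encrypted, so anyone who can read the object can decode it. A hypothetical example (the name and value are invented for illustration):

```yaml
# Hypothetical Secret manifest. The data value is base64-encoded,
# NOT encrypted -- base64 is trivially reversible.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical name
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=   # base64 of "password123"
```

Unless your platform adds encryption at rest for etcd, or integrates an external secrets manager, this is the level of protection you get out of the box - worth checking before you commit.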
One of the most powerful features of Kubernetes is its ability to orchestrate the workload using declarative configuration. However, to fully utilise this capability your application must adhere to a few rules based loosely on the 12-factor-app checklist:

- Stateless processes: persistent state lives in a backing service, not inside the container
- Configuration supplied through the environment, not baked into the image
- Disposability: fast start-up and graceful shutdown, so instances can be replaced at will
- Logs written to stdout/stderr as event streams, rather than to local files
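To make "declarative configuration" concrete, here is a minimal sketch of a Deployment manifest: you declare the desired state (three replicas of one stateless container) and Kubernetes continuously reconciles the cluster towards it. All names and the image reference are hypothetical:

```yaml
# Minimal, hypothetical Deployment: Kubernetes keeps 3 replicas of
# this container running, replacing any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                          # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.org/web:1.0   # assumed image reference
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL          # config via environment variables
              value: "info"
```

Applied with `kubectl apply -f deployment.yaml`, the same file can be stored in version control and re-applied idempotently, which is what makes the declarative model so powerful.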
If you have applications that are somewhere close to this, you are just about ready to onboard.
There is no simple answer to this, as it depends on cost, what your organisation model looks like, and how many different teams, applications and services you have.
One thing to consider with a single-cluster-multiple-apps approach is that you will have to start thinking about namespaces, Role-Based Access Control (RBAC) and network segregation, as well as the operational cost this may add due to the additional tooling and policy definition. All of this sounds great, of course, but it is only achievable if you know what you are doing with it.
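To make the namespace and RBAC point concrete, here is a hedged sketch of per-team segregation - a dedicated namespace plus a Role granting read-only access to workloads within it. All names, including the identity-provider group, are hypothetical:

```yaml
# Hypothetical per-team namespace with read-only access to workloads.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workload-reader
  namespace: team-payments
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-devs-read
  namespace: team-payments
subjects:
  - kind: Group
    name: payments-developers   # assumed group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: workload-reader
  apiGroup: rbac.authorization.k8s.io
```

Multiply this by every team, environment and permission level and the operational cost mentioned above becomes apparent: these definitions need to be generated, reviewed and policed consistently.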
Going back to my earlier point about avoiding the use of Kubernetes unless you have trained engineers who can manage the platform - in addition to engineers who can manage BAU operations, you will also need Architects and Security professionals to assist you with your infrastructure. You should adopt a compliance framework and use tooling such as Airwalk Airview in order to mitigate any risks arising from unsolicited or malicious configuration changes.
Introducing guardrails as early as possible is key to a successful outcome. Starting an undertaking like this without them can, and almost always does, introduce risks that cannot be confidently remediated without a complete rebuild from scratch.
You have a capacity plan, you understand the financials, and you understand how to containerise your applications. But choosing a Kubernetes platform is not an easy decision. You might like to read another of our blogs - Selecting a Cloud Service Provider - before taking your next step.
Airwalk Reply has designed and deployed Kubernetes infrastructure, controls and pipelines for some of the world’s largest organisations. Contact us for more information about our Kubernetes consulting services, advice or assistance.