Kubernetes is better at packing suitcases – Servian
Source: idk.dev
We are facing social and economic turmoil caused by the COVID-19 pandemic. I say that as I sit at home in my family bubble down in New Zealand, staying at home to break the chain of transmission. We are blessed down here with strong leadership steering us through these turbulent waters; Jacinda Ardern is leading us impressively with her decisive action against this virus. Speaking of helmsmen, that brings me to one of my favourite topics, Kubernetes (the name is Greek for "helmsman"), and how Kubernetes can also help us right now.
At the moment much of the world is still in level 3 lockdown, which means revenue has changed for every industry other than essential services. We all need to find new ways to cut costs so that our businesses can continue to survive and thrive. I am aiming this article at people who currently run their IT services on VM-based infrastructure. They will have been busy over the last couple of weeks shutting down non-essential VMs outside of business hours and reducing their VM instance sizes to save money. There is another way to further reduce IT infrastructure costs: start running the workload in Kubernetes clusters, on-premise or in the cloud.
How Kubernetes helps optimise the use of infrastructure is not well known, I think because it has not been well explained, but I will try my best to add some clarity. I listen to the Google Kubernetes Podcast¹ to keep up to date with the rapidly moving world of Kubernetes. Back in July last year they took on the topic of Kubernetes economics, inviting Owen Rogers from 451 Research², who has a PhD in cloud economics. He explained it really well using a suitcase analogy³, which I will use here; in fact, large parts of what follows come directly from Owen's podcast transcript. Imagine you are packing your suitcase for a vacation after buying some new clothes, which come in fixed-size boxes straight from the store. Rather than unpacking the boxes, you pop them straight into the suitcase. There is lots of spare space in the boxes and in the suitcase, but you cannot use that space because everything is in a fixed-size box.
Now, taking your suitcase on the plane costs, let's say, $100. But because you could not pack in as much, having used these fixed-size boxes, you need to take two suitcases at a total cost of $200. Each of these fixed boxes is essentially a virtual machine. Each virtual machine carries significant overhead in the form of its own operating system, and that is why you get all this waste and have to take more suitcases: you cannot fit as much in one.
Now imagine you open the boxes from the store, take out all the clothes, and squeeze them into every nook and cranny wherever they fit, so that you only need to take one suitcase. That costs you only $100. Essentially, you have lowered your unit cost per item of clothing just by packing better.
Now, this concept represents a container. With less overhead due to less duplication, you can cram more in. And in theory, an application built using containers should cost less than using virtual machines, as long as you’re packing those containers into the suitcase of the server or the virtual machine in the correct way.
To get away from the abstract but useful suitcase analogy, let's consider a typical company workload: I run 20 EC2 instances and 4 RDS databases. My instances are sized to handle spikes, so my CPU and memory utilisation is generally very low. I have 24 boxes at fixed hourly charges, and my monthly AWS costs will be something like:
20 EC2 Instances at $100 per month = $2000
4 RDS Instances at $250 per month = $1000
If I spin up an AWS EKS cluster to handle this workload, it needs to run at the very least the same 24 instances, which in Kubernetes become pods. AWS charges a fixed fee for the Kubernetes control plane, and you then choose as many worker nodes as you need to run your workload (pods). The worker nodes are ordinary EC2 instances, and the number of pods you can run on each one is limited by AWS to the number of IP addresses allowed per EC2 instance. So we decide to spin up a single m5.xlarge worker node, which can run 58 pods and gives me plenty of CPU and memory too. This will cost me the following:
1 x m5.xlarge EC2 instance at $300 per month = $300
1 x EKS monthly charge to run the control plane = $65
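Where does that 58-pod figure come from? With the default AWS VPC CNI, each pod is assigned a secondary IP address, so the pod ceiling per node follows from the instance type's network limits. A minimal sketch of that calculation, using the AWS-published limits for m5.xlarge (4 network interfaces, 15 IPv4 addresses each):

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """AWS VPC CNI pod limit for an EC2 worker node.

    One IP on each ENI is the node's primary address, so pods get
    (ips_per_eni - 1) addresses per ENI; the +2 accounts for pods that
    use host networking (e.g. kube-proxy) and need no secondary IP.
    """
    return enis * (ips_per_eni - 1) + 2

# m5.xlarge: 4 ENIs, 15 IPv4 addresses per ENI
print(max_pods(4, 15))  # -> 58
```

This is why a larger instance type raises the pod ceiling: it lifts both the ENI count and the addresses per ENI, not just CPU and memory.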
That is quite a considerable saving in EC2 instance cost: I can pack all 24 workloads into one suitcase, with headroom for up to 58 pods, and save myself heaps of idle EC2 cost. The figures above are very rough, but I have done this for several clients and know this approach can cut 60–70% off existing EC2 cloud costs, particularly when the EC2 instances are heavily underutilised. I think it definitely proves that Kubernetes is better at packing suitcases.
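As a back-of-envelope check using the rough monthly figures above (ignoring the RDS portion, which is unchanged in this comparison):

```python
# Before: 20 EC2 instances at roughly $100/month each
vm_ec2_cost = 20 * 100

# After: one m5.xlarge worker node plus the EKS control-plane fee
eks_node_cost = 300
eks_control_plane = 65
eks_total = eks_node_cost + eks_control_plane

saving = (vm_ec2_cost - eks_total) / vm_ec2_cost
print(f"EC2: ${vm_ec2_cost}/mo, EKS: ${eks_total}/mo, saving: {saving:.0%}")
```

On these illustrative numbers the saving is over 80%; the 60–70% range quoted above is the more conservative figure seen in practice, where some headroom and redundancy (you would normally run more than one worker node) eat into the theoretical maximum.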
Realising the saving does rely on your team being able to containerise the workloads and deploy them into Kubernetes through your CI/CD pipelines. In my opinion, this is much easier with Kubernetes than with VMs, and it also opens the door to advanced release strategies like Blue/Green and Canary deployments. Simplifying your workload into containers makes it easier to run on your desktop, on-premise, and in the cloud, and lets the company run its workload on the cloud provider of its choice. Saving money is, of course, important, but there are many more reasons to choose Kubernetes; making your release a tagged image in a container registry makes your DevOps pipelines and releases much quicker and simpler.
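To make the "tagged image" idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest. All the names (my-app, registry.example.com, v1.2.3) are hypothetical placeholders; a release is then just bumping the image tag. Note the resource requests: they are what allows the Kubernetes scheduler to pack pods tightly onto nodes, i.e. to pack the suitcase.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        # the tagged release image from your container registry
        image: registry.example.com/my-app:v1.2.3
        resources:
          requests:           # right-sized requests let the scheduler bin-pack
            cpu: 100m
            memory: 128Mi
```

Your CI/CD pipeline builds and pushes the image, then updates this tag; rolling back is simply pointing the Deployment at the previous tag.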