The Pros and Cons of Kubernetes for HPC
Source: hpcwire.com
“To Kube or not to Kube?” That is the question now active in the HPC community.
If you work in IT, the rise of Kubernetes (K8s) has been hard to miss. Just five years after its initial release, Kubernetes has emerged as the new darling of open source, enjoying popularity and adoption second only to Linux. At the time of this writing, Kubernetes boasts over 80,000 code commits by approximately 2,200 separate developers.[1]
A Kubernetes primer
For readers not familiar with Kubernetes, it’s worth sharing a historical note. Kubernetes was originally developed by Google and was heavily influenced by their in-house container-oriented cluster manager, Borg.
Unlike the workload managers familiar to HPC users (IBM Spectrum LSF, SLURM, etc.), Kubernetes was built to orchestrate cloud-native applications composed of loosely coupled, containerized services. This style of software design, known as microservices architecture, is a preferred way of building scalable, modular application services.
Different problems yield different solutions
The types of applications that commonly run on Kubernetes are very different from HPC workloads. While HPC applications usually run to completion (think a financial simulation or genomics workflow) Kubernetes applications usually run continuously – for example, a web store or a distributed Redis cache.
To launch an application on Kubernetes, someone from the DevOps team constructs a YAML or JSON format file describing details about the application and how components are interconnected. This file provides details like Docker images, ports, resource requirements, volume mounts and autoscaling policies. Then it is submitted to Kubernetes to start and manage the “Pods” that comprise the application.
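As a concrete illustration, a minimal Kubernetes Deployment manifest for a long-running service might look like the sketch below. The application name, image, port, and resource figures are all hypothetical, chosen only to show where the details mentioned above (Docker image, ports, resource requirements, volume mounts) appear in the file:

```yaml
# Illustrative Deployment manifest — names and values are invented for this example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-store
spec:
  replicas: 3                          # run three Pods of this service
  selector:
    matchLabels:
      app: web-store
  template:
    metadata:
      labels:
        app: web-store
    spec:
      containers:
      - name: web
        image: example/web-store:1.0   # hypothetical Docker image
        ports:
        - containerPort: 8080          # port the container listens on
        resources:
          requests:
            cpu: "500m"                # resource requirements used for scheduling
            memory: 256Mi
        volumeMounts:
        - name: config
          mountPath: /etc/web-store    # volume mount inside the container
      volumes:
      - name: config
        configMap:
          name: web-store-config       # hypothetical ConfigMap holding settings
```

Submitting this file (e.g. with `kubectl apply -f`) asks Kubernetes to start and then continuously manage the Pods that make up the application.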
For deploying application services, Kubernetes is very powerful. It is much more than just a scheduler – it’s a complete runtime for containerized applications, providing services such as DNS, storage, secret management, support for rolling updates, auto-scaling and more.
HPC users are from Mars, K8s users are from Venus
When HPC and Kubernetes users talk workload management, they often talk past one another and both end up confused.
HPC users look for specific features in a workload manager. It’s not a given that HPC applications will be containerized or that they will even use the same container runtime. While some jobs may be long-running (Spark, TensorFlow, or large simulations as examples) workloads can vary widely and include parametric sweeps, MPI jobs, multi-step workflows, long-running services, etc. HPC users tend to need features such as backfill scheduling, advanced reservations, preemption, topology-aware scheduling, and capabilities that may be unfamiliar to Kubernetes users.
One thing that HPC and Kubernetes users agree on is the usefulness of containers – albeit for different reasons. While microservices architectures use minimalist containers for speed and modularity, HPC users are more interested in encapsulation and portability. HPC applications often involve multiple libraries and software components with complex dependencies. Rather than configure all this software, HPC users prefer to “stuff everything into a container” to hide complexity and simplify deployment. As a result, HPC containers are often large, with just a few CPU- and memory-hungry containers per host.
So, will the Borg take over the HPC universe?
Perhaps someday, but probably not anytime soon. There is too much sunk investment in HPC libraries, tools, and middleware. While HPC applications are commonly containerized, there’s little incentive to completely re-architect them for Kubernetes.
One exception is in financial services, where cloud-native techniques are being embraced alongside traditional HPC. Developers value the continuous integration/continuous deployment (CI/CD) features in Kubernetes to enhance and share risk models and other application services continuously.
Kubernetes and HPC workload managers also intersect in areas such as data analytics and AI. A deep learning environment might use an HPC scheduler for model training because of its superior GPU-aware scheduling and workflow automation features, but use Kubernetes to deploy trained models for scalable inference in the cloud.
Kubernetes and HPC Converge – The best of both worlds
For users who need the power of Kubernetes but also need HPC-specific features, there are other solutions on the horizon. Kube-batch is an open-source effort used by Kubeflow and other projects to make it easier to run more complex workloads on Kubernetes.[2,3]
An even better solution may be to run a full-featured HPC scheduler alongside the native Kubernetes scheduler.
IBM offers such a solution on IBM Cloud Private (ICP). Rather than deploying separate clusters for K8s and non-K8s applications, IBM Spectrum LSF can be deployed as a containerized service directly on ICP. Once deployed, a wealth of new scheduling features become accessible to Kubernetes users. All Kubernetes users need to do is select LSF as the preferred scheduler in their YAML file. Kubernetes applications can take advantage of advanced LSF scheduling capabilities:
Managing SLAs between different users, groups and projects
Allocating resources based on hierarchical sharing policies
Reserving resources for workflows that run periodically
Managing job dependencies and multi-step workflows
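Selecting an alternate scheduler is done through the standard Kubernetes `schedulerName` field in the Pod spec. A minimal sketch is shown below; the scheduler name `lsf` and the image are assumptions for illustration – the actual value depends on how the LSF scheduler service is registered in the cluster:

```yaml
# Illustrative Pod spec handing scheduling to LSF instead of the default scheduler.
# The schedulerName value "lsf" is an assumed name, not a documented constant.
apiVersion: v1
kind: Pod
metadata:
  name: risk-model-run
spec:
  schedulerName: lsf                    # route this Pod to the LSF scheduler
  restartPolicy: Never                  # run-to-completion, HPC-style
  containers:
  - name: sim
    image: example/risk-model:1.0       # hypothetical containerized HPC workload
    resources:
      requests:
        cpu: "4"
        memory: 8Gi
```

Pods without this field continue to be placed by the native Kubernetes scheduler, so both styles of workload can share one cluster.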
For HPC users dipping their toes into cloud-native waters, or for Kubernetes users who need richer scheduling features, IBM Spectrum LSF on ICP provides a practical strategy for coexistence. Users can embrace the rich capabilities of Kubernetes while still having access to HPC-oriented scheduling features, enabling them to evolve applications at their own pace in a single shared environment. IBM also offers the hosted Cloud Kubernetes Service (IKS).