What is Kubernetes? Everything your business needs to know
Source: zdnet.com
The evolutionary path forward for virtual infrastructure in the world’s data centers is narrowing to a single lane. Historically that’s been bad news, because it used to mean vendor lock-in. That’s not what it means this time.
What is Kubernetes?
The definition of Kubernetes keeps changing because as it keeps growing, Kubernetes changes the world around it. Here now is the Fall 2019 edition: Kubernetes is a workload distribution and orchestration mechanism for clustered servers in a data center, ensuring resource availability, accessibility, and balanced execution for multiple services concurrently.
In this scheme, Kubernetes enables any number of servers, of many kinds and separated by any distance, to share workloads for a common tenant. It then presents those workloads to clients as services — meaning a client system can contact them through the network, pass some data to them, and, after a moment or two of waiting, collect a response.
This is distributed data processing, which used to take place in the confines of a single, monolithic application. Kubernetes exposes this entire process to observability and manageability.
In managing these services, Kubernetes changes the layout of the network as necessary. As more clients make requests of a certain type or class of workload, the orchestrator makes more replicas of it available; as requests subside, it reduces the number of replicas. This is the process that brought Kubernetes to fame, which IT operators call scaling out and scaling back, respectively. When services are subdivided into individual functions, or microservices, that contact each other through the network instead of through the memory and processor they would otherwise share, Kubernetes can scale individual microservices out and back as demand rises and falls, just as if they were complete applications.
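To make that scaling mechanism concrete, here is a minimal sketch of how it can be expressed declaratively. The service name and figures are hypothetical; this simply asks Kubernetes to keep a "checkout" service between two and ten replicas, adding or removing copies as CPU demand changes.

```yaml
# Hypothetical autoscaling rule: scale a "checkout" Deployment out and
# back between 2 and 10 replicas, based on average CPU utilization.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout            # assumed name of the service's Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```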
The business case for Kubernetes
The information technology platforms upon which our businesses, our systems of commerce, and sizable portions of our society are built are showing their age. Replacing them is an issue of benefits outweighing costs. As any telecommunications service provider grappling with the 5G wireless transition will attest, it's difficult for an organization to justify the cost of replacing its entire infrastructure unless its near-term business model can come close to guaranteeing profitability.
Kubernetes is not only making the argument for business profitability but is racking up test cases to prove itself. In its favor is a growing body of evidence that the ever-increasing costs organizations pay to keep up their existing infrastructures are becoming less and less justified:
- The cloud is based on first-generation virtualization, which is being rendered obsolete and perhaps, in due course, irrelevant. An image of the software that would normally have been installed on a server's main hard drive is rendered in the memory and storage of a remote server, so that software can run there the way it always has. But there is no longer any need for software to run the way it always has. The business case for continuing to produce monolithic applications has evaporated, even in the case of massively multiplayer online games whose underlying, proprietary platforms are the exclusive domains of their manufacturers.
- The Internet is mapped using a domain name system that associates addresses with their registered owners rather than with the functions and services being used. Service meshes are overlaying those maps with more relevant ones, enabling distributed applications to find each other across vastly dispersed networks. And these service meshes are bound tightly to Kubernetes, providing the system's second most relevant service after workload orchestration.
- Mobile devices are dependent upon mobile apps that distribute “smart” functionality to the client-side, mainly to minimize the information exchanged between clients and servers. With wireless bandwidth no longer a premium commodity, it may become more practical and cost-effective to shift that functionality back to the server-side, enabling a new class of devices that are significantly “dumber” than their predecessors — albeit with really great cameras — yet accomplish the same tasks at conceivably greater speeds.
- Public cloud data centers are massive, “hyperscale” facilities that service tens of thousands of tenants simultaneously, oftentimes from distances several hundreds of miles away. With more highly distributable computing, it may become more practical and more desirable to have greater numbers of much smaller data centers, scattered in closer proximity to their users.
- Artificial intelligence occupies the upper class of software, mainly because of its relatively high cost in memory, storage, and other resources. Using distributed service models composed of myriad containers, each with a much smaller footprint, AI may become far more commonplace, to the extent that software that draws better inferences (e.g., "Look out for that tree 30 yards away!") won't be called "smart" so much as "standard operating equipment."
- Containerization makes business software easier to manage. In the context of server-based computing, a container is a package that enables workloads to be virtualized (portable, self-contained, running in isolation) while still hosted by an operating system (as opposed to a hypervisor). Modern applications are made portable among servers by containerizing them, which is not just about packaging but about deployment. In a containerized environment, the code for software is retrieved or "pulled" from repositories (some public, others private), then immediately deployed and run in the production environment, as sketched just after this list. This automated deployment method enables software to be improved not just every eighteen months or so, but potentially every day, and not just by its originators but by its users as well. In turn, this dramatically improves data center system integrity as well as security.
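As a point of reference, here is a minimal, hypothetical sketch of what such a containerized workload looks like to Kubernetes. The image name and registry are invented for illustration; the point is that the package is pulled from a repository and run in isolation on whichever server the orchestrator chooses, with nothing installed on that server beforehand.

```yaml
# Hypothetical example: one containerized workload, pulled from a
# registry and run in isolation on a node chosen by the scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: invoice-worker
spec:
  containers:
  - name: worker
    image: registry.example.com/billing/invoice-worker:1.4.2  # assumed image
    imagePullPolicy: IfNotPresent    # pull from the registry only if not cached
```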
What “orchestration” means
Orchestration is the effective management and execution of multiple workloads cohabiting an IT platform. In Kubernetes’ case, certain workloads may arrive on the platform having already been subdivided into microservices. They still work together but as independent units. Kubernetes orchestration enables those units to be multiplied and redistributed as necessary and phased out when no longer in use.
LIKE THE CONDUCTOR OF AN ORCHESTRA?
Wrong analogy. A conductor ensures that a piece is executed in the proper time and rhythm. In the data center, the operating system continues to play that role — Kubernetes does not change this. An orchestrator coordinates the execution of all the parts in the composition for maximum efficiency and smooth performance, so one part cannot drown out the other, and all the parts play their contributing roles effectively. Because these parts may be distributed widely among several locations, an orchestrator also assembles all the resources that parts may require to contribute to the same task at hand.
CONTRASTING AN ORCHESTRATOR WITH AN OPERATING SYSTEM
An operating system on a computer, among other things, makes it feasible for a program to be executed by its processor safely and as expected. Kubernetes fulfills that role for multiple workloads simultaneously, distributed among a plurality of servers in a cluster.
This is not to say Kubernetes is an operating system that’s scaled up. The OS still plays the role of marshaling the execution of each program. And in a containerized environment (at least, its native environment as it was originally designed) each container’s host is not the hypervisor, as it is with vSphere or KVM, but rather the OS.
In one respect, though, what an operating system is to a single computer, an orchestrator is to a cluster of servers: It oversees the execution of software in a system whose infrastructure resources — its processing power, memory, storage, and networking facilities — have all been merged. Kubernetes settled the matter of which orchestrator the data center would prefer in an extremely brief period, like the allied troops who liberated Kuwait. Like Operation Desert Storm, Kubernetes had a simple strategy that was swiftly executed.
WHERE DOES ALL THE SOFTWARE GO?
In the modern data center, software does not need to be “installed” on a computer. Rather, it’s more like a book that’s borrowed from a library, only one that is capable of publishing the book before it’s loaned out. In the containerization realm, this library is called a registry. Open-source packages loaned from a registry come in fully assembled containers. The act of making an application or service available via a registry for introduction into a Kubernetes-managed environment is called deployment. So when we talk about “deploying workloads,” we’re referring to the act of preparing software for delivery to a server cluster, where it is managed and orchestrated.
Kubernetes is built to retrieve workload packages from registries, queue them for deployment in the system, manage their distribution among the clusters it oversees, and govern their access to resources made available through those clusters. A minimal sketch of such a deployment follows.
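This is a hedged, hypothetical illustration of that flow: a Deployment object that asks Kubernetes to pull a packaged workload from a private registry (using credentials assumed to have been stored in the cluster beforehand) and keep three replicas of it running. All of the names are invented.

```yaml
# Hypothetical Deployment: pull a workload package from a private
# registry and keep three replicas of it running across the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: catalog-api
  template:
    metadata:
      labels:
        app: catalog-api
    spec:
      imagePullSecrets:
      - name: registry-credentials          # assumed to exist as a Secret
      containers:
      - name: catalog-api
        image: registry.example.com/shop/catalog-api:2.0.1   # assumed image
```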
WHY IS CONTAINERIZATION SO IMPORTANT IF IT HAS SUCH A LOUSY NAME?
Containerization is the trend officially started by Docker Inc., then propelled into warp speed by Google, and now joined by most everyone else in the platform space, including Microsoft and VMware. It was an esoteric aspect of data center management, we were told four years ago, that would go unnoticed by the everyday user. Yet every viewer of Netflix and Amazon Prime, and every user of Alexa and Siri, has felt this impact first-hand, even if she wasn’t capable of identifying its source. Shifting the focus of data center management from machines to workloads revolutionized the way applications and services are delivered.
Rather than “containerization,” which sounds like a way to industrialize a Tupperware party, it could be called “the workload revolution.” Networks are now being routed towards functions rather than towards machines. It’s difficult to see the importance of this idea in practice without a sufficient, real-world analogy: How many telephone numbers do you recall off the top of your head? Are there greater or fewer patterns of digits in your mind, now that smartphones have contact lists and can respond to your voice?
WHAT’S ALL THIS “WORKLOAD” BUSINESS?
A program that runs on a computer is still "software," invoking a term that engineers coined as a pun in computing's early days. And an application is still a program designed to be operated by multiple users and referred to by name.
By comparison, a "workload" is a bit fuzzier. It's composed of one or more pieces of software. It may use a database, though it could be the same database that other workloads are using. It may comprise more than one package in a registry, assembled on the fly and sharing functionality within a cluster. But it typically has one principal purpose and is capable of operating as one cohesive unit, even if it has any number of component parts.
Software developers typically don't sit down at their desks and compose workloads. They still write programs. But in the process of deploying containers assembled around those programs, the instructions given to an orchestrator such as Kubernetes end up declaring the working parameters of an active workload. So in the act of deployment, the software becomes a workload. Its effects on the resource consumption of a data center can be measured and mitigated, just as a workload in the everyday realm of people and things can be measured and mitigated for employees.
ALL RIGHT, THEN, WHAT’S A “SERVICE?”
A service in the modern data center is a very different thing from an application. That might not seem sensible, because applications are often described as performing useful services. But architecturally speaking, a service is software that, given specified inputs and pointed to relevant data, produces a predictable set of outputs. Databases are often queried using services.
An application provides its user with an environment (usually a visual one) in which services may be put to use. A service need not concern itself with that function.
Today, most orchestrated, containerized programs are services. They may perform the most important business of an application, but they are independent units. Microservices are self-contained, individual, self-reliant services that tend to be small (although recently, software architects have argued, they don’t have to be). An orchestrator can invoke (or “instantiate”) as many clones of microservices as may be necessary, or allowed, to respond to requests being directed to them.
An API (originally short for “Application Program Interface”) is a set of services with a specified communication protocol. In networked computing, an API is designed to be contacted remotely, usually by a Web browser, using a URL crafted to relay a command or statement to the receiving server. That command may also upload a data package along the way. The responder to that command is a service. Kubernetes’ forte is orchestrating services.
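In Kubernetes terms, this routing of requests to whichever pods currently implement a service is expressed with a Service object. The following is a minimal, hypothetical sketch: a stable name and port for an "orders" microservice, behind which the orchestrator can add or remove replicas freely.

```yaml
# Hypothetical Service: a stable name and port for the "orders"
# microservice; traffic is routed to whichever pods carry the label.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders          # any pod with this label answers requests
  ports:
  - port: 80             # port clients address
    targetPort: 8080     # port the containers actually listen on
```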
Yes, a service is a type of workload. Perhaps the most prominent example of modern service architecture is the so-called serverless function. It’s called this because its source — the server or server cluster that hosts it — does not have to be addressed by name, or even indirectly, to be invoked by another service or by its user. Instead, those details are filled in on the requester’s behalf, with the result being that the user of that function can pretend that it exists locally on the client. Like the contacts list on your smartphone, it leads you into thinking that numbers have become irrelevant.
The components of Kubernetes
You may have noticed that, up to this point, this article has managed to describe what Kubernetes does with barely a mention of the word "container," or the even less self-explanatory word "containerization." The decoupling of the orchestrator from containers is one of the most unanticipated recent changes to Kubernetes' scope. In a few moments, you'll understand why.
One of the key objectives of orchestration is to make things available in a network. Up to now, we've mainly been calling these things "containers," although we've noted that, since its beginnings, Kubernetes has referred to the entities it coordinates and orchestrates as pods. In this context, the term "pod" has been defined simply as "a group of containers."
PODS AND RESOURCES ON THE CONTROL PLANE
Each server (physical or virtual) in a Kubernetes cluster is called a node. If it hosts some aspect of Kubernetes and is addressable through the network maintained by the orchestrator, it’s a node. There is a master node and any number of worker nodes (sometimes called minions). The network of components responsible for controlling the system is separate from all other networks, to form the control plane. On this exclusive plane, you’ll find three components:
- The API server (kube-apiserver), which validates and processes all requests made to the cluster's API, whether they originate from administrators, from other control plane components, or from services running inside pods.
- The controller manager (kube-controller-manager). The individual components of Kubernetes that have direct responsibility for managing some resources within the system are called controllers. Provisioning a job for a pod-based service to undertake, for instance, is a task for the job controller. Here is where things get interesting: Kubernetes may be extended through the addition of further controllers, making it the orchestrator of things other than just containers.
- The scheduler (kube-scheduler), which is not so much about time as about the placement of workloads. When a pod is provisioned, the scheduler delegates it to the worker node best suited to handle it, given its current state of availability.
Controllers are located inside the Kubernetes control plane. For the ones that are shipped with Kubernetes, their principal function is to monitor the state of resources on the network infrastructure, in search of any changes. It takes an event — a signal of such a change — to trigger an evaluative function that determines how best to respond. The class of service that may be delegated the task of responding is an operator. To make it feasible for the orchestrator to automate more complex systems, a service architect would add controllers to the control plane to make decisions, and operators on the back end to act on those decisions.
CUSTOM RESOURCES
It's the extensibility of this controller scheme which may, in the end, be the masterstroke that cements Kubernetes' position in the data center. Through an architectural addition called custom resource definitions (CRDs), Kubernetes can orchestrate things other than containers. Put another way, if you can craft a controller that effectively teaches Kubernetes to recognize something else as an orchestrated resource, it will do so (a minimal sketch of such a definition follows the list below). What are we talking about here — what could the "something else" be?
- Virtual machines (VM) — The classic, hypervisor-driven entities which support a majority of the world’s enterprise workloads. VMware, whose vSphere platform is the predominant commercial leader in VM management, has already begun a project to make Kubernetes its principal VM orchestrator.
- Massive databases whose engines and control jobs have in recent years moved to dedicated systems such as Hadoop and Apache Spark — and which could conceivably move off those platforms if developers become free once again to write workloads using languages other than a select few, such as Java, Scala, and R.
- High-performance computing (HPC) workloads for supercomputers, which have historically been governed by dedicated schedulers such as Slurm and, more recently, Apache Mesos. Their virtue in the data center as time-oriented scheduling agents is now being called into question as Kubernetes approaches near-ubiquity.
- Machine learning models, which require large data volumes with parallel access, as well as deterministic scheduling. You might think these factors alone would disqualify Kubernetes as the orchestrator or infrastructure facilitator, but there are projects such as Kubeflow where the database providers and schedulers that do provide these features, are themselves provisioned by Kubernetes.
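As promised above, here is a hedged sketch of what such an extension looks like. This hypothetical custom resource definition teaches the API server about a new kind of object, a "VirtualMachine," which a matching custom controller (not shown) could then watch and orchestrate. Every name here is invented for illustration.

```yaml
# Hypothetical CRD: register "VirtualMachine" as a new resource type
# that the Kubernetes API server will accept and store. A separate,
# custom controller would watch these objects and act on them.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.example.com
spec:
  group: example.com
  versions:
  - name: v1alpha1
    served: true          # expose this version through the API
    storage: true         # persist objects in this version
  scope: Namespaced
  names:
    plural: virtualmachines
    singular: virtualmachine
    kind: VirtualMachine
```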
OBJECTS ON THE DATA PLANE
All these classes of workload-bearing entities that get collected into pods, plus whatever else Kubernetes may end up orchestrating in the future, become objects, for lack of a better word.
What explains one of these objects to the orchestrator is a file that serves as its identity papers, called a manifest. It's an element of code, written in a language called YAML, which declares the resources the object expects to use. This is where the controller is capable of previewing how much fuel, if you will, the object will consume — how much storage, which classes of databases, which ports on the network. The controller makes a best-effort attempt at meeting these requirements, knowing that if the cluster is already overburdened, it may have to do the best it can with what it's got.
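Here is a minimal, hypothetical manifest of the kind described above. It declares, in YAML, roughly how much fuel this particular workload expects to burn and which network port it needs, so the scheduler can place it on a node that can actually accommodate it. The image name and figures are assumptions chosen for illustration.

```yaml
# Hypothetical manifest: the "identity papers" for one workload,
# declaring the resources it expects to consume.
apiVersion: v1
kind: Pod
metadata:
  name: report-generator
spec:
  containers:
  - name: report-generator
    image: registry.example.com/finance/report-generator:0.9.0  # assumed image
    ports:
    - containerPort: 8443      # network port the workload exposes
    resources:
      requests:                # what the scheduler reserves up front
        cpu: "500m"            # half a processor core
        memory: 256Mi
      limits:                  # the ceiling the workload may not exceed
        cpu: "1"
        memory: 512Mi
```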
On each worker node is a remote agent called the kubelet, which receives instructions from the control plane and manages the node's pods and their components. In a conventional, container-based system, it's the kubelet that spawns processes for the container engine. This is where Docker used to have a reserved place at Kubernetes' table — it used to be the de facto exclusive provider of container engines. It even created a universal runtime called runC (pronounced "run · see") and released it to the open-source community. Now the Kubernetes project has spawned its own alternative, called CRI-O ("cry · O," though occasionally said like "Creole"), which is the preferred container engine of Kubernetes-based platforms such as Red Hat OpenShift.
Kubernetes’ vanquished competitors
Before much of the tech press came to the collective realization that the server cluster orchestration space was the hottest battleground in the modern data center, the battle was already over. Commodity markets rarely tolerate competing standards for very long. It’s why there is one HTML, one Facebook, and one Kubernetes.
DOCKER SWARM
Docker Inc., the company whose engineers were responsible for triggering the container revolution, established a business philosophy early on based on commercializing large-scale deployment, security, and support for a free and open-source core. Revenue would come from attachments and reinforcements of the Docker core that were Docker-branded but for which substitutes were commonly available, under a business model it called "Batteries Included but Replaceable." If containers were to become ubiquitous, Docker's leaders maintained, the company should hold no claim, intellectual or otherwise, over the thing that makes containers a commodity. A broader market would, the company believed, lead to greater numbers of willing customers.
To that end, in 2015, Docker backed the creation of the Open Container Initiative (OCI, originally either the Open Container Project or the Open Container Foundation, though for a multitude of reasons soon requiring a name change), under the auspices of the Linux Foundation. In making the announcement during a company conference, then-CTO Solomon Hykes told his audience he did not like standards wars, which he described as arguing over "details like the size and shape of the box." For that reason, among others, Hykes announced the replacement of the runtime component of Docker containers — the part that actually executes them on a host — with runC.
In the very same week, many of the same founding members of the OCI announced the establishment of the Cloud Native Computing Foundation, another project of the Linux Foundation. Ostensibly, the CNCF’s mandate would be to promote and advance the use of open-source application deployment technologies. The first project CNCF would steward, beginning the following March, would be Kubernetes, a project that originated at Google.
Meanwhile, after a few experiments with less versatile and, on occasion, awkward attempts at deployment platforms, Swarm became Docker's orchestrator. By most accounts, Swarm was a worthy contender. Admins said it had a much less daunting learning curve. Its overlay networking model, which divided inter-container traffic from host traffic, each in its own plane with a bridge between them, was perceived as clever, especially compared to Kubernetes' flat network overlay model. In a multi-cloud deployment model, a Swarm container cluster could be delegated to a slower public cloud, while traffic on the control plane could be more closely contained on a lower-latency cluster. In terms of performance and manageability, experts were slow to choose favorites.
If performance alone determined the outcome of technology battles, Sun Microsystems would have conquered the desktop long ago, and we’d all be talking about it on our BlackBerrys.
CNCF made it its mission to advance and promote the widespread deployment of an entire open source ecosystem, including performance monitoring, service discovery, data volume management, and security, all centered around one workload deployment engine. Docker had already begun to launch its extensibility model, but immediately got entangled in the esoteric, philosophical quagmire over whether extensible architecture violated a certain dogma of application design called “statelessness.”
At the same time, although Kubernetes had been cast as a purely vendor-agnostic platform, Google put its full weight and muscle behind marketing it during those early days, tailoring a theme and a consistent pitch to both consumers and journalists while weaving the Kubernetes name into its branding. Throughout 2017, enterprises evaluating Kubernetes perceived it as a Google product. When the legalities and formalities were explained to them, many waved them off, saying none of it mattered as long as the final result was something called Google Kubernetes Engine. During more than a few conversations I participated in, IT admins and other expert enterprise practitioners told me, if it comes down to Google vs. Docker, what the Sam Hill is Docker?
Yet Google could not maintain the appearance of sole defender of the orchestration faith for long. Way back in 2015, Red Hat made the momentous decision to replace the engine of its OpenShift container deployment platform with Kubernetes. By 2017, that decision was paying off for it in buckets. Red Hat had become a top-tier Kubernetes contributor. Up north, Microsoft, in an effort to stave off the possibility of being shut out of another wave of change in the enterprise, hired two of the open-source community's most visible engineers: Gabe Monroy, who was co-founder and CTO of Deis, a key player in the Kubernetes ecosystem for building and deploying containerized applications (a bulwark Docker had hoped to defend for itself); and Brendan Burns, one of the Google engineers who created Kubernetes itself, drawing on Google's internal Borg system [PDF]. This time, Microsoft would not hide its newest hires in the back closet of some research division side project. They took the lead in remaking a significant chunk of Azure in Kubernetes' image.
The dam was breaking, and in several places at once.
APACHE MESOS
The established leader in workload scheduling for distributed server clusters was Apache Mesos. It popularized the master/worker architecture (although Mesos used a different word for "worker"), and was one of the first schedulers to be extended into a private PaaS platform, called Marathon. Mesos' first major deployment was at Twitter, where Ben Hindman was an engineer. In 2013, Hindman left to found Mesos' premier commercial vendor, Mesosphere. Working with Microsoft, Mesosphere produced one of the first public cloud-based PaaS offerings to enable orchestrated, hybridized deployments: DC/OS, which looked as though it would become the choice workload deployment platform for Azure. Mesos had the virtue of several years of deployment experience, so it was a platform that not everyone had to fathom from the beginning.
But the incumbent Mesos could not escape the effects of an insurgent challenger with a full head of steam. In August 2017, VMware leveraged the resources of its sister company, Pivotal, to launch a cloud-based Kubernetes platform called Pivotal Container Service, with an automated deployment mechanism called Kubo that came up through the ranks from Cloud Foundry. Soon, Azure followed suit, effectively back-burnering its DC/OS project. Then in June 2018, the stalwart Amazon surrendered its defensive position, opening up its Kubernetes deployment platform. And finally, few believe that IBM’s acquisition of Red Hat, which closed last July, was about IBM needing a better Linux distribution. OpenShift had already paved routes into the distributed data center that IBM found it no longer needed to pave again.
The defeat was so complete that Mesosphere could no longer do business with that name, rechristening itself D2IQ last August, and vowing to establish a “Ksphere” of its own. And in early October, Docker suggested that its users try running Kubernetes and Swarm side-by-side. “New users find it much easier to understand Docker Swarm,” its company blog post read. “However, Kubernetes has evolved to add a lot of functionality.”
Where Kubernetes goes from here
Up to now, much of the discussion about data center re-architecture has centered around the topic of migrating old workloads to new models. Applications as we have come to know them have been called "monoliths" because, like the mysterious object in the movie "2001," they're singular, practically solid, and just as inexplicable after four hours in the theater as they were at the outset. They're composed of code that only their creator knows how to change.
Moving to Kubernetes has been described as a process of migrating monoliths. Some have said this can only be done by rebuilding microservices networks that behave like their monolithic predecessors but replace them entirely. Others say it's possible to wrap an API around a monolithic service and distribute that API through a network in a microservices fashion; this would be easier to do, and would not involve so much effort replicating functionality that businesses already own. A minimal sketch of that second approach follows.
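For illustration, here is one hedged way the "wrap an API around the monolith" option can be expressed in Kubernetes today: a Service with no pod selector, paired with an Endpoints object pointing at the existing monolithic system, lets in-cluster workloads call the legacy API by name as though it were a native service. The names and address below are hypothetical.

```yaml
# Hypothetical sketch: expose an existing monolith to the cluster as a
# named service, without migrating or containerizing the monolith itself.
apiVersion: v1
kind: Service
metadata:
  name: legacy-billing
spec:
  ports:                        # no selector: endpoints are supplied manually below
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-billing          # must match the Service name
subsets:
- addresses:
  - ip: 10.0.12.34              # assumed address of the monolithic system
  ports:
  - port: 8080                  # port on which the monolith's API listens
```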
Now, thanks to Kubernetes’ CRD, to paraphrase Arlo Guthrie, there is a third possibility that no one even counted upon: Kubernetes itself can migrate to meet the needs of existing workloads. Being perhaps the world’s most active open-source software project, Kubernetes is being maintained by literally hundreds of expert engineers who could assist businesses in devising or adapting the controllers and operators they would need to automate their software supply chains.
The people who created Kubernetes said a few years ago there would be a time when their creation became so much a part of everyone’s data centers, that they’d be boring and no one would read an article about it. From what I’m witnessing, that day is still at least several years away.