Five Surprising Ways Enterprises Are Putting Kubernetes To Work
Source: forbes.com
About five years have passed since Google handed over control of Kubernetes, its open-source container management platform, to the Cloud Native Computing Foundation. Since then, Kubernetes has quickly become the leading container orchestration solution on the market. A 2019 survey that our company conducted with Aqua Security revealed 87% of respondents were running applications in containers, with nearly 90% of those running containers in production.
While Kubernetes has become the de facto standard for orchestrating applications in containers, the way it’s being used is also expanding as developers, vendors and enterprise users extend the technology to serve other important functions in the stack. This rapid innovation is one of the key benefits of a thriving, open-source ecosystem, and having a single technology platform that addresses a broad range of needs greatly simplifies IT management.
Here are five surprising ways enterprises are leveraging Kubernetes and providing insight as to how the platform might evolve in the near future.
1. On-premises use as well as in the cloud
For some enterprises, entrusting sensitive data or IP to a public cloud provider is simply not an option. Others have on-premises capacity that they want to keep using.
However, none of that means they can’t use Kubernetes and containerized applications, even though both are most closely associated with the cloud. Kubernetes enables IT teams to quickly build, ship and scale applications across both public cloud and on-premises environments. Cloud-native applications better support digital transformation initiatives, and the underlying technologies work just as well in private cloud environments as they do in the public cloud.
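This portability is visible in the manifests themselves: the same declarative spec can be applied to any conformant cluster, whether it runs in a public cloud or in a company's own data center. The sketch below is a minimal Deployment; the image name and replica count are illustrative.

```yaml
# A minimal Deployment manifest. The identical spec applies unchanged to any
# conformant Kubernetes cluster, public cloud or on-premises; only the
# cluster context passed to kubectl differs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.0  # illustrative image
        ports:
        - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` works the same way against either environment, which is what makes the hybrid story practical.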
2. Deploying stateful apps as easily as stateless apps
In the early days, containers were used primarily for stateless applications, where data generated in one session did not carry over into the next. This wasn’t very realistic for enterprises that relied heavily on stateful applications for their traditional, mission-critical business needs.
Kubernetes and the container ecosystem have since expanded to deploy stateful applications on top of various database types and other enterprise infrastructure. IDC found (via Container Journal) that 76% of enterprises can now use containers broadly for mission-critical applications—something once impossible without support for stateful apps.
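The key building block for this in core Kubernetes is the StatefulSet, which gives each pod a stable identity and its own persistent volume that survives rescheduling. A minimal sketch, with illustrative names, image and storage size:

```yaml
# A sketch of a StatefulSet for a replicated database. Each pod gets a
# stable name (db-0, db-1, ...) and a dedicated PersistentVolumeClaim
# created from the template below, so its data outlives the pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi  # illustrative size
```

Because the claim template binds storage to each replica individually, the database keeps its state across restarts and node failures, which is what makes containers viable for mission-critical workloads.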
3. Full control over app lifecycle management
The lifecycle of an application is reliant upon the lifecycle of its data. How you manage that data from start to finish determines the longevity of an application. Kubernetes management supports applications (and their data) via extensions that have been added for high availability, disaster recovery, backup and compliance, giving applications consistent, reliable access to their data throughout that lifecycle.
Using the same technology, Kubernetes, for both app deployment and app lifecycle management yields greater efficiency. Take banking as an example: traditionally, account data is backed up in a machine-centric way (i.e., retrieving a backup recovers the entire machine image, including every other application's data). With Kubernetes-based management, a backup request can recover data from a specific month, day or even minute, which is far more efficient and demands fewer resources.
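One standard mechanism behind this kind of application-granular, point-in-time protection is the Kubernetes snapshot API, which captures a single application's volume rather than a whole machine image. A sketch, assuming a CSI driver with snapshot support is installed; the class and claim names are illustrative:

```yaml
# A point-in-time snapshot of one application's data volume, taken through
# the standard Kubernetes snapshot API. Only the named claim is captured,
# not an entire machine image, so restores are targeted and cheap.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-2020-01-15        # illustrative, named for its capture date
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative class name
  source:
    persistentVolumeClaimName: data-db-0   # the claim to snapshot
```

Restoring then means creating a new PersistentVolumeClaim whose `dataSource` points at the chosen snapshot, recovering exactly the data from that moment.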
4. A new control plane for infrastructure (in the data center and cloud)
Kubernetes was born as a way to orchestrate apps across the data center and in the cloud, but it’s increasingly being used to manage the underlying storage, network and compute as well. Two specifications associated with the Cloud Native Computing Foundation have made this possible: the Container Storage Interface (CSI) and the Container Network Interface (CNI). These advances let the infrastructure layer keep pace with the speed of container modernization in the application layer. It makes sense to have the same technology platform do both.
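On the storage side, this control-plane role shows up as a StorageClass that delegates volume provisioning to a CSI driver: Kubernetes receives the declarative request and the driver talks to the actual storage hardware or service. A sketch with an illustrative, driver-specific provisioner name and parameters:

```yaml
# A StorageClass that hands volume provisioning to a CSI driver, making
# Kubernetes the control plane for storage. The provisioner name and the
# parameters block are illustrative; real values depend on the driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.example.com   # hypothetical CSI driver
parameters:
  type: ssd
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Any PersistentVolumeClaim that names this class gets a volume provisioned on demand, with no out-of-band storage tickets, which is the same self-service model applications already enjoy.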
5. A rapid enabler for AI
Kubernetes is well suited for AI because it can automate the exploration, training and deployment of new algorithms and applications for AI and machine learning. This can be seen in new software and service offerings from big vendors such as Google and HPE that enable AI in containers.
Kubernetes allows data scientists to work within an isolated environment on different datasets without tying up IT resources or contaminating shared spaces. Once an AI model is built, Kubernetes supports the independent scaling of compute as data scientists work to incrementally improve accuracy. When an AI application is ready for deployment, Kubernetes streamlines the rollout of each containerized application as microservices, accelerating release times and allowing for reuse in other applications.
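The isolation described above can be expressed with built-in primitives: a dedicated namespace per team plus a ResourceQuota that caps what experiments may consume, so exploratory work cannot tie up shared cluster resources. A minimal sketch; the names and limits are illustrative, and the GPU line assumes the NVIDIA device plugin is installed:

```yaml
# An isolated sandbox for a data-science team: its own namespace, with a
# ResourceQuota capping total CPU, memory and GPU requests so experiments
# cannot starve other tenants of the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: ml-experiments
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ml-quota
  namespace: ml-experiments
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 64Gi
    requests.nvidia.com/gpu: "4"   # assumes the NVIDIA device plugin
```

Training jobs scheduled in this namespace can scale up to the quota and no further, while production workloads in other namespaces remain unaffected.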
It’s been exciting to see this expansion of Kubernetes to address critical enterprise IT needs. The same benefits of speed, flexibility and scale that containers brought to application development are quickly being extended to infrastructure management and other areas.
Having a single technology platform that meets these needs simplifies IT management and ensures that infrastructure developments can keep pace with the rapid changes occurring at an application layer. Kubernetes is at an exciting point in its development, and I look forward to seeing what new challenges the open-source orchestration platform will tackle next.