Building the hybrid serverless multiclouds of the future
Source: siliconangle.com
Developers frequently compose solutions as patterns that hybridize different approaches to developing, hosting and managing application resources.
Any solution pattern may be hybridized, with the approach usually contingent on the developer having access to an abstraction layer designed to enable the complex application environment of interest. In cloud computing, we increasingly see hybridization in the form of public and private clouds interoperating to handle transactions, analytics, integration and other complex workloads.
At the platform-as-a-service level, we also see cloud-native hybrids that federate two or more Kubernetes clusters running on distinct clouds or application platforms. In these established patterns, the abstraction layer consists of the cloud-native containerization, orchestration and other interfaces provided in an integration toolkit, such as the recently launched IBM Multicloud Manager.
Where serverless computing is concerned, though, Wikibon hasn’t yet seen a significant push toward hybridization of two or more public or premises-based function-as-a-service clouds. But that sort of hybridization is certainly possible and, in fact, is anticipated in several industry initiatives within the cloud-native computing community. Indeed, it’s almost inevitable, considering the development simplicity, operating efficiency and scale economies of serverless computing as an alternative to full-blown containerization and orchestration, à la Kubernetes, Docker and kindred projects in the cloud-native stack.
For example, one might build cloud-native applications that call the application programming interfaces of two or more public serverless offerings, such as AWS Lambda, Azure Functions, Google Cloud Functions or IBM Cloud Functions. Likewise, it’s even possible to have more complex hybrids that encompass public serverless environments and various premises-based serverless environments, such as Oracle Fn and Red Hat OpenShift Cloud Functions.
Considered in this context, the serverless hybridization possibilities fall into three broad categories:
- Multi-public function code deployment: A developer might write a functional local app that directly invokes the APIs of stateless, event-driven code that executes in two or more serverless public clouds. For example, a mobile app might consume file-transfer-triggered event notifications from IBM Bluemix OpenWhisk, batch data processing executed in AWS Lambda, and real-time stream processing sourced from Microsoft Azure Functions.
- Multi-public function cross-invocation: A developer might deploy functional microservices code into two or more serverless public clouds. Each of those microservices might invoke stateless, event-driven functions in the other public serverless environments via the APIs that each exposes.
- Hybrid public/private function deployment and/or cross-invocation: A developer might even deploy functional microservices into private serverless infrastructures as well as one or more public serverless clouds, with API-based cross-invocation among those environments.
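The first pattern above can be sketched in a few lines. The following is a minimal, illustrative Python example of a local app fanning the same event out to functions hosted in two public serverless clouds; the endpoint URLs and event fields are hypothetical placeholders, not real deployments, and the requests are built but deliberately not sent.

```python
import json
from urllib import request

# Hypothetical HTTPS endpoints for the same logical function deployed
# to two public serverless clouds; real URLs depend on your deployments.
FUNCTION_ENDPOINTS = {
    "aws_lambda": "https://abc123.lambda-url.us-east-1.on.aws/",
    "azure_functions": "https://myapp.azurewebsites.net/api/process",
}

def build_invocation(provider: str, event: dict) -> request.Request:
    """Build (but do not send) an HTTP invocation for the given provider."""
    url = FUNCTION_ENDPOINTS[provider]
    body = json.dumps(event).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A local app can fan the same event out to functions in multiple clouds.
event = {"type": "file.uploaded", "key": "reports/q3.csv"}
invocations = [build_invocation(p, event) for p in FUNCTION_ENDPOINTS]
```

In practice each provider also requires its own authentication scheme (SigV4 for Lambda function URLs, function keys for Azure), which is exactly the kind of per-cloud variation an abstraction layer would hide.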
Building hybrid serverless apps would be simpler if developers were able to access these patterns inside their cloud-native coding workbenches. And that, in fact, is the genesis of the Virtual Kubelet specification. This abstracts the core Kubernetes kubelet function — an agent that runs on all Kubernetes nodes to manage workload lifecycles — so that it can connect orchestrated, containerized microservices to other APIs, such as those exposed by serverless environments.
Within this abstraction, Virtual Kubelet is an application that runs inside a container within a Kubernetes cluster, masquerades as a node and interfaces via the Kubernetes API to external serverless and other pluggable application environments. It exposes a pluggable provider interface so that practically any serverless environment can be set up to directly invoke and be invoked by any containerized microservice running on a Kubernetes-orchestrated cloud-native fabric. Currently, the Virtual Kubelet abstraction enables interoperability between Kubernetes and serverless offerings from several cloud providers, including Alibaba Cloud, Amazon Web Services and Microsoft Azure.
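The pluggable-provider idea is easy to sketch. Virtual Kubelet itself is written in Go and its real provider interface has more methods, so the following Python sketch is only a conceptual illustration under assumed names: a node shim accepts scheduling decisions and delegates pod lifecycle calls to whichever serverless backend is plugged in.

```python
from abc import ABC, abstractmethod

class ServerlessProvider(ABC):
    """Pluggable backend in the spirit of Virtual Kubelet's provider
    interface (the real interface is Go; these names are illustrative)."""

    @abstractmethod
    def create_pod(self, name: str, image: str) -> None: ...

    @abstractmethod
    def get_pods(self) -> list:
        ...

class FakeFaaSProvider(ServerlessProvider):
    """Stands in for a serverless cloud: 'pods' become function deployments."""

    def __init__(self):
        self.deployed = {}

    def create_pod(self, name, image):
        # In reality this would call the serverless cloud's deploy API.
        self.deployed[name] = image

    def get_pods(self):
        return sorted(self.deployed)

class VirtualNode:
    """Masquerades as a Kubernetes node and forwards scheduling
    decisions to the plugged-in provider."""

    def __init__(self, provider: ServerlessProvider):
        self.provider = provider

    def schedule(self, pod_name: str, image: str):
        self.provider.create_pod(pod_name, image)

node = VirtualNode(FakeFaaSProvider())
node.schedule("resize-images", "registry.example/resize:1.0")
```

Swapping `FakeFaaSProvider` for an adapter around Lambda, Azure Container Instances or another backend is the whole point of the design: the Kubernetes scheduler never needs to know the workload isn't running on a real node.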
However, Virtual Kubelet isn’t yet broadly adopted in cloud developer toolkits. Consequently, developers who want to build apps that tap into two or more serverless clouds should consider writing and deploying function-as-a-service logic from within infrastructure-as-code power tools such as HashiCorp Terraform or Gloo, which have hooks into different serverless cloud platforms.
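To make the Terraform route concrete, here is a hedged sketch of a single configuration that declares functions in two serverless clouds at once. All names, regions, file paths and the referenced IAM/resource-group objects are hypothetical placeholders; the `aws_lambda_function` and `azurerm_linux_function_app` resource types are real, but a working configuration needs the supporting resources marked "defined elsewhere".

```hcl
# Sketch only: one Terraform configuration spanning two serverless clouds.
provider "aws"     { region = "us-east-1" }
provider "azurerm" { features {} }

resource "aws_lambda_function" "ingest" {
  function_name = "ingest-events"
  runtime       = "python3.12"
  handler       = "app.handler"
  role          = aws_iam_role.lambda_exec.arn   # defined elsewhere
  filename      = "build/ingest.zip"
}

resource "azurerm_linux_function_app" "transform" {
  name                       = "transform-events"
  resource_group_name        = azurerm_resource_group.rg.name       # defined elsewhere
  location                   = azurerm_resource_group.rg.location
  service_plan_id            = azurerm_service_plan.plan.id         # defined elsewhere
  storage_account_name       = azurerm_storage_account.sa.name      # defined elsewhere
  storage_account_access_key = azurerm_storage_account.sa.primary_access_key
  site_config {}
}
```

A single `terraform apply` then drives both clouds from one state file, which is the closest thing to hybrid serverless tooling most teams have today.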
When looking for tooling to monitor, manage and secure hybrid serverless environments, cloud administrators should explore the open-source Knative project. Developed by Google in collaboration with Pivotal, IBM, Red Hat and SAP, Knative is a Kubernetes-based platform for driving DevOps workflow around the unified development of serverless and containerized apps for deployment across heterogeneous public and private cloud platforms.
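Knative's contribution to that unified workflow is that a serverless workload is declared as an ordinary Kubernetes resource. A minimal Knative Service manifest looks like the following; the image reference and environment variable are placeholders.

```yaml
# Minimal Knative Service; the image reference is a placeholder.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-handler
spec:
  template:
    spec:
      containers:
        - image: registry.example/event-handler:latest
          env:
            - name: TARGET
              value: "hybrid-serverless"
```

Because this is just a Kubernetes custom resource, the same manifest can be applied to any cluster running Knative, public or private, which is what makes it a plausible substrate for the hybrid patterns discussed above.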
Going forward, Wikibon expects that Knative, in conjunction with Virtual Kubelet, will catalyze the development of more varied and sophisticated hybrids of serverless and containerized cloud-native applications. We urge the Knative community to submit the project to the Cloud Native Computing Foundation so that it can be developed as a core component of tomorrow’s multicloud computing architectures.