Diamanti Adds Support for Google Cloud Platform
Source: https://containerjournal.com/
Diamanti this week announced that it has added support for Google Cloud Platform (GCP) to Spektra, its platform for managing data in Kubernetes environments, which can be extended across a hybrid cloud computing environment.
In addition, Diamanti has added support for CRI-O, an implementation of the container runtime interface for Kubernetes defined by the Technical Oversight Committee that oversees the development of the open source container orchestration platform.
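For context, a Kubernetes cluster typically opts into CRI-O by pointing node registration at the runtime's socket. Below is a minimal sketch using a kubeadm configuration; it assumes CRI-O is already installed and listening on its default socket path, and the cluster name and Kubernetes version are illustrative placeholders (none of this is specific to Spektra):

```yaml
# Minimal kubeadm sketch: register nodes against the CRI-O runtime socket.
# Assumes CRI-O is installed and running at its default endpoint.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0      # illustrative version
clusterName: example-cluster    # hypothetical cluster name
```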
Diamanti has also added debugging terminals for each application that can be accessed directly through the Spektra user interface. That addition reduces operational overhead, simplifies navigation between applications and makes it easier to search application-level logs to diagnose issues.
Spektra is already available on Amazon Web Services (AWS) and Microsoft Azure, as well as in on-premises IT environments, where it can be deployed on servers from Lenovo, Dell Technologies and Hewlett Packard Enterprise (HPE) or on x86 infrastructure provided by Diamanti.
Brian Waldon, vice president of product for Diamanti, says all those instances of Spektra provide a data plane through which IT teams can employ a unified data fabric across a hybrid IT environment. That data plane complements an existing Ultima control plane from Diamanti that provides the means for managing the underlying network, storage and security services.
Data fabrics are becoming more critical because, with the advent of workloads based on artificial intelligence (AI), the amount of data that needs to be transferred between platforms has increased dramatically, Waldon says. Massive amounts of data are now being aggregated in cloud-based data lakes created to train AI models.
The rise of those AI workloads, most of which are deployed using containers, is also helping to drive increased deployment of stateful applications on Kubernetes clusters that require access to some form of persistent storage. Initially, the bulk of container applications were stateless in the sense that they typically stored data outside of a Kubernetes cluster. However, as IT teams become more adept at managing Kubernetes clusters, more organizations are unifying the management of compute and storage on the platform. A recent survey conducted by the Cloud Native Computing Foundation (CNCF) found 55% of respondents have now deployed stateful container applications in production, with another 11% planning to deploy them in the next 12 months. A further 12% are evaluating them, according to the survey.
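To make the stateful pattern concrete, here is a minimal sketch of a Kubernetes StatefulSet that requests persistent storage through a volume claim template. The application name, container image and storage class are hypothetical placeholders for illustration, not anything prescribed by Diamanti or the CNCF survey:

```yaml
# Minimal StatefulSet sketch: each replica gets its own PersistentVolumeClaim
# from volumeClaimTemplates, so its data survives pod rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db              # hypothetical application name
spec:
  serviceName: example-db
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: postgres:15    # illustrative stateful workload
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard   # assumed storage class
        resources:
          requests:
            storage: 10Gi
```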
It’s not clear how quickly organizations are moving to unify the management of data across multiple platforms. Currently, there’s a major effort to build data lakes in the cloud. However, most organizations are still likely to wind up deploying multiple data lakes on multiple clouds that will, at least for now, need to share data with data warehouses deployed in on-premises IT environments.
Regardless of how data is managed going forward, it’s clear that some type of fabric to unify data management is required. In fact, most organizations will never be able to truly embrace hybrid cloud computing without one. The challenge now is laying the foundation for hybrid cloud computing environments that, without some ability to move data easily, would defeat the purpose of making the effort in the first place.