SPANNING THE GAP BETWEEN AI AND DEVOPS
Source: nextplatform.com
The term “AIOps” is getting a lot of use these days, and it shouldn’t come as a surprise. Massive amounts of data are coming at enterprises from all directions, not only from inside their own datacenters but also from the cloud and the edge, driven by such trends as the Internet of Things (IoT), the growing use of multiple clouds – both public and private – a proliferation of devices at the edge, and applications being created and moved in rapidly increasing numbers of containers.
Contained inside all that data is the crucial information these organizations need to become faster and more efficient, able to make the necessary business decisions more quickly and gain a competitive edge over their rivals. Keeping up with all that data – collecting, storing, analyzing, and moving it all – is no easy matter, and enterprises whose IT staffs aren’t growing nearly fast enough are turning to AIOps vendors and their platforms, which leverage artificial intelligence to drive automation into IT operations.
The term has been around for several years, but demand is growing fast as organizations look for better ways to develop and deliver services to their customers. According to AppDynamics’ App Attention Index 2019, up to 84 percent of global consumers experienced problems with digital services from vendors, while 76 percent say their expectations of how well digital services should perform are rising. Half of the consumers surveyed said they would be willing to pay more for a product or service from an organization whose digital services were better than those of a rival.
“This marks a significant shift in consumers’ loyalty to a superior digital experience and brings to light that businesses across every industry are realizing that applications are the business,” Whitney Satin, director of product marketing at AppDynamics, tells The Next Platform. “To keep pace with ever increasing user demands, IT teams are increasingly adopting cloud technologies to enable them to iterate quickly and compete on experience. Highly distributed, multicloud, microservices, APIs, containers, IoT environments, and relentless code releases introduce constant change, and all of this is happening at a staggering scale. That creates a world where the traditional enterprise is managing a complex myriad of legacy technologies like mainframes, datacenters, private clouds, and trying to implement cloud-native technologies.”
Making The Move
Cisco Systems – as part of its multi-year effort to shift from being primarily a hardware vendor to one that sells enterprise software and solutions for both on-premises and cloud environments – bought AppDynamics for $3.7 billion in 2017. The companies are pushing forward in an AIOps market that is growing fast and is highly competitive, with such players as Splunk, IBM, BMC, Zenoss, OpsRamp and others.
AppDynamics and its parent company took a big step last year when they unveiled the Central Nervous System (CNS) for IT, a broad platform that leverages AI and machine learning to drive automation into IT operations and help businesses better manage applications and data. Through CNS, enterprises can monitor and manage their applications and collect, manage, and analyze data from multiple domains, including the cloud, on-premises environments, and IoT. The platform also can detect and remediate security and performance issues.
The first product out of the box was Cognition Engine, a tool with machine learning capabilities that enables users to manage applications and infrastructure with the help of diagnostics and automated root-cause analysis to detect anomalies. At the Cisco Live event this week in Barcelona, Spain, Cisco and AppDynamics are adding to the list with a number of new products – including the first major product integration since CNS was rolled out – that use the data collected from both applications and infrastructure to optimize the performance of each and improve the customer experience. The companies introduced AppDynamics’ Experience Journey Maps, a tool that tracks how an enterprise’s mission-critical applications are used by combining business metrics with the user’s experience of the application. The Experience Journey Maps give business and application groups a single, correlated view that spans business performance, application performance, and user experience.
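AppDynamics has not published the internals of Cognition Engine, but the general pattern it describes – baselining a metric, flagging large deviations, and pointing at the worst-offending dependency – can be sketched in a few lines. The Python below is a minimal illustration under those assumptions; the metric values, service names, and helper functions are hypothetical, not AppDynamics APIs.

```python
# Minimal sketch of baseline-based anomaly detection with a crude
# root-cause hint. Illustrative only -- not AppDynamics' algorithm.
from statistics import mean, stdev

def is_anomalous(history, latest, sigma=3.0):
    """Flag `latest` if it sits more than `sigma` standard deviations
    above the historical baseline."""
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + sigma * spread

def likely_culprit(per_service_latency):
    """Point at the downstream service whose latest latency deviates
    most from its own history -- a very rough root-cause heuristic."""
    def deviation(history_and_latest):
        history, latest = history_and_latest
        return (latest - mean(history)) / (stdev(history) or 1.0)
    return max(per_service_latency.items(), key=lambda kv: deviation(kv[1]))[0]

# Hypothetical metrics: overall response time plus per-dependency latency (ms).
response_ms_history = [210, 195, 220, 205, 215, 200, 198, 225]
latest_response_ms = 480

if is_anomalous(response_ms_history, latest_response_ms):
    per_service = {
        "auth-service": ([12, 14, 13, 15, 12, 14, 13, 15], 14),
        "orders-db":    ([90, 95, 88, 92, 91, 94, 90, 93], 310),
        "payments-api": ([60, 62, 58, 61, 63, 59, 60, 62], 65),
    }
    print("Anomaly detected; likely culprit:", likely_culprit(per_service))
```

In this toy run the overall response time trips the baseline check and the orders-db dependency shows the largest deviation from its own history, so it is flagged as the probable cause.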
COMING SOON
The companies also are enabling the Experience Journey Maps and Cisco’s Intersight Workload Optimizer to share and correlate data to provide a single, shared view of how the infrastructure is impacting applications. There is a common language, common tooling, and shared datasets between the two. AppDynamics’ Satin says the coupling of the two technologies is “the latest proof point to Cisco’s software strategy. … [It] enables the exchange and correlation of formerly disparate data streams to bring shared context to the application and infrastructure teams. This creates a shared view and common language to better drive application performance, user experience and ultimately business impact.”
Cisco’s Intersight Workload Optimizer is aimed at improving workload performance and compliance while reducing costs across hybrid application architectures, leveraging both historical and real-time data to detect potential problems and to reduce infrastructure overprovisioning. In addition, the company’s new HyperFlex Application Platform is aimed at providing container-as-a-service capabilities to simplify provisioning and Kubernetes operations from the datacenter to the cloud and edge for both IT and DevOps groups. It includes open-source tools and automation of routine tasks and uses AppDynamics and Intersight for real-time monitoring and application development across multicloud environments.
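Cisco has not detailed how the optimizer weighs its data, but the basic idea of using utilization history to flag overprovisioned capacity can be illustrated with a short, hypothetical sketch. The workload names, thresholds, and rightsize() helper below are assumptions for illustration, not Intersight APIs or data.

```python
# Hypothetical sketch: flag workloads whose vCPU allocation far exceeds
# what their observed utilization history justifies. Not Intersight code.

def rightsize(allocated_vcpus, cpu_util_history, headroom=1.3):
    """Suggest a vCPU count sized to the peak observed utilization plus headroom."""
    peak_fraction = max(cpu_util_history)            # e.g. 0.35 == 35% busy
    return max(1, round(allocated_vcpus * peak_fraction * headroom))

workloads = {
    # name: (allocated vCPUs, hourly CPU utilization samples as fractions)
    "billing-api":   (16, [0.10, 0.12, 0.15, 0.11, 0.14]),
    "reporting-job": (8,  [0.70, 0.85, 0.78, 0.90, 0.82]),
}

for name, (alloc, history) in workloads.items():
    suggestion = rightsize(alloc, history)
    if suggestion < alloc:
        print(f"{name}: allocated {alloc} vCPUs, ~{suggestion} would cover the observed peak")
```

A real optimizer would fold in memory, storage, placement constraints, and cost data, but the loop is the same: compare allocation against observed demand and recommend the smaller footprint.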
These products will be available in the second quarter.
CLOSING THE LOOP
A key goal of all this is to create a closed-loop operating model for infrastructure and DevOps teams that uses a common vocabulary, common tooling, and shared datasets so that both sides are operating from the same game plan. The number of applications continues to grow, and those applications are becoming more distributed, according to Liz Centoni, senior vice president and general manager of Cisco’s Cloud, Compute and IoT group. In a blog post, she also noted that dependencies between applications will increase – by about 250 percent over the next 12 months – and that organizations in North America are spending as much as 43 percent of their time, and $700 billion a year, troubleshooting problems.
Having application and IT teams working separately is costing companies time and money, Centoni wrote, adding that “on one hand, the complex, distributed nature of modern applications from the edge to the cloud (and everything in the middle) makes it difficult to root-cause the source of a poorly performing application. While on the other hand, blatant overprovisioning of infrastructure capacity (whether it’s on-prem or cloud) for ‘peak scale’ leads to gross underutilization because for 364 days in the year, you’re not running at peak. Add data gravity limitations, containers, a myriad of open source components, and the idiosyncrasies of every cloud in the sky – you’re looking at 25 hours a day, just to keep the lights on.”
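Centoni’s peak-sizing point is easy to make concrete with a back-of-the-envelope example. The numbers below are hypothetical, not Cisco figures: a fleet sized for a peak of five times the typical load sits around 20 percent utilized the rest of the year.

```python
# Back-of-the-envelope illustration of sizing for peak (hypothetical numbers).
typical_load = 2_000       # requests/sec on an ordinary day
peak_load = 10_000         # requests/sec on the one peak day
capacity = peak_load       # fleet sized to handle the peak

average_utilization = typical_load / capacity   # 0.2 -> 20% busy the other 364 days
print(f"Average utilization when sized for peak: {average_utilization:.0%}")
```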