AWS Applies Machine Learning to Optimize Cloud Deployments

Source: devops.com

Amazon Web Services (AWS) this week at its re:Invent 2019 conference unfurled two tools based on machine learning algorithms to optimize cloud application deployments.

Amazon CodeGuru, a service available in preview, lets the cloud service provider inspect application code using machine learning algorithms to profile it, identify bottlenecks and pinpoint which parts of that code are the most expensive to run on the AWS cloud.

AWS Compute Optimizer, meanwhile, identifies optimal Amazon EC2 instance types, including those that are a part of Auto Scaling groups, for specific types of workloads. It analyzes the configuration and resource utilization of a workload, including historical metrics, to identify dozens of characteristics to recommend optimal AWS compute resources. AWS Compute Optimizer is accessed via the AWS Management Console. Rather than rely on humans to optimize cloud platforms, AWS is making a case for reducing the time and effort required to determine which of dozens of instance types will deliver the highest performance at the lowest cost possible.
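For teams that would rather script these lookups than click through the console, the same recommendations are exposed through the Compute Optimizer API. The sketch below, which assumes the boto3 SDK and a hypothetical instance ARN, shows roughly how EC2 instance recommendations might be pulled programmatically; it is an illustration under those assumptions, not AWS's reference usage.

```python
# Minimal sketch: pulling EC2 instance recommendations from AWS Compute Optimizer
# via the boto3 SDK. The instance ARN below is a hypothetical placeholder.
import boto3

client = boto3.client("compute-optimizer", region_name="us-east-1")

response = client.get_ec2_instance_recommendations(
    instanceArns=[
        "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234example"
    ]
)

for rec in response.get("instanceRecommendations", []):
    # Each recommendation summarizes whether the instance looks over- or
    # under-provisioned and carries ranked instance-type options.
    print(rec.get("instanceArn"), rec.get("finding"))
    for option in rec.get("recommendationOptions", []):
        print("  candidate:", option.get("instanceType"),
              "performance risk:", option.get("performanceRisk"))
```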

Amazon CodeGuru can pull code from either GitHub or CodeCommit repositories, with support for other repositories planned. It requires developers to insert agent software developed by AWS into their code. Once a pull request is made, Amazon CodeGuru automatically starts evaluating the code using trained artificial intelligence (AI) models that AWS developed from data gathered across thousands of open source software projects by AWS and its parent company.
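As an illustration of that repository hookup, the sketch below uses the boto3 SDK to associate a CodeCommit repository with CodeGuru Reviewer so that subsequent pull requests are evaluated automatically. The repository name is a hypothetical placeholder, and GitHub repositories may need to be connected through the console rather than this API call.

```python
# Minimal sketch: associating a CodeCommit repository with Amazon CodeGuru
# Reviewer using boto3, so pull requests on that repository trigger reviews.
# "my-service-repo" is a hypothetical repository name.
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

association = reviewer.associate_repository(
    Repository={
        "CodeCommit": {
            "Name": "my-service-repo"
        }
    }
)

print("Association ARN:", association["RepositoryAssociation"]["AssociationArn"])
print("State:", association["RepositoryAssociation"]["State"])
```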

Once the analysis is completed, Amazon CodeGuru generates a “flame graph” showing, for example, latency issues and CPU utilization rates, alongside human-readable recommendations that surface specific issues and suggest remediations, complete with example code and links to relevant documentation for any line of code. Amazon CodeGuru can observe application runtimes and profile application code every five minutes.
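To give a sense of what inserting the agent looks like in practice, here is a minimal sketch using the Python flavor of the CodeGuru Profiler agent (the codeguru_profiler_agent package); the profiling group name is a hypothetical placeholder, and the exact agent packaging may differ by language and by what was available during the preview.

```python
# Minimal sketch: starting the CodeGuru Profiler agent inside an application
# so it periodically submits profiling data for flame-graph analysis.
# "MyServiceProfilingGroup" is a hypothetical profiling group that would need
# to be created in CodeGuru Profiler beforehand.
from codeguru_profiler_agent import Profiler


def run_application():
    # Placeholder for the application's real entry point.
    pass


def main():
    # Start the in-process agent; it samples the running application in the
    # background and reports profiles to the named profiling group.
    Profiler(profiling_group_name="MyServiceProfilingGroup").start()

    # ... application code continues as normal ...
    run_application()


if __name__ == "__main__":
    main()
```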

AWS CEO Andy Jassy told conference attendees that Amazon is already employing Amazon CodeGuru across 80,000 applications, which he said has driven increases in infrastructure utilization rates of as much as 325% in some cases.

While machine learning algorithms have a clear role to play in terms of enabling DevOps teams to build and deploy more efficient code, it’s not clear to what degree DevOps teams will want to give AWS that level of access to what are often highly proprietary applications. Many DevOps teams may prefer to employ machine learning algorithms within the context of a continuous integration/continuous deployment (CI/CD) environment to drive code to multiple cloud computing platforms. Whatever the approach pursued, it’s clear machine learning algorithms are about to play a much larger role in DevOps. In fact, Jassy this week made it clear AWS will be applying machine learning algorithms broadly to enhance everything from enterprise search to identifying potential fraud.

What is less clear is how best DevOps practices will need to evolve to account for machine learning algorithms. Many of the processes that make up a DevOps toolchain are increasingly being automated. That doesn’t eliminate the need for the toolchain, but it will sharply reduce the amount of time and effort required to build and optimally deploy applications.
