Azure-hosted AI for finding code defects emitted – but does it work?
Source: theregister.co.uk
How many defects has it found? Never mind that, check out the architecture flow chart
Altran – in association with Microsoft – has pushed out an open source project to find code defects via AI whenever you commit code.
It is easy to see why Microsoft is keen. This is a self-managed service: you download the code and purchase the Azure resources on which to host it. The project from Altran – a consulting and software engineering firm that’s part of Capgemini – uses two Azure Web Apps (one for the UI and one for the API); an Azure virtual network, VPN gateway and network security group; an Azure container registry; the Azure Database for MariaDB service; and a container for scheduling the prediction code. It also throws in Azure Application Insights for that all-important traffic analysis.
The UI web app is to be configured with two P1v2 Linux plans, each setting you back around £61 per month, and the AI web app with two P2v2 Linux plans at around £123 each per month. The MariaDB database is 4 vCores at £221 per month – so you are paying roughly £590 per month before adding in storage costs, the virtual network gateway and so on. The code and installation instructions are on GitHub here. The guide explains everything you need to know about deploying the application, but nothing much about how to use it.
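For the arithmetic-minded, those quoted prices add up as follows – a quick sanity check in Python, using only the approximate figures above:

# Back-of-envelope check of the baseline monthly bill quoted above
# (compute and database only; storage, gateway and so on are extra).
monthly_costs_gbp = {
    "UI web app (2 x P1v2 Linux)": 2 * 61,
    "AI web app (2 x P2v2 Linux)": 2 * 123,
    "MariaDB (4 vCores)": 221,
}

for item, cost in monthly_costs_gbp.items():
    print(f"{item}: £{cost}")

print(f"Baseline total: £{sum(monthly_costs_gbp.values())} per month")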
The way the system appears to work is that a bug detection model is trained by inspecting open source GitHub projects, including the history of reported bugs in the codebase. Then, say the docs rather vaguely, “Once we have a model that has acceptable value of precision and recall, selected model is deployed for prediction on new commits.” Code that seems likely to be buggy is flagged for further investigation and testing.
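Altran has not published the training code, but the flow the docs describe – label historical commits as buggy or clean, train a classifier, and only deploy once precision and recall are acceptable – might look something like this minimal scikit-learn sketch. The features, labels and thresholds here are our own placeholders, not Altran's:

# Minimal sketch of the described train-then-gate flow; not Altran's code.
# commit_features stands in for per-commit metrics mined from a repo's
# history, and labels marks commits later linked to a reported bug.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
commit_features = rng.random((1000, 5))    # placeholder commit metrics
labels = rng.integers(0, 2, 1000)          # placeholder buggy/clean labels

X_train, X_test, y_train, y_test = train_test_split(
    commit_features, labels, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
predictions = model.predict(X_test)
precision = precision_score(y_test, predictions)
recall = recall_score(y_test, predictions)

# "Once we have a model that has acceptable value of precision and
# recall, selected model is deployed for prediction on new commits."
if precision > 0.8 and recall > 0.7:       # thresholds: our invention
    print("Deploy model to score new commits")
else:
    print(f"Keep training: precision={precision:.2f}, recall={recall:.2f}")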
According to Altran, “Code Defect AI relies on various ML techniques including random decision forests, support vector machines, multilayer perceptron (MLP) and logistic regression. Historical data is extracted, pre-processed and labelled to train the algorithm and curate a reliable decision model. Developers are given a confidence score that predicts whether the code is compliant or presents the risk of containing bugs.”
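Altran does not say which of those techniques wins in practice. For the curious, here is a hedged sketch of how the four named model families might be compared, with scikit-learn's predict_proba standing in for the "confidence score" developers are given – synthetic data, purely illustrative:

# Sketch comparing the four model families Altran names; synthetic
# data, purely illustrative of the "confidence score" idea.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X, y)
    # Probability the first sample is buggy -- the "confidence score"
    # a developer would see for a commit.
    score = model.predict_proba(X[:1])[0, 1]
    print(f"{name}: confidence of bug = {score:.2f}")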
Code Defect AI supports integration with third-party analysis tools and can itself help identify bugs in a given codebase. It also lets developers assess which features of the code carry more weight in the bug prediction – that is, if two features both play a role in flagging a probable bug, which one takes precedence.
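How that weighting is computed is not documented. One common way to get such a ranking – and this is our guess at the general idea, not Altran's implementation – is the built-in feature importances of a random forest:

# Illustrative only: ranking commit features by their weight in a bug
# prediction, via a random forest's importances. Feature names invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["lines_changed", "files_touched", "author_commit_count",
                 "time_since_last_change", "past_bug_fixes_in_file"]

X, y = make_classification(n_samples=500, n_features=len(feature_names),
                           random_state=0)

forest = RandomForestClassifier(random_state=0).fit(X, y)

# Higher importance = more influence on the buggy/clean prediction.
for name, weight in sorted(zip(feature_names, forest.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name}: {weight:.3f}")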
The “solution” page looks glossy, but The Reg noticed a surprising omission in the presentation. Normally such things have enthusiastic testimonials from happy users. We did find a video in which Altran VP of Software Engineering and Cybersecurity Khalid Sebti said that “customers who have used our code defect solution predictions are able to identify over 20 per cent of the defects before the testing process begins.”
But how?
Where is the evidence, though? We asked Altran, which first sent us a document entitled “PowerPoint Presentation” assuring us that this was “an AI powered solution that identified bugs” but with no data to back up the claim. We asked again and were referred to the code on GitHub, and in particular the “flowchart of the architecture.” There are several such flowcharts, one of which has “storage and data lake Azure” stuck on the side for good measure. We did learn from the aforementioned video that this is merely phase one of the project. In phase two, “the solution expands to other Microsoft AI/ML technologies including ML Server and SQL Server.” The flow chart for this includes the magic word Kubernetes for extra goodness, as well as both SQL Server and Cosmos DB.
Phase two of the project is expanded to use increasing numbers of Azure services
The question perhaps is whether it makes sense to train your own AI model on defects found in open source GitHub projects when it might be better to subscribe to a service that has done its own model training at scale. On its security blog, Microsoft posted last month that “since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 per cent of the time and accurately identifies the critical, high priority security bugs 97 per cent of the time.”
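Microsoft has yet to publish that methodology, but the task it describes – telling security bugs from non-security bugs using their reports – can be sketched as a bog-standard text classifier. This is our toy illustration, not Microsoft's model:

# Toy sketch of the task Microsoft describes: classify bug titles as
# security (1) or non-security (0). Tiny hand-made data; not
# Microsoft's method or results.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "Buffer overflow in parser",
    "XSS in comment rendering",
    "SQL injection via search box",
    "Button misaligned on settings page",
    "Typo in error message",
    "Crash when file list is empty",
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(titles, labels)

# Score an unseen bug title (toy model, so take the answer with salt).
print(clf.predict(["Privilege escalation in login flow"]))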
At Microsoft, the post explained, “47,000 developers generate nearly 30 thousand bugs a month,” which strikes us as good going. The methodology for this is also promised to be open-sourced on GitHub “in the coming months” though it does not appear to be related to the Altran project.
AWS trained its CodeGuru code reviewer on “hundreds of thousands of internal projects, as well as over 10,000 open source projects in GitHub.” This is the kind of scale which, like Microsoft’s security project, may yield interesting results.
Is this an open source project hoping to attract a collaborative community big enough to sustain momentum? It looks unlikely; some code has been dumped on GitHub, but there is no evidence yet of any community around it. Think of it more as a sample showing how you can host an AI project on Azure. In that respect it may be useful, but the “better code faster with Azure AI” that the PowerPoint promises? We are sceptical, but await customer reports with interest.