UNDERSTANDING MICROSOFT AZURE’S RESPONSIBLE MACHINE LEARNING MODEL

Source: Analytics Insight

Microsoft Azure Machine Learning opens up opportunities for interpretability, accountability, and fairness in AI.
With the COVID-19 outbreak, the aviation sector had to stop its operations, bringing flight travel to a halt. Many airlines felt the burden of this unprecedented situation, but Scandinavian Airlines (SAS) had another threat looming over its head.
SAS runs a loyalty program, EuroBonus, which awards points to the airline's customers every time they travel. Owing to the pandemic, the program had to pause until air travel could resume. Meanwhile, EuroBonus scammers have been trying to amass points, either by booking travel rewards or by selling the tickets.
This has been a major concern for SAS management: because of the fraud, the airline fears that its legitimate customers will miss out on seats and that SAS will lose business revenue.
To counter the issue with EuroBonus scammers, Microsoft Azure has designed a machine learning program that alerts officials and helps them understand, protect against, and control the scams.
Building Interpretability
The SAS AI system combines streams of real-time flight, transaction, award, and other data with a machine learning model that accounts for thousands of parameters to find suspicious activity.
Microsoft Azure Machine Learning has an interpretability capability, powered by the InterpretML toolkit, that is essential for detecting fraud. It identifies which parameters matter most for flagging suspicious behaviour, e.g. a scam that pools points from ghost accounts to book flights.
Model interpretability also helps build confidence and trust in model predictions.
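To make the idea concrete, here is a minimal sketch of how the open-source InterpretML toolkit can train a glass-box classifier and surface which parameters drive a "suspicious" prediction. The feature names, synthetic data, and labelling rule below are hypothetical placeholders, not SAS's actual model or data.

```python
# Sketch: an interpretable fraud classifier with InterpretML.
# Features, data, and the labelling rule are hypothetical placeholders.
import numpy as np
import pandas as pd
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "points_redeemed": rng.integers(100, 100_000, n),
    "accounts_linked": rng.integers(1, 25, n),
    "days_since_last_flight": rng.integers(0, 720, n),
})
# Hypothetical rule: many linked accounts plus large redemptions look suspicious.
y = ((X["accounts_linked"] > 10) & (X["points_redeemed"] > 50_000)).astype(int)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global explanation: which parameters matter most for flagging suspicious behaviour.
show(ebm.explain_global())
```

The global explanation ranks features by their contribution to the prediction, which is the kind of signal an analyst would use to decide whether a flagged transaction really looks like point pooling from ghost accounts.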
Building Fairness and Accountability
As machine learning moves from research labs into the hands of developers, many developers are concerned about the fairness, accountability, transparency, and ethics of these AI models. Across the industry there is a growing need for models that are non-discriminatory and that comply with privacy regulations.
To address this, Microsoft has announced innovations for the responsible use of AI and machine learning that help developers understand, protect, and control their models throughout the machine learning lifecycle. These capabilities include the Fairlearn toolkit, alongside the same InterpretML toolkit used for detecting EuroBonus scammers.
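As a rough illustration of what a fairness check looks like in practice, the sketch below uses Fairlearn's MetricFrame to compare a model's accuracy across groups defined by a sensitive feature. The predictions and group labels are made-up placeholders, not output from any real system.

```python
# Sketch: assessing group fairness with Fairlearn's MetricFrame.
# Predictions and sensitive-feature values are hypothetical placeholders.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
group  = ["A", "A", "A", "B", "B", "B", "B", "A"]  # e.g. a protected attribute

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.overall)       # accuracy over everyone
print(mf.by_group)      # accuracy per group, to spot disparities
print(mf.difference())  # largest gap between groups
```

A large gap in the per-group metrics is the kind of signal that would prompt a developer to revisit the training data or apply one of Fairlearn's mitigation algorithms.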
A toolkit for differential privacy is also available to developers as open source on GitHub. With its help, users can derive insights from data while still providing statistical assurances that private data remains protected.
Azure Machine Learning is also building a capability for developers known as machine learning operations (MLOps), which helps them track and automate the entire process of building, training, and deploying models. MLOps includes an audit trail that helps organizations meet regulatory and compliance requirements.
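The core idea behind differential privacy can be sketched without the toolkit itself: noise calibrated to the query's sensitivity is added to an aggregate result, so the insight is preserved while any single record's contribution stays hidden. The example below illustrates the classic Laplace mechanism on hypothetical data; it is not the API of Microsoft's toolkit.

```python
# Illustrative sketch of the differential-privacy idea (Laplace mechanism),
# not the API of Microsoft's open-source toolkit.
import numpy as np

rng = np.random.default_rng(42)
spend = rng.integers(0, 5_000, size=10_000)  # hypothetical per-customer values

def private_mean(values, epsilon, value_range):
    """Mean with Laplace noise calibrated to the query's sensitivity."""
    sensitivity = value_range / len(values)   # max change one record can cause
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

print("true mean   :", spend.mean())
print("private mean:", private_mean(spend, epsilon=0.5, value_range=5_000))
```

Smaller values of epsilon add more noise and give stronger privacy guarantees, at the cost of less precise answers.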
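Below is a minimal sketch of what MLOps-style tracking can look like with the Azure Machine Learning Python SDK (azureml-core); the workspace configuration, metric value, model file, and tags are placeholders, not a prescribed workflow.

```python
# Minimal MLOps sketch with the Azure ML Python SDK (azureml-core).
# Workspace config, metric, model path, and tags are hypothetical placeholders.
from azureml.core import Workspace, Experiment
from azureml.core.model import Model

ws = Workspace.from_config()                  # reads a local config.json
exp = Experiment(workspace=ws, name="eurobonus-fraud")

run = exp.start_logging()                     # logged runs form the audit trail
run.log("auc", 0.93)                          # placeholder training metric
run.complete()

# Registering the model versions it and records lineage for compliance review.
model = Model.register(
    workspace=ws,
    model_path="outputs/fraud_model.pkl",     # hypothetical artifact path
    model_name="eurobonus-fraud-detector",
    tags={"stage": "candidate"},
)
print(model.name, model.version)
```

Because every run, metric, and registered model version is recorded in the workspace, this history is what gives organizations the audit trail mentioned above.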