Azure opens up DevOps Pipelines to Auditing


Source: devclass.com

Microsoft has given curious managers greater insight into who is doing what in their Pipelines, with a public preview of Auditing for Azure DevOps.

According to a blog post by Azure DevOps senior engineer Octavio Licea Leon, “When an auditable event occurs, a log entry is recorded. These events may occur in any portion of Azure DevOps.” He listed Git repo creations, permission changes, resource deletions, and access to the auditing feature itself as typical auditable events.

This means you – or alternatively your boss – will be able to see who triggered an event, along with their IP address, a timestamp, and the outcome, amongst other nuggets.
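For the curious, the preview also exposes these entries through a REST endpoint on the audit service. Here is a minimal sketch in Python, assuming a personal access token with read access to auditing; the api-version and the entry field names reflect the preview API at the time of writing and may well change:

```python
import os

import requests

# Assumptions: your organisation name and a PAT exported as AZDO_PAT.
ORG = "your-organisation"        # hypothetical organisation name
PAT = os.environ["AZDO_PAT"]     # personal access token

resp = requests.get(
    f"https://auditservice.dev.azure.com/{ORG}/_apis/audit/auditlog",
    params={"api-version": "5.1-preview.1"},  # assumption: preview version
    auth=("", PAT),              # basic auth: empty username + PAT
)
resp.raise_for_status()

for entry in resp.json().get("decoratedAuditLogEntries", []):
    # Each entry records who did what, from where, when, and the result.
    print(entry.get("timestamp"), entry.get("actorDisplayName"),
          entry.get("ipAddress"), entry.get("actionId"))
```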

Further features are set for the coming months, including the ability to stream logs to SIEM tools, which Licea Leon said will “give you more transparency into your workforce and allow for anomaly detection, [and] trend visualization…”


By default, auditing can only be accessed by Project Collection Administrators, and the feature will be turned on automatically for all Azure DevOps organisations. Events will be stored for 90 days, though they can of course be downloaded or backed up to another service and kept indefinitely.
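Since entries age out after 90 days, keeping them longer means exporting on a schedule. A rough sketch follows, again assuming the preview audit service and its downloadlog endpoint with format, startTime and endTime parameters; something like this could run daily from cron or a pipeline:

```python
import os
from datetime import datetime, timedelta, timezone

import requests

ORG = "your-organisation"        # hypothetical organisation name
PAT = os.environ["AZDO_PAT"]

# Export the last 24 hours of audit events so nothing is lost
# when the 90-day retention window rolls over.
end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

resp = requests.get(
    f"https://auditservice.dev.azure.com/{ORG}/_apis/audit/downloadlog",
    params={
        "format": "json",                  # csv is also an option
        "startTime": start.isoformat(),
        "endTime": end.isoformat(),
        "api-version": "5.1-preview.1",    # assumption: preview version
    },
    auth=("", PAT),
)
resp.raise_for_status()

# Write the export somewhere durable -- here, just a dated local file.
with open(f"audit-{end:%Y%m%d}.json", "wb") as f:
    f.write(resp.content)
```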

Talking of backups and storage, Azure has also announced general availability of its Azure Premium Files storage service. As Microsoft puts it, “Premium files offers a higher level of performance built on solid-state drives (SSD) for fully managed file services in Azure.”

This should suit IO-intensive workloads, such as “databases, persistent volumes for containers, home directories, content and collaboration repositories, media and analytics, high variable and batch workloads, and enterprise applications that are performance sensitive.”

Users are, apparently, able to scale up performance instantly, from 100GiB and a baseline of 100 IOPS up to 100TiB of capacity, 100,000 IOPS, and 10GiB/s of throughput. Those IOPS figures can be raised by automated bursting, which promises up to a 3x increase in IOPS based on a “credit system”: credits accumulate in a “burst bucket” whenever traffic for a file share drops below the baseline figure.
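Taking those figures at face value – roughly 1 IOPS per provisioned GiB with a 100 IOPS floor, a 3x burst ceiling, and (see below) $0.24 per provisioned GiB – a back-of-the-envelope estimate of what a given share size buys might look like the following. Azure’s real provisioning formula is more involved, so treat this purely as an illustration of the credit model:

```python
# Illustration only: numbers taken from the figures quoted in this
# article, not from Azure's actual provisioning formula.

def premium_share_estimate(provisioned_gib: int) -> dict:
    # ~1 IOPS per provisioned GiB, floor of 100, ceiling of 100,000.
    baseline_iops = min(max(100, provisioned_gib), 100_000)
    # Bursting promises up to 3x baseline, capped at the same ceiling.
    burst_iops = min(3 * baseline_iops, 100_000)
    monthly_cost = provisioned_gib * 0.24  # $0.24 per provisioned GiB
    return {
        "baseline_iops": baseline_iops,
        "burst_iops": burst_iops,
        "monthly_cost_usd": round(monthly_cost, 2),
    }

# While traffic stays below baseline_iops, credits accumulate in the
# "burst bucket"; spending them lets the share run at up to burst_iops.
print(premium_share_estimate(100))    # smallest share: 100 GiB
print(premium_share_estimate(10240))  # a 10 TiB share
```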

The service offers three redundancy options, with data being replicated either locally, across three clusters in a single region, or to a completely separate region.

This all costs, of course, and while pricing for standard storage starts at $0.06 per used GiB, Premium Storage costs $0.24 per provisioned GiB, though there are no transaction fees.
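By way of illustration, at those list prices a share provisioned at 1TiB (1,024GiB) works out to roughly $246 a month on premium, against around $61 for a fully used 1TiB of standard storage – before any transaction fees on the standard tier.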
