Too Many Scripts Can Kill Your Continuous Delivery
Source – infoq.com
Avantika Mathur, product manager at Electric Cloud, spoke at Continuous Lifecycle London last month about the costs associated with an ever-increasing number of scripts in a Continuous Delivery pipeline. Besides the maintenance cost, the lack of visibility and auditability over exactly which activities are carried out before a change is deployed to production is another major cost that many organizations are unaware of.
Solving this problem requires first recognizing the issues and establishing guiding principles for a new approach to pipeline orchestration. Mathur recommends these principles as a starting point:
- ensure repeatability and consistency between deployments
- separate application definitions from the environment(s) where they will be deployed/run (see the sketch after this list)
- aim for portability between environments
- avoid lock-in to certain tools and technology (in other words, make sure the practices are guiding the work, rather than the tools)
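The second and third principles are easiest to see side by side. Below is a minimal Python sketch, not from the talk itself, of keeping the application definition fixed while everything environment-specific is injected; the data classes, field names, and environments are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AppDefinition:
    name: str
    version: str
    artifacts: list[str]  # what gets deployed: identical in every environment

@dataclass
class Environment:
    name: str
    hosts: list[str]
    db_url: str  # environment-specific wiring, never baked into the app definition

def deploy(app: AppDefinition, env: Environment) -> None:
    # The same app definition is reused unchanged across environments,
    # which is what makes the process repeatable, consistent, and portable.
    for host in env.hosts:
        print(f"deploying {app.name}:{app.version} to {host} (env={env.name})")

app = AppDefinition(name="shop", version="1.4.2", artifacts=["shop.war"])
staging = Environment("staging", ["stg-01"], "postgres://stg-db/shop")
production = Environment("production", ["prod-01", "prod-02"], "postgres://prod-db/shop")

deploy(app, staging)     # the exact definition tested in staging...
deploy(app, production)  # ...is promoted unchanged to production
```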
In terms of actually moving away from the script sprawl, Mathur’s suggested approach is to first refactor scripts into parameterized common functions and later replace them, where possible, with tools that can do the same job as well or even better.
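As a rough illustration of that first refactoring step (the script names and steps below are invented, not Mathur’s examples), a family of copy-pasted per-service deploy scripts can collapse into one parameterized function:

```python
import subprocess

def deploy_service(service: str, version: str, target: str) -> None:
    """One common, parameterized function replacing N copy-pasted scripts
    (deploy_orders.sh, deploy_billing.sh, ...)."""
    # Each step that used to be duplicated now lives in exactly one place,
    # so a fix or an audit touches a single function instead of N scripts.
    subprocess.run(["./package.sh", service, version], check=True)
    subprocess.run(["./upload.sh", service, version, target], check=True)
    subprocess.run(["./restart.sh", service, target], check=True)

# The old script-per-service sprawl becomes plain data:
for service in ["orders", "billing", "inventory"]:
    deploy_service(service, version="2.0.1", target="staging")
```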
However, tackling a large number of scripts at once can be challenging (from both a technical and a people point of view) and ineffective (focusing on low-ROI work first). Mathur recommends an iterative approach. First, run value stream mapping exercises to identify the immediate bottlenecks and dependencies that slow down delivery and/or obscure the delivery process; that will help prioritize which scripts to refactor first. Mathur also suggested categorizing existing scripts into buckets (configuration, deployment, test automation, etc) to identify duplicate tasks, classifying them by complexity to estimate effort, measuring how often each script is run to estimate potential benefit, and, finally, checking whether a better (not home-grown) alternative exists to minimize cost.
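One way to picture that triage, as a sketch rather than Mathur’s actual method: record a bucket, a complexity estimate, a run frequency, and whether an off-the-shelf alternative exists for each script, then rank by a simple benefit-over-effort score. The scoring formula and the inventory below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Script:
    name: str
    bucket: str          # configuration | deployment | test automation | ...
    complexity: int      # 1 (trivial) .. 5 (fragile monster) -> effort proxy
    runs_per_week: int   # how often it executes -> benefit proxy
    has_alternative: bool  # a better, not home-grown, replacement exists

def priority(s: Script) -> float:
    # High benefit, low effort first; an off-the-shelf replacement
    # sweetens the deal because it minimizes long-term cost.
    score = s.runs_per_week / s.complexity
    return score * (2.0 if s.has_alternative else 1.0)

inventory = [
    Script("provision_env.sh", "configuration", 4, 20, True),
    Script("smoke_tests.py", "test automation", 2, 50, False),
    Script("deploy_legacy.pl", "deployment", 5, 3, False),
]

for s in sorted(inventory, key=priority, reverse=True):
    print(f"{priority(s):6.1f}  {s.bucket:16}  {s.name}")
```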
Mathur has seen firsthand the effects of these “scripting nightmares”, from 80% of a team’s engineering time being spent on maintaining scripts (not evolving them) to scripts that simply automate ineffective, slow processes rather than helping deliver faster and/or more safely. Typical “smells” that indicate scripts are out of control and that too little care is being taken over pipeline orchestration include engineers dedicated to maintaining scripts, fear of changing fragile scripts, lack of visibility into what was executed, and lengthy preparations for audits.
In short, Mathur recommends “treating the pipeline as a product”, making sure each change to the pipeline is tested and fully vetted before “production” (i.e. available to everyone in engineering). This also means making the pipeline visible to everyone, measuring and improving performance with metrics and benchmarks, and reusing existing pieces as much as possible.
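As a closing illustration of “pipeline as a product”, a pipeline change can be vetted with an ordinary unit test before it reaches everyone in engineering. The sketch below assumes the deploy_service function from earlier is saved in a hypothetical module named pipeline_lib and that a pytest-style runner is used; none of these names come from the talk:

```python
from unittest import mock

from pipeline_lib import deploy_service  # hypothetical module holding the sketch above

def test_deploy_runs_all_steps_in_order():
    # Vet the orchestration logic itself, without touching any real hosts.
    with mock.patch("subprocess.run") as run:
        deploy_service("orders", version="2.0.1", target="staging")
    commands = [c.args[0][0] for c in run.call_args_list]
    assert commands == ["./package.sh", "./upload.sh", "./restart.sh"]
```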