Real-World Lessons from DevOps: Dockerizing Applications
Source: containerjournal.com
One of the benefits of working for an application management vendor is the ability to see the myriad ways development and operations teams architect, build and deploy applications. After all, we are all (including our own company) constantly trying to deliver better software faster. It's only fitting that, as a site reliability team, we share the things that work for us with the world; after all, there's more than one way to improve application performance. In that vein, we wanted to share our experience dockerizing the one application we had that wasn't already running on Docker.
Instana is not alone in shifting away from classic software delivery models. Almost every organization in the world is making some kind of adjustment as it tries to optimize its application delivery processes. We were looking to optimize the delivery and support of what were, in effect, two different applications: the cloud and self-hosted versions of the Instana back-end (or management servers).
Our goal was to standardize our on-prem customers' experience of installing, configuring and updating the Instana back-end, and that desire drove us to convert to a fully Docker-based installation process. Let's look at why we wanted to dockerize, including some decision points on whether you should do it, why we chose to do it now and a few dockerization tips we discovered along the way.
Should You Dockerize? If So, Should You Do It Now?
The first thing you have to decide is whether you should dockerize your application at all, and this really comes down to three distinct questions:
Should you re-architect your application to containers?
Is Docker the right technology for conversion?
Should you put the effort in right now?
First, a bit about our distribution architecture and methodology for our on-prem clients. Remember, we have a dual delivery mechanism: customers can choose either our SaaS offering or a self-hosted, on-premises installation. We had always shipped the self-hosted version by packaging the binaries we built for our SaaS platform into RPM/DEB packages to support different Linux distributions. We wired all the components together with a Chef cookbook that grew over time to cover all the edge cases we kept discovering.
While this approach had not necessarily been a problem, demands and needs shifted as our customer base grew. We had to support various Linux distributions, and customers outgrew the original simple single-host installation. We also had to support running across different data center environments, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, in addition to private data centers. We wanted to stop the impact this had on release cycle predictability and make it easier for customers to update versions.
From an internal perspective, we wanted to attack the lag between our SaaS solution updates and the time it took for those updates to make it into the on-prem solution. The on-prem release cycle had always slightly lagged SaaS releases, but the lag was increasing. We conducted methodical customer interviews to learn where they wanted the operational experience improved. We knew we needed a solution that would let us ship an enterprise-ready, self-hosted version of our product that was scalable and continuously upgradable with as little effort from the customer as possible.
Choosing and Implementing Docker
While there are other choices for containers, we focused on Docker, especially given its enterprise adoption. We discovered that even organizations that don't yet have containerized workloads in production will allow vendors to install dockerized versions of their applications, as long as specific processes are followed. It also helps that the Open Container Initiative (OCI) specifications are widely adopted by newer container technologies such as Cri-o, which gives our deployment architecture longevity.
It also helps that we use containerized packages in the SaaS version of our platform. We use Nomad and Kubernetes (K8s) to manage that environment. By moving our self-hosted platform to the same deployment architecture and platform, we make it easier on ourselves, because we can use the exact same artifacts for both versions—SaaS and self-hosted—of the platform.
While our own components were already containerized from the SaaS version, we still had to create containers for third-party infrastructure and platforms such as Clickhouse, Cassandra, other databases and more. We also had to add them into the release pipeline.
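To make that concrete, the pipeline work largely amounts to building and publishing images for those third-party components alongside our own. A minimal sketch, assuming a hypothetical internal registry and image layout (not our actual build scripts), might look like this:

    # Build and publish a third-party database image as part of the release
    # pipeline (registry, paths and tags here are illustrative only).
    docker build -t registry.example.com/onprem/clickhouse:21.8 ./images/clickhouse
    docker push registry.example.com/onprem/clickhouse:21.8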
The reality is that creating a database container isn't necessarily difficult. The difficult part is managing the configuration of the container and the platform running in it and, perhaps more importantly, passing that configuration down in a repeatable, trackable process. For the databases, we went with a mount point in the container itself to handle that part of the process, which led to a surprise benefit. While we felt it was necessary for the database containers, we found the method to be repeatable across all containers, creating even more standardization. It's also easier to maintain over time, since every change only touches configuration files accessible on the host system. Finally, we can collect debug bundles without hooking into the container, which leads to a simplified Docker run directive.
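As an illustration of the mount-point approach (the image name and host paths here are hypothetical, not our exact run directives), the configuration and log directories live on the host and are bind-mounted into the container:

    # Hypothetical run directive for a database container with host-side
    # configuration; every change touches files on the host, and debug
    # bundles can be collected without exec'ing into the container.
    docker run -d --name clickhouse \
      -v /etc/instana/clickhouse:/etc/clickhouse-server \
      -v /var/log/instana/clickhouse:/var/log/clickhouse-server \
      registry.example.com/onprem/clickhouse:21.8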
With the new containerized platform in hand, we went to the customers and started replacing their prior versions with the dockerized platform. The results have been even better than we imagined.
First, the actual deployment of the platform is more repeatable, scales better and is much easier to implement. This not only increased our self-hosted flexibility, but it also drastically reduced the overall complexity of self-hosted releases. That, in turn, has led to more predictable release cycles. From a QA perspective, it sped up qualification cycles, especially for operating systems. All of these together led to a surprise benefit that was not the goal of the project, but is certainly the best outcome: Our SaaS and self-hosted platforms are now released practically at the same time.
Ultimately, though, the results have exceeded our original expectations, making it easier to deploy, easier to maintain and quicker to release our self-hosted version. Those benefits extend to any environment, whether on-prem or cloud-based.
Dockerizing Applications: Lessons Learned
While development, testing and deployment went as expected, some interesting nuggets came up along the way. Here are a few issues to think about when dockerizing applications.
ulimits on CentOS
One of our first rollouts was on CentOS, since many of our self-hosted clients run on that OS. We immediately saw that Clickhouse couldn't write to disk as needed and expected. Our initial thought was that an obscure database configuration item needed correcting. However, the problem was the ulimits setting: we were surprised to find that the default is actually a little low. Once corrected, subsequent rollouts went off without a problem.
A quick word on future-proofing here: One method we considered was to manipulate the ulimit on the host, but we worried that each customer could have their own setup (even changing where and how ulimits was configured). Good news, though: the --ulimit setting for Docker containers overrides the default setting. That resolved any concerns we had for the deployed databases. Extra bonus: It's the last parameter in the override hierarchy, so it will always be the exact setting we need.
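For example (the exact limit value and image name are illustrative, not the numbers we ship), the override can be set directly on the run directive:

    # --ulimit overrides the daemon's default ulimits for this container,
    # regardless of how the host's limits are configured.
    docker run -d --name clickhouse \
      --ulimit nofile=262144:262144 \
      registry.example.com/onprem/clickhouse:21.8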
Eliminating Docker Network Overhead
One of our concerns (and probably one of yours) is the network latency/overhead introduced by adding the Docker layer. Since we are standardized on a single-host deployment model, we tapped into the host network directly, which removes the Docker networking layer as a potential bottleneck and with it any network overhead it could cause.
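A sketch of that setup (the image and container names are assumed for illustration):

    # --network host attaches the container to the host's network stack,
    # skipping the Docker bridge and NAT layer entirely; the service binds
    # its ports directly on the host.
    docker run -d --name instana-backend --network host \
      registry.example.com/onprem/backend:latest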
Binding Systemd Services
Talk about the law of unintended consequences: Operations users were so confident in the quality of the new rollout that they began pushing updates to the systems through their automated maintenance solutions. But you can't rely on the Docker restart policy alone to handle things properly when a mandatory startup order is required. To alleviate this and ensure that our components always start up healthy, we added systemd bindings to docker.service in our systemd units.
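A minimal sketch of such a binding, assuming a component is already managed by its own systemd unit (the unit name instana-clickhouse.service is illustrative):

    # /etc/systemd/system/instana-clickhouse.service.d/docker.conf
    # Drop-in that ties the component's unit to docker.service so it only
    # starts after Docker is up and is stopped/restarted along with it.
    [Unit]
    BindsTo=docker.service
    After=docker.service

After adding the drop-in, a systemctl daemon-reload makes systemd pick up the new dependency.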
Should You Dockerize Your Application?
After six months, we are ecstatic with the results, as are our clients and users. As mentioned earlier, we've been able to keep our SaaS and self-hosted platforms in lockstep through three months of releases, which has delivered some key benefits:
All customers have the latest (and new) capabilities available when they’re released.
Dev and Ops organizations all use a single development/deployment cycle.
Our support team rarely has to worry about a unique version in the field.
Our recommendation? Don’t wait! Dockerize applications right now!
What’s Next?
Now that we’ve standardized on Docker packaging, we’re going to tackle scalability and leverage the next part of the SaaS architecture: microservices. Spoiler alert: We’re going to allow deployment of the self-hosted platform with Kubernetes. We’ll let you know what we learn from that project in the near future.