Seven ways the mature Docker container architecture changes the game
Source: techtarget.com
What does the architecture of a containerized system look like, and how have Docker container architectures evolved as Docker matures? The answer to this question can be found by examining the different layers of infrastructure required to host containerized applications.
What is a container architecture?
Simply put, a container architecture is the hardware and software infrastructure set up to host applications within containers.
Managing a Docker container architecture is different from running just a single container. Running a single container simply requires a host OS, a container image and Docker. There’s not much architecting involved.
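For example, assuming Docker is installed and you use a public image such as nginx from Docker Hub, a single container can be started with one command:

    # Pull the image (if needed) and run one container in the background,
    # publishing port 80 on the host
    docker run -d --name web -p 80:80 nginx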
If you want to run containerized applications in production, however, you’ll need much more than that. You’ll want to run many containers in order to ensure high availability of your application. You’ll need to distribute your application across multiple physical servers. You’ll require a way to automate the process of starting and stopping containers. And so on.
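As a rough sketch of what that looks like in practice, Docker’s built-in Swarm mode can keep a fixed number of replicas of a containerized service running across a cluster (this assumes a swarm has already been initialized; see the orchestrator discussion below):

    # Run three replicas of the same image across the swarm;
    # failed containers are rescheduled automatically
    docker service create --name web --replicas 3 -p 80:80 nginx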
The components you implement for performing these various tasks, and the way you connect them together, form your container architecture.
Docker container architecture today
In the early days, a Docker container architecture was relatively simple. It involved a Linux host OS, an orchestrator and, typically, a registry for storing container images. It didn’t generally include anything else because there was not much else to include in what was, at the time, a small and uncomplicated Docker ecosystem.
But that ecosystem expanded, and there were suddenly many more choices when building a Docker container architecture. Following are some of the decisions you’ll have to make, which reflect the way container architectures have evolved in the last year or so:
Hardware options. Containers can be hosted by virtual machines (VMs) or on physical bare-metal servers. Using VM hosts gives you some extra flexibility because you can move and copy virtual machine images fairly easily. But if you want your containerized apps to take full advantage of bare-metal hardware (something containers, unlike VMs, can do), you’ll need to run everything on bare metal. A third option is to host Docker containers on a system container platform, such as OpenVZ or LXD, which promises the flexibility of virtual machines with the performance of bare metal.
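Whichever route you take, it helps to confirm what your hosts are actually running on. On a systemd-based Linux host, for instance, a standard utility (not Docker-specific) reports whether you are on bare metal or inside a hypervisor:

    # Prints "none" on bare metal, or the hypervisor type (kvm, vmware, etc.) in a VM
    systemd-detect-virt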
Host OS. Until recently, Linux was the only operating system that could serve as a Docker host. That changed in fall 2016, when Docker announced official support for Windows Server 2016. As of now, Docker’s Windows support remains basic, and I would not recommend using Windows as a host for Docker production environments just yet. But expect it to become a realistic option in the relatively near future.
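If you do end up mixing Linux and Windows hosts, a quick way to confirm which operating system a given Docker daemon is running on is to query the daemon itself:

    # Reports "linux" or "windows" for the daemon the client is talking to
    docker info --format '{{.OSType}}'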
Cloud or on-premises. The advent of cloud-based containers as a service (CaaS) platforms like Amazon EC2 Container Service and Azure Container Service has made it easier than ever to deploy containers in the cloud. If you don’t want to use CaaS, you can always set up Docker on a virtual server in the cloud. But remember: Running containers in the cloud comes with all the general baggage associated with cloud infrastructure, including less control and the potential to run into compliance issues.
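If you go the do-it-yourself route on a cloud virtual server, installation is simple. One common approach on a fresh Linux VM is Docker’s convenience script (a sketch only; review the script before running it):

    # Download and run Docker's installation convenience script
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh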
Orchestrator. Any production-ready container cluster needs an orchestrator to provision and manage it. Fortunately, there are now lots of mature orchestrators to choose from. Swarm (which now comes built into Docker itself), Kubernetes and Mesos are leading options, but the list goes on. Unfortunately, the range of orchestrators you can realistically use may be constrained by the other components you choose to include in your Docker container architecture. This is particularly true if you use CaaS; in many cases, a given CaaS supports only certain orchestrators. For instance, Red Hat OpenShift runs only Kubernetes, and Docker Datacenter requires Swarm.
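To give a sense of how little setup a basic cluster now requires, here is a minimal Swarm sketch; the manager prints a join token that worker nodes use to join:

    # On the manager node
    docker swarm init --advertise-addr <MANAGER-IP>

    # On each worker node, using the token printed by 'swarm init'
    docker swarm join --token <TOKEN> <MANAGER-IP>:2377

    # Back on the manager, verify cluster membership
    docker node ls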
Image registry. Do you want to host your container images in the cloud? If so, a public registry service like Docker Hub is a good fit. For more privacy and control, you’ll want to consider an on-premises registry. There are plenty of on-premises registry options out there, including the open source Docker Registry (which you can run privately), VMware Harbor and the registries that come built into many CaaS platforms.
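A basic private registry is itself just a container. Here is a minimal sketch using the open source registry image, without the TLS and authentication you would add for real use:

    # Start a local registry listening on port 5000
    docker run -d --name registry -p 5000:5000 registry:2

    # Tag an existing image for the private registry and push it
    docker tag nginx localhost:5000/my-nginx
    docker push localhost:5000/my-nginx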
Security tools. Your container architecture is not complete enough for production use if it’s not secure. To keep it safe from cyberattacks, you need to lock down each layer of the architecture by selecting and implementing the right tools. For example, image scanners such as Clair can help secure container images inside registries. Security-Enhanced Linux (SELinux) and AppArmor can harden the Docker daemon on your Linux host. And container-centric security vendors like Aqua Security Software are now rolling out products that promise to protect the entire Docker container architecture at once.
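You can also check which kernel-level protections the host already applies to containers and tighten individual containers with run-time flags. The commands below are a generic hardening sketch, not a complete security setup:

    # Show which security options (AppArmor, SELinux, seccomp) the daemon is using
    docker info --format '{{.SecurityOptions}}'

    # Run a container with a read-only root filesystem and all Linux capabilities dropped
    docker run --rm --read-only --cap-drop ALL alpine sleep 60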
Monitoring. Last but not least, your architecture should include a way to monitor containerized apps and aggregate logs. The way you do this will depend in part on which other components you have in your infrastructure. If you run containers on the Amazon Web Services public cloud, for example, CloudWatch is the native logging and monitoring option. But you could also deploy monitoring tools designed specifically for containers. cAdvisor is a popular open source choice, and vendors like New Relic and Netuitive offer container monitoring as well.
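Docker itself ships with a basic, point-in-time resource view, and cAdvisor runs as a container alongside your workloads. The cAdvisor invocation below follows the project’s documented pattern; the exact image name and mounts may vary by version:

    # Built-in snapshot of CPU, memory and I/O usage per container
    docker stats --no-stream

    # Run cAdvisor to expose per-container metrics on port 8080
    docker run -d --name cadvisor -p 8080:8080 \
      -v /:/rootfs:ro \
      -v /var/run:/var/run:ro \
      -v /sys:/sys:ro \
      -v /var/lib/docker/:/var/lib/docker:ro \
      gcr.io/cadvisor/cadvisor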
So, now you know that today’s Docker container architecture consists of various moving pieces. That means you have a lot of choices to make when designing your container stack. And as the container ecosystem becomes even larger, the number of choices is poised to increase further still.