Securing Docker Containers: A Primer
Source: containerjournal.com
There are many challenges when building an application, but one of the most crucial is making sure it’s secure. Whether storing hashed passwords, sanitizing user inputs or constantly updating package dependencies to the latest and greatest, the effort to attain a secure application is never-ending. While containerization has made it easier to ship better software faster, there are still plenty of considerations to take into account when securing your infrastructure.
More specifically, if an application uses Docker as its container platform, there are several baseline tactics that help ensure it’s configured with security in mind. When it comes to protecting Docker containers from malicious actors, following these guidelines (broadly applicable to any containerization solution!) will help improve your cloud security.
Treat It Like a Real Machine
Many make the assumption that because processes within a container are isolated, they are inherently secure. However, multiple applications can run within a single host and, as such, it’s important to grant access to resources following the principle of least privilege.
Just as in a “traditional” desktop environment, that means making sure directories and files are not world-writable, running containers as a non-root user and using features such as namespaces or cgroups to control who has access to what.
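As a minimal sketch, a Dockerfile can create an unprivileged user and switch to it before the application starts, so nothing in the container runs as root (the base image, user name and file paths here are illustrative):

    FROM alpine:3.19
    # Create an unprivileged group and user for the application
    RUN addgroup -S app && adduser -S -G app app
    # Copy application files owned by that user, so they are not world-writable
    COPY --chown=app:app ./app /home/app
    WORKDIR /home/app
    # Drop root privileges; everything from here on runs as the unprivileged user
    USER app
    CMD ["./start.sh"]

Alternatively, an existing image can be started as a non-root user at run time with the docker run --user flag.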
Only Use Images From Trusted Sources
Open source is great, isn’t it? If one needs to bring an OS, language or application image into a Docker container, chances are that someone on the internet has already made one. However, an image pulled from a public repository can look like one thing on the outside but be something completely different once downloaded. Open source places a lot of trust in the author, and even if you go through the image line by line, you don’t know for certain whether corrupted files are included.
Luckily, there are plenty of reputable sources to fetch from, so don’t run the risk of grabbing something that isn’t hosted in a validated package registry. Docker Hub, for example, automatically scans images and even provides certification that ensures an image is legitimate.
That said, the data breach formally addressed at DockerCon 19, which impacted 190,000 accounts, suggests that implicitly trusting Docker Hub isn’t enough. For production environments, you will want your Docker client to enforce content trust. This restricts users to images that have been cryptographically signed by the image authors. You can read more about how it works in Docker’s content trust documentation.
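As a quick sketch of what that looks like in practice, content trust is toggled with an environment variable on the Docker client, after which pulls of unsigned images are rejected (the image tag below is only an example):

    # Enable Docker Content Trust for the current shell session
    export DOCKER_CONTENT_TRUST=1
    # Pulls now succeed only if the image has a valid signature
    docker pull alpine:3.19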
Use Docker Bench for Security
It’s inevitable that the likelihood of security issues increases as time goes on. Rather than attempting to stay on top of these problems, why not have a script do that for you?
Docker Bench for Security is an official tool from the Docker team. It’s a small bash script that checks whether Docker containers are deployed following recommended best practices. IT infrastructure teams can easily incorporate it into their existing continuous integration (CI) workflow, either by adding it to a docker-compose.yml file or by cloning the repository and running the provided shell script.
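Assuming a setup along the lines of the project’s README, running the script by hand looks something like this:

    # Clone the official repository and run the audit script on the Docker host
    git clone https://github.com/docker/docker-bench-security.git
    cd docker-bench-security
    sudo sh docker-bench-security.sh

The script prints a pass/warn result for each check, which makes it easy to fail a CI job when new warnings appear.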
Limit Direct Access to Kubernetes Nodes
Kubernetes is a popular platform for orchestrating Docker containers across a fleet of hosts. Just as with any part of your network, access to the servers that run Kubernetes should be limited to a very small subset of technical administrators. As such, organizations should set up identity and access management policies that guarantee this.
Preventing secure shell (SSH) access to the nodes entirely also mitigates the risk of unauthorized access to the Kubernetes host machines. If developers need to run commands against a node, they should do so using kubectl exec. This grants them direct access to the container’s environment without the ability to access the host itself.
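For example, rather than SSH-ing into a node, a developer can open a shell inside a running container (the pod and namespace names here are placeholders):

    # Open an interactive shell inside the pod's container, not on the host
    kubectl exec -it my-app-pod --namespace my-team -- /bin/sh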
Isolating Nodes Within Kubernetes
Since several nodes can run within a single cluster, and Kubernetes can manage several different clusters, it’s essential to limit the scope of permissions between the clusters. That way, if one container or cluster is compromised, others in the fleet won’t be affected.
Kubernetes offers namespaces to partition shared resources into groups. Resources from one namespace can be hidden from other namespaces. By default, every resource is grouped into a namespace called “default,” so it’s important to look at your architecture and readjust the resource allocations as necessary. Kubernetes’ authorization plugins can help to create policies that divide resources into namespaces that are shared between different users. That way, every group can have an allocation explicitly defined, which provides assurance that no compromised machine will chew up your cloud service bill or affect your users.
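As a rough sketch, a dedicated namespace combined with a ResourceQuota caps what one team’s workloads can consume (the names and limits are illustrative):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        pods: "20"            # at most 20 pods in this namespace
        requests.cpu: "4"     # total CPU requested across all pods
        requests.memory: 8Gi  # total memory requested across all pods

Applying this with kubectl apply -f helps keep a compromised or runaway workload in one namespace from starving the rest of the cluster.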
Protecting the Network
Even after verifying how the Docker container runs and locking down access to Kubernetes, it’s important to create some kind of segmentation within the overall network. The goal is to limit any cross-cluster communication to continue reducing the effect of any potentially exploited vulnerability.
Kubernetes provides documentation on defining automatic firewall rules between the dynamic IPs of containers. Beyond that, setting up Ingress configurations also allows one to define which services can be explicitly accessed via an HTTP API.
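A minimal NetworkPolicy sketch along those lines might allow only the frontend pods to reach the backend pods on their service port (the labels, namespace and port are assumptions for illustration):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
      namespace: team-a
    spec:
      podSelector:
        matchLabels:
          app: backend          # the pods this policy protects
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend # only frontend pods may connect
          ports:
            - protocol: TCP
              port: 8080

Note that such a policy only takes effect if the cluster’s network plugin supports NetworkPolicy.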
Getting More Help
This has only scratched the surface of how to secure the Docker containers that run applications, and every operational layer has different considerations to acknowledge. There are plenty of additional resources online with even more best practices: in terms of automation, Snyk has guidelines on integrating best practices into your CI pipeline, and this nifty cheat sheet breaks down how to defend against the different types of potential exploits. The bottom line is that while applications inside containers are isolated, they are not invincible. Docker enables developers to make changes quickly, but that flexibility also brings additional security considerations. Stay safe!