How to Secure Your Kubernetes Deployments
Source: darkreading.com
As more companies shift their software to a microservices-based architecture and orchestrate their containerized applications in Kubernetes, distributed security controls become a must.
At a time when almost every company is to some degree a software company, digital transformation and cloud adoption are not just strategic but critical to enterprise success. Whether companies were born into the cloud or are just setting foot into it, it’s important to know that the traditional security practices of firewall-based network segmentation are no longer dependable in this new frontier.
Indeed, the effectiveness of traditional firewalls is fundamentally minimized by the scale and elasticity of cloud infrastructure, virtual private cloud networks, and cloud-native applications, and by the many stakeholders that build, ship, and operate those applications.
In cloud environments, and in Kubernetes specifically, the threat and risk model should account for internally originating threats that are already present inside one of the running components, such as a rogue software library imported for use or a container image pulled from an untrusted source.
Kubernetes has solid native security controls compared to other open platform-native technologies or even proprietary virtual machine-based platforms. Kubernetes offers flexible authentication machinery, mature role-based access control (RBAC) for authorization, fine-grained controls on how processes run, validation of resources before they are admitted into a Kubernetes cluster, and adaptive east-west network segmentation of pods (colocated containers).
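To make a couple of these native controls concrete, here is a minimal RBAC sketch; the role name, namespace, and service account below are hypothetical placeholders chosen for illustration, not anything prescribed by the article.

```yaml
# Sketch: a namespaced Role granting read-only access to pods, bound to a
# hypothetical service account "app-sa" in an illustrative "demo" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # placeholder name
  namespace: demo           # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
- kind: ServiceAccount
  name: app-sa              # placeholder service account
  namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```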
Implementing fine-grained microservices network segmentation has a high impact: it reduces the attack surface and limits an attacker's ability to pivot from one component to another, exfiltrate data, or perform other forms of lateral movement.
Microsegmentation Management
Undeniably, one of the biggest challenges with microsegmentation is managing it over time. As of Kubernetes v1.8, the native network policy API is generally available, with the following characteristics:
• By default, Kubernetes workloads (pods) are not isolated; pods accept traffic from any source, and pods are allowed to send traffic to any destination.
• Kubernetes network policy semantics enable only east-west (cluster-internal) segmentation, plus the ability to specify Classless Inter-Domain Routing (CIDR) blocks; the policy syntax does not support domain names or domain wildcards.
• Kubernetes NetworkPolicy captures application intent by specifying how groups of pods are allowed to communicate with each other and other network endpoints (CIDR).
• Kubernetes NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to the selected pods (see the example after this list).
• The Kubernetes Container Network Interface (CNI) plugin must support the network policy API in order to enable network policy enforcement. Popular choices include Calico and Flannel (the latter is typically paired with Calico for policy enforcement), as well as cloud provider CNI plugins that leverage the cloud service provider's virtual private cloud (VPC) networking. The recommended plugins are listed in the Kubernetes documentation.
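To illustrate the label-selection model called out in the list above, here is a minimal NetworkPolicy sketch; the `app: frontend` and `app: backend` labels, the `demo` namespace, and port 8080 are illustrative assumptions rather than anything specified by the article.

```yaml
# Sketch: allow ingress to pods labeled app=backend only from pods labeled
# app=frontend, and only on TCP port 8080. Labels and namespace are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```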
Right off the bat, one simple policy you can set to flip the open-by-default paradigm and close your pods off to traffic is a deny-all policy, also known as blacklisting; blacklisting a pod denies all traffic to and from other pods. The best practice is to blacklist all of your pods with a default deny-all policy, which changes the namespace's default to reject all traffic, and then add network policies that explicitly allow communication between pods as needed, also known as whitelisting.
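A default deny-all policy for a namespace can be as small as the following sketch; the empty podSelector matches every pod in the namespace, and the `demo` namespace name is again illustrative.

```yaml
# Default deny-all: an empty podSelector matches every pod in the namespace,
# and declaring both policy types blocks all non-whitelisted ingress and egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo           # illustrative namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```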
Additional network security configuration is available in the form of special resource annotations that control which traffic sources (network blocks) are allowed into the cluster through load balancers and the layer-7 proxy (Kubernetes Ingress). These annotations are consumed by a Kubernetes cloud controller, the glue layer between Kubernetes and the underlying platform the cluster runs on. The cloud controller programs the underlying VPC network security configuration, as well as the load balancers, in accordance with those annotations.
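As a rough sketch (not the article's exact configuration), a Service of type LoadBalancer can carry source-range restrictions that the cloud controller programs into the provider's load balancer. The exact provider-specific annotations vary by cloud, and the CIDR below is a documentation-range placeholder.

```yaml
# Sketch: restrict which source CIDR blocks may reach this LoadBalancer Service.
# The cloud controller translates this into the provider's firewall / security
# group rules. Labels, ports, and the CIDR are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
  loadBalancerSourceRanges:
  - 203.0.113.0/24          # example corporate egress range (RFC 5737 doc prefix)
```

Many providers also accept provider-specific annotations on the Service or Ingress for the same purpose; the exact annotation keys depend on the cloud controller in use.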
Not Far Enough
While this seems like a healthy set of network security controls, Kubernetes' native controls alone are not sufficient:
• Workloads (pods) that run on the host network are not subject to the network policies configured in the cluster.
• Kubernetes policies are additive and adhere to a whitelisting approach; the API lacks even basic semantics for drop (deny) actions in network policy rules. Such extensions can be achieved with third-party tools and open source projects such as Calico (a sketch follows this list).
• Workloads that require access to resources outside the cluster, where those resources are addressed by domain endpoints (such as databases or SaaS services like Slack), cannot be segmented on their egress paths.
• Identity-based access controls are not addressed by the Kubernetes native controls and require sidecar-based proxies to establish such controls.
• Kubernetes infrastructure does not expose policy violation statistics or logs, which means the data that intrusion detection and prevention systems rely on is absent.
• The domain name system (DNS), Kubernetes' underlying service discovery, is open by default for every pod in the cluster. This means exfiltration methods such as DNS tunneling, or abuse of inherent weaknesses in the DNS protocol, require specialized network security analysis to detect anomalies and threats.
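The missing drop-action semantics noted in the list above can be addressed with Calico's own policy model. The following is a minimal sketch, assuming Calico is the installed CNI and its API server (or calicoctl) is available; the policy name and the metadata-service CIDR are illustrative choices, not taken from the article.

```yaml
# Sketch: a Calico GlobalNetworkPolicy with an explicit Deny action, something
# the native Kubernetes NetworkPolicy API does not offer. Applied via calicoctl
# or the Calico API server. The target CIDR (cloud metadata endpoint) is illustrative.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-egress-to-metadata
spec:
  order: 10                  # lower order values are evaluated first
  selector: all()            # applies to all workloads in the cluster
  types:
  - Egress
  egress:
  - action: Deny
    destination:
      nets:
      - 169.254.169.254/32   # cloud instance metadata service
  - action: Allow            # other egress traffic remains allowed by this policy
```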
Take Control of Your Own (Security) Fate
Kubernetes is still relatively new and can have a steep learning curve. Ultimately, understanding that Kubernetes is open by default is the most important step you can take toward securing your cloud-native applications and preventing unwanted traffic. With this understanding, you can change the default and take control of the traffic flowing through your application.