Portable Security Policies: A DevSecOps Primer
Source: devops.com
Protecting critical data and applications is a challenge under any circumstances, but it’s especially daunting when resources reside in the cloud. Most organizations today operate a significant portion of their workloads in the cloud, which adds to the complexity of the security problem—a security team can’t fully control cloud environments but is responsible for securing workloads and applications running there.
Cybercriminals are exploiting the situation. They’re becoming more aggressive and ingenious, taking advantage of the confusion about who is responsible for which aspects of security under the Shared Responsibility Model. Adding to the chaos are the differences among cloud providers’ security offerings and capabilities. Put simply, in most cases the cloud provider is responsible for the security of the cloud while the customer is responsible for security in the cloud, which means security and networking teams must still ensure their organizations’ workloads and applications are free from malware and other tampering.
To accomplish this, security and networking teams can’t rely solely on firewalls, threat detection and vulnerability patching to secure vital digital assets. Security tools developed for traditional data centers are not effective at securing workloads in the cloud due to the cloud’s dynamic nature. Security and networking teams must rethink their approach.
Unfortunately, many organizations are still relying on old and ineffective strategies: further hardening the perimeter, investing in additional threat detection and adding new tools and controls that focus on identification rather than prevention. This approach has its place, but it does nothing to prevent malicious actors from moving laterally across the network once they gain a foothold, which leaves cloud-hosted apps and workloads extremely vulnerable.
Instead, security teams need to enable zero trust in cloud environments, which treats all internal communications as untrusted by default. With zero trust, only authorized applications and services are allowed to send and receive communications, and only in specific ways governed by the principle of least privilege.
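To make the idea concrete, the following Python sketch shows what a default-deny, least-privilege allowlist might look like. The service names, ports and rules are hypothetical examples, not drawn from any particular product.

```python
# A minimal sketch of a default-deny allowlist governed by least privilege.
# The service names, port numbers and rules below are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    source: str       # identity of the calling service
    destination: str  # identity of the receiving service
    port: int         # the one port this pair is allowed to use


# Explicit allowlist: any communication not listed here is denied by default.
ALLOWED = {
    Rule("orders-api", "payments-svc", 8443),
    Rule("payments-svc", "ledger-db", 5432),
}


def is_allowed(source: str, destination: str, port: int) -> bool:
    """Permit a connection only if an explicit rule authorizes it."""
    return Rule(source, destination, port) in ALLOWED


# Default deny in action: an unlisted path is rejected even between
# services that are otherwise "internal" and implicitly trusted today.
assert is_allowed("orders-api", "payments-svc", 8443)
assert not is_allowed("orders-api", "ledger-db", 5432)
```

The point of the sketch is the default: nothing communicates unless a policy explicitly says it may, which is the inverse of the traditional trusted-internal-network model.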
The Second Layer
Traditional tools aren’t effective at enabling zero trust in the cloud because security and operations teams have very limited ability to manage all layers of the Open Systems Interconnection (OSI) model in cloud environments. Specifically, the lack of Layer 2 controls in infrastructure-as-a-service (IaaS) environments makes tools developed for discrete networks all but useless for locking down ports and controlling or scrutinizing IP addresses.
Even if an organization manages to implement these tools in the cloud, they cannot provide the level of granularity required to stop malicious lateral movement. For example, when on-premises tools are repurposed for cloud environments, they usually rely on subnets to define the boundaries of communication. Because cloud subnets are designed to handle elastic workloads, they tend to be far larger than those in on-premises environments. As a result, cloud workloads are more exposed than necessary.
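The gap in granularity is easy to see with a back-of-the-envelope comparison. The CIDR blocks below are illustrative assumptions, not figures from the article.

```python
# Sketch of why subnet-scoped rules over-expose cloud workloads.
# The CIDR blocks are illustrative examples only.
import ipaddress

# An on-premises allow rule might be scoped to a small /28 (14 usable hosts).
on_prem_subnet = ipaddress.ip_network("10.1.20.0/28")

# A cloud subnet sized for elastic scaling is often far larger, e.g. a /20.
cloud_subnet = ipaddress.ip_network("10.1.16.0/20")

# Even if only two workloads actually need to talk, a subnet-based rule
# opens the path for every address in the block.
print(f"On-prem rule exposes {on_prem_subnet.num_addresses - 2} hosts")  # 14
print(f"Cloud rule exposes {cloud_subnet.num_addresses - 2} hosts")      # 4094
```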
Securely migrating workloads and applications to a cloud environment is a huge and cumbersome job when relying on traditional tooling. The best way to make workloads portable and secure is to decouple workload protection from the underlying infrastructure. In this way, organizations can be certain that any policies applied on-premises can migrate to the untrusted environment of the cloud, without the complexity of mapping changing network addresses, writing policy exceptions or losing fine-grained control during the process.
Decoupling workload protection from the network requires abandoning a network address-based approach in favor of one based on identity. By using the immutable properties of workloads, security teams can create cryptographic identities for applications and services, which can then be used to determine what is allowed to send and receive communications. Not only does this model provide stronger security, it also works regardless of where applications are located, because these identities are unaffected when network elements change. What’s more, they can be constructed to tolerate upgrades. By building identity-based policies, security teams can create microperimeters around applications that follow workloads wherever they go, with every attempted communication across those boundaries verified against policy.
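Here is one way such an identity-based check might be sketched. A SHA-256 fingerprint stands in for a full cryptographic identity (such as a certificate), and every name and digest is a made-up example.

```python
# Sketch of an identity-based policy. Identities are derived from immutable
# workload properties (here, a service name and image digest), so they are
# unaffected by IP changes, rescheduling or migration. All values are examples,
# and the SHA-256 fingerprint stands in for a full cryptographic identity.
import hashlib


def workload_identity(service_name: str, image_digest: str) -> str:
    """Derive a stable identity from properties that do not change at runtime."""
    return hashlib.sha256(f"{service_name}|{image_digest}".encode()).hexdigest()


# The policy references identities, never network addresses or subnets.
billing_id = workload_identity("billing-svc", "sha256:9f2a7c1e")
reports_id = workload_identity("reports-svc", "sha256:4c7d08b3")
ALLOWED_PAIRS = {(reports_id, billing_id)}


def may_communicate(source_id: str, destination_id: str) -> bool:
    """Verify the pair on every attempt; where the workloads run is irrelevant."""
    return (source_id, destination_id) in ALLOWED_PAIRS


# The same check holds on-premises or in the cloud, because nothing in the
# policy depends on IP addresses, subnets or network location.
assert may_communicate(reports_id, billing_id)
assert not may_communicate(billing_id, reports_id)
```

Because the policy never mentions an address, it migrates with the workload: redeploying the same image under the same service name in a different environment yields the same identity and the same allowed paths.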
Implementing network-agnostic policies doesn’t just harden security. Software developers benefit as well. Without the roadblock of translating application-speak to network-speak, developers can build software and applications in any environment favorable to their workflow while defining policies to protect their work. Once the software is built, they can securely deploy the finished product to the production environment—even if it is hosted by a third party.
No organization should have to compromise security to benefit from the efficiencies offered by the cloud and ephemeral auto-scaling containers. Building identity-based policies that enable zero trust and are independent of the underlying network ensures that companies never have to consider making that trade-off.