Aligning Cloud Security to the Cybersecurity Exec Order

It’s encouraging to see alignment between the Biden administration and industry around the critical nature of cybersecurity, along with pragmatic steps forward. The White House’s executive order (EO) on improving the nation’s cybersecurity and last month’s follow-up meeting with industry leaders are both important milestones.

As technology touches every aspect of our lives in the United States, we’ve grown to expect (and sometimes take for granted) that critical infrastructure is “always on.” Whether it’s power for alarm clocks and water for showers to start our day, or internet for video conferences and the cellular network we can’t imagine living without, we expect it to be instantly available when we need it. As security professionals, we have long realized that consistent controls are fundamental to minimizing disruption. This heightened collaboration between government and the private sector creates room for optimism.

The move to cloud and containers brings a generational shift in application development methodologies, creating a unique opportunity to reduce risk by getting security right.

A few factors reduce risk in the cloud:

  • The cloud vendors secure the hardware infrastructure and some aspects of software infrastructure as part of the shared security model. They are specialists, so the risk is lower than when individual organizations with less expertise handle security on their own.
  • Using code to automate repetitive tasks, such as defining security policies for software images and infrastructure, minimizes human error. When unexpected changes, or drift, in configurations or policies are identified, they can be fixed in code so changes are ‘remembered’ going forward. DevOps and GitOps incorporate tooling that automates many steps in the deployment of cloud infrastructure and applications. These tools can also be used to define security policies-as-code.
  • Containers are black boxes that interact with other services but do not expose what’s happening inside, which reduces opportunities for attackers. Container images must be scanned for known vulnerabilities to confirm the exterior cannot be easily penetrated. The downside of this opaque nature is that traditional security tools cannot gain visibility into container activity to investigate security alerts, so security tools must be designed specifically for container runtime threat detection.
  • The long-standing goal of building security into development and operations processes is finally being realized with the move to secure DevOps. DevOps teams are taking ownership of and responsibility for identifying and fixing vulnerabilities as part of the CI/CD pipeline. Tools such as Kubernetes admission controllers allow security teams to define policies that DevOps teams implement to guide developers on which vulnerabilities they must fix before images are deployed to production (see the sketch after this list). Runtime security rules continue this automation of policy by flagging anomalous behavior and compliance issues.
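
As a simple illustration of policy-as-code, the following minimal sketch (in Python, with hypothetical finding data and thresholds, not any specific vendor’s API) fails a pipeline stage or denies a deployment when an image scan reports findings above the severity threshold the security team has defined:

```python
# A minimal policy-as-code gate. The findings format, threshold and helper
# names are illustrative assumptions, not a specific scanner's schema.
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}  # defined by the security team

def image_passes_policy(findings):
    """Return True only if no finding meets the blocking severity threshold."""
    blockers = [f for f in findings if f.get("severity") in BLOCKING_SEVERITIES]
    for f in blockers:
        print(f"blocked by policy: {f['id']} ({f['severity']}) in {f['package']}")
    return not blockers

if __name__ == "__main__":
    # Example findings as a scanner might report them (hypothetical data).
    findings = [
        {"id": "CVE-2021-3156", "severity": "HIGH", "package": "sudo"},
        {"id": "CVE-2020-8492", "severity": "MEDIUM", "package": "python3"},
    ]
    if not image_passes_policy(findings):
        raise SystemExit(1)  # fail the CI stage / deny the deployment
```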

How Cloud and Container Security Differs

  • Kubernetes and the CI/CD pipeline are built on open source, and security tools are moving to open source technologies as well. A key tenet of the executive order is to remove barriers to the sharing of threat information. Open source software shares detection rules, compliance frameworks and core pieces of intellectual property that accelerate innovation and drive standardization. Speed of innovation is critical to improving security capabilities. Open source projects, based on an industry standard, may have access to data that proprietary tools do not. For example, Falco (an open source CNCF project created by Sysdig) has unique access to system activity in AWS Fargate serverless containers for the purpose of threat detection. Similarly, the extended Berkeley Packet Filter (eBPF) provides a standard way to instrument the Linux kernel, giving many vendors a standard avenue for that access.
  • Deep visibility across containers and clouds requires a different approach. Instrumenting Linux syscalls provides visibility a layer below the container, capturing all activity for both runtime security and incident response. The cloud vendors provide logs that capture activity across their services. This is helpful because these vendors add services frequently; by using their logging services, you automatically have coverage for any new services they offer. Tools are available to alert on unusual activity based on these cloud logs (see the sketch after this list).
  • Security responsibilities are shared with the DevOps team, so a secure DevOps approach is fundamental. In some ways, this is similar to the way security teams work with network teams in some organizations: the policies are typically defined by the security team, and the DevOps team implements those policies into the build, run and respond stages of the life cycle.
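
As one example of alerting on cloud logs, the sketch below uses boto3 to look up a few sensitive AWS CloudTrail events from the last 24 hours. It assumes AWS credentials are already configured; the event names are illustrative and should be tuned to your environment:

```python
# A minimal sketch of reviewing AWS CloudTrail for sensitive activity.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
# Events that often signal tampering with logging or storage exposure.
sensitive_events = ["DeleteTrail", "StopLogging", "PutBucketAcl"]

start = datetime.now(timezone.utc) - timedelta(hours=24)
for name in sensitive_events:
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
        StartTime=start,
    )
    for event in resp.get("Events", []):
        print(event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```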

The Biden administration has tasked the National Institute of Standards and Technology (NIST) with defining security measures for critical software use. The good news is most security professionals agree that the NIST guidelines are common sense. NIST defined the objectives for security measures as follows:

  • Protect the software from unauthorized access and usage
  • Protect the confidentiality, integrity and availability of the data used by the software
  • Protect the software and platform from exploitation
  • Quickly detect, respond to and recover from threats and incidents involving the software
  • Strengthen the understanding and performance of human actions to foster security

The security measures can be mapped into a checklist as you address cloud and container security:

Continuously Check Cloud Permissions and Configurations

Inventory the cloud services your organization is using, and confirm daily that they are configured according to best practices and that users and services have the appropriate level of permissions. Inspect cloud logs continuously to check for unexpected changes to configurations and permissions, and for unusual behavior that could indicate unauthorized access.
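
A daily check like this can be automated with a short script. The sketch below, assuming boto3 and configured AWS credentials, flags S3 buckets without a public access block and IAM users without MFA; the specific checks are illustrative and should be extended to match your own best-practice baseline:

```python
# A minimal daily configuration and permissions check for one AWS account.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
iam = boto3.client("iam")

# Flag buckets that have no public access block configured.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"FINDING: bucket {name} has no public access block")
        else:
            raise

# Flag IAM users without an MFA device.
for user in iam.list_users()["Users"]:
    if not iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]:
        print(f"FINDING: IAM user {user['UserName']} has no MFA device")
```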

Scan Container Images for Vulnerabilities and Risky Configurations

Scan software images for vulnerabilities and enforce a consistent policy for requiring fixes based on vulnerability severity in both OS and non-OS packages. Scan everywhere you have images, including registries and pipelines, and check for newly identified vulnerabilities in images already running in production. Be sure to scan hosts as well as containers. Check images for risky configurations, such as containers running as root; Dockerfile best practices help you identify these configuration issues.
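
The sketch below shows what one pipeline step might look like. It assumes the Trivy CLI and Docker are available locally; any scanner with JSON output could be substituted, the image name is a placeholder, and the severity policy shown is illustrative:

```python
# A minimal pipeline step: scan an image and check for a root-user config.
import json
import subprocess
import sys

IMAGE = sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"  # placeholder image

# Scan for HIGH/CRITICAL vulnerabilities in OS and application packages.
scan = subprocess.run(
    ["trivy", "image", "--format", "json", "--severity", "HIGH,CRITICAL", IMAGE],
    capture_output=True, text=True, check=True,
)
data = json.loads(scan.stdout)
# Trivy's JSON layout varies by version; handle both dict and list forms.
results = data.get("Results", []) if isinstance(data, dict) else data
vulns = [v for r in results for v in r.get("Vulnerabilities") or []]

# Check a risky configuration: does the image run as root by default?
user = subprocess.run(
    ["docker", "image", "inspect", IMAGE, "--format", "{{.Config.User}}"],
    capture_output=True, text=True, check=True,
).stdout.strip()
runs_as_root = user in ("", "0", "root")

if vulns or runs_as_root:
    print(f"{len(vulns)} blocking vulnerabilities; runs as root: {runs_as_root}")
    sys.exit(1)  # fail the pipeline stage
```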

Limit Communication between Containers

NIST recommends taking a zero-trust or least-privilege approach to limit risk in several areas. Kubernetes network policies let you apply this model by restricting communication between pods, services and applications. Auditing failed connection attempts can also help you spot suspicious network activity.
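
A default-deny ingress policy is a common starting point for zero-trust networking in Kubernetes. The sketch below applies one using the official Kubernetes Python client; the namespace name is an assumption, and explicit allow policies would then be layered on per service:

```python
# A minimal default-deny ingress NetworkPolicy applied via the Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

default_deny = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],                # no ingress rules = deny all
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="production",  # assumed namespace
    body=default_deny,
)
```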

Detect Threats and Respond to Alerts

Once you have taken all preventive measures, detection of threats is still required. You need a modern intrusion detection system that alerts on anomalous behavior that could be malicious. Mapping alerts to a framework your security operations center (SOC) team uses for incident response, such as the MITRE ATT&CK framework, can improve efficiency.
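
A lightweight way to do this mapping is to enrich each alert with an ATT&CK technique ID before it reaches the SOC. In the sketch below the rule names and mapping are hypothetical, while the technique IDs are real ATT&CK identifiers:

```python
# A minimal sketch of enriching runtime alerts with MITRE ATT&CK technique IDs.
ATTACK_MAP = {
    "Terminal shell in container": "T1059",   # Command and Scripting Interpreter
    "Launch privileged container": "T1610",   # Deploy Container
    "Container escape attempt": "T1611",      # Escape to Host
}

def enrich(alert):
    """Attach the ATT&CK technique ID the SOC expects, or mark it unmapped."""
    alert["attack_technique"] = ATTACK_MAP.get(alert["rule"], "unmapped")
    return alert

print(enrich({"rule": "Terminal shell in container", "pod": "api-7f9c"}))
```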

It is important to capture a detailed record of exactly what happened before, during and after an event. This visibility can be particularly challenging with containers, but it is critical because containers are often short-lived. By correlating information across containers and cloud services, based on Linux syscall data for containers and cloud vendor logs, you can understand how threats move laterally through your environment.
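
One way to build that record is to pull container events and cloud audit events into a single timeline around an alert. The sketch below assumes both streams have already been normalized to dictionaries with a UTC time field; the event data shown is illustrative only:

```python
# A minimal sketch of correlating container and cloud events around an alert.
from datetime import datetime, timedelta

def within(events, center, window=timedelta(minutes=5)):
    """Keep only events inside the time window around the alert."""
    return [e for e in events if abs(e["time"] - center) <= window]

alert_time = datetime(2021, 9, 1, 14, 3, 12)
container_events = [
    {"time": datetime(2021, 9, 1, 14, 2, 58), "source": "syscall",
     "detail": "curl spawned in pod api-7f9c"},
]
cloud_events = [
    {"time": datetime(2021, 9, 1, 14, 4, 5), "source": "cloudtrail",
     "detail": "GetSecretValue by role api-task"},
]

timeline = sorted(
    within(container_events, alert_time) + within(cloud_events, alert_time),
    key=lambda e: e["time"],
)
for e in timeline:
    print(e["time"], e["source"], e["detail"])
```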

As you move to the cloud and deploy applications in containers using Kubernetes, the general guidelines for security are consistent with those followed for on-premises environments. By using tools and processes designed for secure DevOps best practices, you can implement these guidelines and manage risk without slowing down application delivery.


Janet Matsuda

Janet Matsuda is the Chief Marketing Officer at Sysdig. She is a computer scientist turned marketer who has participated in creating categories that transformed product development and IT delivery. Prior to joining Sysdig, Matsuda served in marketing and business leadership roles at Palo Alto Networks, Blue Coat Systems, and SGI. Matsuda holds a B.S. in computer science from the University of Iowa and an M.B.A. from Harvard Business School.

