Using OPA to safeguard Kubernetes

Open Policy Agent addresses Kubernetes authorization challenges with a full toolkit for integrating declarative policies into any number of application and infrastructure components

As more and more organizations move containerized applications into production, Kubernetes has become the de facto approach for managing those applications in private, public and hybrid cloud settings. In fact, at least 84% of organizations already use containers in production, and 78% leverage Kubernetes to deploy them, according to the Cloud Native Computing Foundation.

Part of the power and allure of Kubernetes is that, unlike most modern APIs, the Kubernetes API is intent-based, meaning that people using it only need to think about what they want Kubernetes to do — specifying the “desired state” of the Kubernetes object — not how they want Kubernetes to achieve that goal. The result is an incredibly extensible, resilient, powerful, and hence popular system. The long and short of it: Kubernetes speeds app delivery.

However, changes in a cloud-native environment are constant by design, which means that runtime is extremely dynamic. Speed plus dynamism plus scale is a proven recipe for risk, and today’s modern environments do indeed introduce new security, operational, and compliance challenges. Consider this: How do you control the privilege level of a workload when it only exists for microseconds? How do you control which services can access the internet — or be accessed — when they are all built dynamically and only as needed? Where is your perimeter in a hybrid cloud environment? Because cloud-native apps are ephemeral and dynamic, the attack surface and the requirements for securing it are considerably more complex.

Kubernetes authorization challenges

Moreover, Kubernetes presents unique challenges regarding authorization. In the past, the simple word “authorization” referred to which people can perform which actions, or “who can do what.” In containerized apps, that concept has expanded to also cover which software or which machines can perform which actions, aka “what can do what.” Some analysts are starting to use the term “business authorization” for account-centric rules and “infrastructure authorization” for everything else. And when a given app has a team of, say, 15 developers, but is made up of dozens of clusters, with thousands of services and countless connections between them, it’s clear that “what can do what” rules are more important than ever, and that developers need tools for creating, managing, and scaling these rules in Kubernetes.

Because the Kubernetes API is YAML-based, making an authorization decision means analyzing an arbitrary chunk of YAML: the chunk that defines the configuration of each workload. For instance, enforcing a policy such as “ensure all images come from a trusted repository” requires scanning the YAML to find the list of containers, iterating over that list, extracting each image name, and string-parsing it. Another policy, such as “prevent a service from running as root,” requires scanning the YAML for the list of containers, iterating over that list to check each container-specific security setting, and then combining those settings with global security parameters. Unfortunately, no legacy “business authorization” access control solution (think role-based or attribute-based access controls, IAM policies, and so on) is powerful enough to enforce policies as basic as these, or to govern something as simple as changing the labels on a pod. Those systems simply were not designed to do so.
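
To make this concrete, here is a simplified sketch of both policies in OPA’s policy language, Rego, written as if OPA were running as a Kubernetes validating admission controller. The registry name registry.example.com and the exact field paths are illustrative assumptions, not a canonical implementation.

```
package kubernetes.admission

# Deny any pod whose container image does not come from the trusted registry.
deny[msg] {
  input.request.kind.kind == "Pod"
  image := input.request.object.spec.containers[_].image
  not startswith(image, "registry.example.com/")
  msg := sprintf("image %q is not from the trusted registry", [image])
}

# Deny any pod with a container that is not explicitly barred from running as root.
deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  not container.securityContext.runAsNonRoot
  msg := sprintf("container %q must set securityContext.runAsNonRoot to true", [container.name])
}
```

Each rule iterates over the containers in the incoming object and produces a denial message when a container violates the policy, which is exactly the scan-iterate-extract work described above, expressed declaratively.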

Even in the rapidly evolving world of containers, one thing has remained constant: Security is often pushed to the end. Today, DevOps and DevSecOps teams are striving to shift security left in the development cycle, but without the proper tools they are often left to identify and remediate security and compliance issues much later on. To truly meet the time-to-market goals of a DevOps process, security and compliance policy must be implemented much earlier in the pipeline. Security policy works best when risk is addressed in the early phases of development, so that security concerns are less likely to surface at the end of the delivery pipeline.

Yet not all developers are security experts, and manually reviewing every YAML configuration is a guaranteed path to failure for already overburdened DevOps teams. But you shouldn’t have to sacrifice security for efficiency. Developers need security tooling that speeds development by providing hard guardrails that eliminate missteps and risk, ensuring that their Kubernetes deployments remain in compliance. What’s needed is a way to improve the overall process that benefits developers, operations, security teams, and the business itself. The good news is that there are solutions built to work with modern pipeline automation and “as-code” models that reduce both error and exhaustion.

Enter Open Policy Agent

Increasingly, the preferred “who can do what” and “what can do what” tool for Kubernetes is Open Policy Agent (OPA). OPA, created by Styra, is an open-source, domain-agnostic policy engine that serves as a standalone rules engine for business and infrastructure authorization. Developers often find OPA to be a perfect match for Kubernetes because it was designed around the premise that you sometimes need to write and enforce access control policies (and plenty of other policies) over arbitrary JSON or YAML. As a policy-as-code tool, OPA increases speed and automation in Kubernetes development while improving security and reducing risk.

In fact, Kubernetes is one of the most popular use cases of OPA. If you don’t want to write, support, and maintain custom code for Kubernetes, you can use OPA as a Kubernetes admission controller and put its declarative policy language, Rego, to good use. For instance, you can take all of your Kubernetes access control policies — which are typically stored in wikis and PDFs and in people’s heads — and translate them into policy-as-code. That way, those policies can be enforced directly on the cluster, and developers running apps on Kubernetes don’t need to constantly refer to internal wiki and PDF policies while they work. This leads to fewer errors and eliminates rogue deployments earlier in the development process, all of which results in higher productivity.
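
For instance, a rule that might otherwise live only in a wiki, such as “every deployment must be labeled with the team that owns it,” could be captured as a short Rego admission rule along these lines. The “owner” label key and the field paths are hypothetical, chosen purely for illustration.

```
package kubernetes.admission

# Hypothetical internal rule, moved from a wiki page into policy-as-code:
# "Every Deployment must carry an 'owner' label naming the responsible team."
deny[msg] {
  input.request.kind.kind == "Deployment"
  not input.request.object.metadata.labels.owner
  msg := sprintf("Deployment %q is missing the required 'owner' label", [input.request.object.metadata.name])
}
```

Once a rule like this is loaded into OPA, the cluster rejects non-compliant deployments at admission time, so developers get immediate feedback instead of a post-hoc review.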

Another way that OPA can help address the unique challenges of Kubernetes is with context-aware policies: policies that condition the decision made for one resource on information about all the other Kubernetes resources that exist. For example, you might want to avoid accidentally creating an application that steals another application’s internet traffic by using the same ingress. In that case, you could create a policy to “prohibit ingresses with conflicting hostnames,” requiring that any new ingress be compared against existing ingresses. More importantly, OPA ensures that Kubernetes configurations and deployments comply with internal policies and external regulatory requirements, a win-win-win for developers, operations, and security teams alike.
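
A simplified sketch of that ingress policy might look like the following, assuming existing Ingress objects have been replicated into OPA (for example, by the kube-mgmt sidecar) so the rule can compare the incoming ingress against them.

```
package kubernetes.admission

import data.kubernetes.ingresses

# Deny a new ingress whose hostname collides with an ingress that already
# exists anywhere in the cluster. Assumes existing Ingress objects are
# replicated into OPA under data.kubernetes.ingresses[namespace][name].
deny[msg] {
  some namespace, name
  input.request.kind.kind == "Ingress"
  newhost := input.request.object.spec.rules[_].host
  oldhost := ingresses[namespace][name].spec.rules[_].host
  newhost == oldhost
  msg := sprintf("host %q conflicts with existing ingress %v/%v", [newhost, namespace, name])
}
```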

Securing Kubernetes across hybrid cloud

Oftentimes, when people say “Kubernetes,” they’re really referring to the applications that run on top of the Kubernetes container management system. That’s also a popular way to use OPA: have OPA decide whether microservice or end-user actions are authorized within the application itself. When it comes to Kubernetes environments, OPA offers a full toolkit for testing, dry-running, auditing, and integrating declarative policies into any number of application and infrastructure components.
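
As a minimal sketch of that pattern, a microservice can ask OPA on each request whether an end-user action is allowed. The policy below assumes the service passes OPA an input document containing the request method, path, user, and group memberships; that input shape is an integration choice made by the application team, not something OPA or Kubernetes dictates.

```
package httpapi.authz

# Deny every request unless a rule below allows it.
default allow = false

# Allow users to read their own salary record.
allow {
  input.method == "GET"
  input.path == ["finance", "salary", input.user]
}

# Allow anyone in the admin group to perform any action.
allow {
  input.groups[_] == "admin"
}
```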

Indeed, developers often expand their use of OPA to enforce policies and increase security across all of their Kubernetes clusters, particularly in hybrid cloud environments. For that, a number of users also leverage Styra DAS, which helps validate OPA security policies before runtime to see their impact, distribute them to any number of Kubernetes clusters, and continuously monitor those policies to ensure they’re having their intended effect.

Regardless of where organizations are on their cloud-native and container journeys, what’s clear is that Kubernetes is now the standard for deploying containers in production. Kubernetes environments bring new, unique challenges that organizations must solve to ensure security and compliance in their cloud and hybrid-cloud environments — but solutions do exist to limit the need for ground-up thinking. For solving these challenges at speed and scale, OPA has emerged as the de facto standard for helping companies mitigate risk and accelerate app delivery through automated policy enforcement.

Tim Hinrichs is a co-founder of the Open Policy Agent project and CTO of Styra. Before that, he co-founded the OpenStack Congress project and was a software engineer at VMware. Tim has spent the last 18 years developing declarative languages for different domains such as cloud computing, software-defined networking, configuration management, web security, and access control. He received his Ph.D. in Computer Science from Stanford University in 2008.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2020 IDG Communications, Inc.