Chris Hughes
Contributing Writer

7 tenets of zero trust explained

Feature
Jul 27, 2021 | 6 mins
Network Security | Zero Trust

Cut through the hype. NIST's core zero trust elements provide a practical framework around which to build a zero trust architecture.

Conceptual image of a network labeled 'Zero Trust.'
Credit: Olivier Le Moal / Shutterstock

There’s no shortage of definitions of zero trust floating around. You’ll hear terms such as principles, pillars, fundamentals, and tenets. While there is no single definition of zero trust, it helps to have a shared understanding of the concept. For that reason, the National Institute of Standards and Technology (NIST) published NIST SP 800-207, Zero Trust Architecture, which describes the following seven tenets of zero trust.

1. All data sources and computing services are considered resources

Gone are the days of considering only endpoint user devices or servers as resources. Networks today consist of a dynamic array of resources, from traditional items such as servers and endpoints to more ephemeral cloud computing services such as function-as-a-service (FaaS), which may execute with specific permissions against other resources in your environment.

For all data and computing resources in your environment, you must ensure you have basic (and, where warranted, advanced) authentication controls in place, as well as least-permissive access controls. Feeding into subsequent tenets, all these resources are communicating to some extent and can provide signal context to help drive decisions made by the architectural components in zero trust, which are discussed in tenet 7.

2. All communication is secured regardless of network location

In zero trust environments, the concept of zero trust network access (ZTNA) is implemented. This contrasts with traditional remote access paradigms where a user may authenticate to a VPN and then have unfettered access within/across a network.

In a ZTNA environment, the access policy is instead default-to-deny. Explicit access must be granted to specific resources. Furthermore, users operating in ZTNA environments won’t even have awareness of applications and services within environments unless those explicit grants of access exist. It is hard to pivot to something you don’t know exists.
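The default-to-deny posture can be sketched in a few lines. This is a minimal illustration, not a real ZTNA implementation; the identities, resource names, and hard-coded policy store are all hypothetical (a real deployment would consult a policy engine):

```python
# Minimal sketch of a default-to-deny (ZTNA-style) access check.
# The hard-coded policy store and names below are hypothetical;
# real deployments query a policy engine, not a dict.

# Explicit grants: identity -> set of resources it may reach.
POLICY = {
    "alice": {"payroll-app"},
    "bob": {"wiki", "git"},
}

def is_allowed(identity: str, resource: str) -> bool:
    """Deny unless an explicit grant exists (default-to-deny)."""
    return resource in POLICY.get(identity, set())

def visible_resources(identity: str) -> set:
    """A user only 'sees' the resources they were explicitly granted."""
    return POLICY.get(identity, set())
```

Note that an unknown identity, or a known identity asking for an ungranted resource, falls through to a denial without any special-case logic; that is the essence of default-to-deny.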

Today’s geographically dispersed workforce, a trend accelerated by the COVID pandemic, has made tenet 2 even more critical for organizations, which now have large portions of their workforce accessing internal resources from many locations and devices.

3. Access to individual enterprise resources is granted on a per-session basis

“Just like seasons, people change.” This saying is even more true for digital identities. Given the dynamic nature of distributed computing environments, cloud-native architectures, and a distributed workforce constantly exposed to a barrage of threats, trust should not extend beyond a single session.

This means that just because you trusted a device or identity in a previous session doesn’t mean you inherently trust them for subsequent sessions. Each session should involve the same rigor to determine the threat posed by the device and identity to your environment. Anomalous behavior associated with a user, or a change in a device’s security posture, are among the changes that may have occurred since the last session and should be evaluated each time to dictate whether access is granted and to what extent.
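The per-session idea can be sketched as re-running the same checks for every new session rather than caching a prior decision. The risk signals and access tiers below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    """Snapshot of signals at the moment THIS session is requested."""
    user_risk: str        # e.g., "low" or "high" from behavior analytics
    device_patched: bool  # device posture at the time of the request

def evaluate_session(ctx: SessionContext) -> str:
    """Decide access for this session only; nothing carries over."""
    if ctx.user_risk == "high":
        return "deny"
    if not ctx.device_patched:
        return "limited"  # e.g., browser-only access until patched
    return "full"
```

A device that earned "full" access yesterday gets "limited" today if it has since missed a patch, because the decision is recomputed from current signals rather than remembered.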

4. Access to resources is determined by dynamic policy—including the observable state of client identity, application/service, and the requesting asset—and may include other behavioral and environmental attributes

Modern computing environments are complex and extend well beyond an organization’s traditional perimeter. One way to cope with this reality is to make use of what are known as “signals” to make access control decisions within your environments.

One great way to visualize this is through Microsoft’s Conditional Access diagrams. Access and authorization decisions should take signals into consideration. These could be things such as the user and location, the device and its associated security posture, real-time risk, and application context. These signals should support decision-making processes such as granting full access, limited access, or no access at all. Based on the signals, you can also take additional measures, such as requesting higher levels of authentication assurance through multi-factor authentication (MFA), or limit the level of access granted.
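A simple way to see how signals combine into a decision is a scoring sketch. The signal names, weights, and thresholds here are purely illustrative assumptions, not any vendor’s actual algorithm:

```python
def access_decision(signals: dict) -> dict:
    """Combine illustrative signals into an access decision.

    The weights and thresholds are invented for the sketch; a real
    policy engine would use far richer, continuously updated inputs.
    """
    risk = 0
    if signals.get("location") == "unfamiliar":
        risk += 1
    if not signals.get("device_compliant", False):
        risk += 2
    if signals.get("realtime_risk") == "high":
        risk += 3

    if risk >= 3:
        return {"access": "none"}
    if risk >= 1:
        # Step up assurance (e.g., require MFA) instead of denying outright.
        return {"access": "limited", "require_mfa": True}
    return {"access": "full", "require_mfa": False}
```

The middle branch captures the point made above: moderately risky signals don’t have to mean denial; they can instead trigger a higher level of authentication assurance.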

5. The enterprise monitors and measures the integrity and security posture of all owned and associated assets

In the zero trust model, no device or asset is inherently trusted. Every resource request should trigger a security posture evaluation. This includes continuously monitoring the state of all assets that have access to the environment, whether they are owned by the organization or by another entity. It also includes quickly applying patches and vulnerability remediations based on insight gained from that ongoing monitoring and reporting. Going back to the earlier example regarding access on a per-session basis, the device posture can be examined to ensure it doesn’t have critical vulnerabilities present and isn’t lacking important security fixes and patches.

From this dynamic insight and monitoring of both the integrity and the security posture of owned and associated assets, policies and decisions can be made about the level of access to grant, if any.
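A posture gate like the one described above might look like the following sketch. The fields and the 30-day patch window are assumptions chosen for illustration:

```python
def posture_ok(device: dict, max_patch_age_days: int = 30) -> bool:
    """Illustrative posture gate: no critical vulnerabilities and
    reasonably recent patches. Missing data fails closed (zero trust)."""
    return (
        device.get("critical_vulns", 1) == 0
        and device.get("days_since_patch", 999) <= max_patch_age_days
    )
```

Note the fail-closed defaults: a device that reports nothing about its vulnerabilities or patch level is treated as untrusted, consistent with the tenet that no asset is inherently trusted.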

6. All resource authentication and authorization are dynamic and strictly enforced before access is allowed

As discussed in the previous tenets, granting access and trust occurs in a dynamic and ongoing fashion. This means a continuous cycle of scanning devices and assets, using signals for additional insight, and evaluating trust before access decisions are made. The process doesn’t stop once a user creates an account with associated permissions to resources. It is an iterative one, with a myriad of factors coming into play in each policy enforcement decision.

7. The enterprise collects as much information as possible about the current state of assets, network infrastructure and communications and uses it to improve its security posture

Technology environments are subject to myriad threats, and enterprises must maintain a continuous monitoring capability to ensure they are aware of what is occurring within their environments. Zero trust architecture is made up of three core components, as described in NIST SP 800-207 as well as in an excellent blog post from Carnegie Mellon’s Software Engineering Institute:

  • The policy engine (PE)
  • The policy administrator (PA)
  • The policy enforcement point (PEP)

Core zero trust components

The information collected about the current state of assets, network infrastructure, and communications is used by these core architectural components to enhance decision making and to avoid approving risky access requests.
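The division of labor among the three components can be sketched as follows. This is a hypothetical illustration of the control flow only, with invented request fields; NIST SP 800-207 defines the components’ responsibilities, not any particular implementation:

```python
# Hypothetical sketch of how the three core components interact:
# the PEP forwards the request, the PE decides, and the PA
# establishes or denies the session based on that decision.

class PolicyEngine:
    """Makes the grant/deny decision from collected signals."""
    def decide(self, request: dict) -> bool:
        # Stand-in logic; a real PE weighs many continuously updated signals.
        return request.get("risk", "high") == "low"

class PolicyAdministrator:
    """Establishes or shuts down the session per the PE's decision."""
    def __init__(self, engine: PolicyEngine):
        self.engine = engine
    def handle(self, request: dict) -> str:
        return "session-established" if self.engine.decide(request) else "session-denied"

class PolicyEnforcementPoint:
    """Sits in the data path; enforces only what the PA instructs."""
    def __init__(self, admin: PolicyAdministrator):
        self.admin = admin
    def request_access(self, request: dict) -> str:
        return self.admin.handle(request)
```

The key design point the sketch preserves is separation of concerns: the enforcement point in the data path never makes trust decisions itself; it only carries out what the decision-making components instruct.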

Zero trust is a journey

A common mistake many organizations make is thinking of zero trust as a destination: if they just buy the right tool, they will have implemented zero trust within their environments. This isn’t how it works. Of course, tools can help implement aspects of zero trust and move your organization closer to a zero trust architecture, but they are not a panacea. As with most things in IT and cybersecurity, zero trust is a matter of people, processes, and technology.

As laid out in the National Security Agency (NSA) publication Embracing a Zero Trust Security Model, the leading recommendations include approaching zero trust from a maturity perspective. This includes initial preparation and basic, intermediate, and advanced stages of maturity, as described by the NSA.


With that said, the first step is preparation: identifying where you are, where your gaps exist, and how your architecture, practices, and processes align with the zero trust tenets laid out above; then creating a plan to address those gaps; and, most importantly, accepting that it will take time.


Chris Hughes currently serves as the co-founder and CISO of Aquia. Chris has nearly 20 years of IT/cybersecurity experience, ranging from active duty time with the U.S. Air Force to civil service with the U.S. Navy and the General Services Administration (GSA)/FedRAMP, as well as time as a consultant in the private sector. In addition, he is an adjunct professor for M.S. cybersecurity programs at Capitol Technology University and University of Maryland Global Campus. Chris also participates in industry working groups such as the Cloud Security Alliance’s Incident Response Working Group and serves as the membership chair for Cloud Security Alliance D.C. Chris also co-hosts the Resilient Cyber Podcast. He holds various industry certifications, such as the CISSP and CCSP from ISC2, as well as both the AWS and Azure security certifications. He regularly consults with IT and cybersecurity leaders from various industries to assist their organizations with their cloud migration journeys while keeping security a core component of that transformation.
