
ANALYSIS

Realistic ‘Zero Trust’ for Your Cybersecurity Program

If you’re a cybersecurity practitioner, chances are good that you’ve encountered the term “zero trust” over the past few months. Whether you attend trade shows, keep current with the trade media headlines, or network with peers and other security pros, you’ve probably at least heard it mentioned.

Counterintuitively, all this attention from the industry at large can make understanding the concept — and potentially adapting it for your security program — more difficult than it otherwise would be.

Why? Because depending on whom you’re talking to, you’ll get a different answer about what it is, how you might employ it, and why it’s a useful way to think about your organization’s security posture. For example, talking to a network infrastructure vendor might elicit one answer, while talking to a managed security service provider, or MSSP, might net you another.

This is unfortunate, because “zero trust” itself can be a powerful way to reimagine your approach to security. It can be a powerful tool to help you select better tools, better harden internal resources against threats, and better define your control environment. With that in mind, following is a breakdown of what “zero trust” is, why it’s powerful, and how you might realistically adapt these principles to your security efforts.

What Is Zero Trust?

The “Zero Trust Model,” originally developed by John Kindervag of Forrester, is, at its core, not super difficult to understand. It refers to the amount of trust (i.e., zero) an organization places in the technology substrate where users interact with services, traffic flows, and business gets done.

Said another way, it is the philosophy — and the associated implications that derive from that philosophy — that everything on the network (whether inside the “perimeter” or outside of it) is explicitly untrusted, potentially hostile, and should be subjected to scrutiny before being relied upon.

One expedient way to understand this is by contrast with the longstanding perimeter-based models that organizations have espoused for decades. For example, consider an organization employing network segmentation to separate “good” internal network traffic from the “bad” traffic of the Internet. Under that model, anything on the internal side of the firewall — users, applications, and hosts — is assumed to be trustworthy, while anything on the other side is potentially hostile.
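
To make the contrast concrete, here is a minimal sketch of the perimeter-centric assumption, written in Python with an invented internal address range: trust is granted purely on the basis of where a connection originates.

    import ipaddress

    # Perimeter-model assumption: anything originating inside the "internal"
    # address space is treated as trustworthy by default.
    INTERNAL_NETWORK = ipaddress.ip_network("10.0.0.0/8")  # assumed internal range

    def is_trusted(source_ip: str) -> bool:
        """Trust decision based solely on network location (the perimeter model)."""
        return ipaddress.ip_address(source_ip) in INTERNAL_NETWORK

    print(is_trusted("10.20.30.40"))   # True: inside the perimeter, so implicitly trusted
    print(is_trusted("203.0.113.7"))   # False: outside, so untrusted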

The problem with that approach is that it fails to account for the fact that adversaries can sometimes breach that perimeter — or that sometimes internal nodes (or users) are less trustworthy than expected.

With “zero trust,” there is no “perimeter” — at least not as we think of it today. This is because the core assumption is that everything is hostile, potentially already compromised, or otherwise spurious. While this is a straightforward concept, the implications that follow from it are staggering and complex.

Since you can’t trust any given subset of traffic (for example, traffic between two “internal” addresses), it follows that you need to secure it all: Data needs to be kept confidential even from the devices next to it, access to resources needs to be gated against potentially hostile users, and each connection (regardless of source) needs to be monitored and inspected.

As a practical matter, constraining this practice to any single layer of the network stack undermines the core premise. Since users are assumed to have the potential to be problematic, the same way that hosts are, it’s necessary to implement application-aware controls and network-aware controls — and they need to work in tandem.
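
As a rough illustration, here is a sketch of what a single per-request decision might look like when a network-aware check and an application-aware check must both pass, with everything denied by default. The service, user, and permission names are hypothetical, and in practice the two policy tables would be fed by your network policy engine and IAM system rather than hard-coded.

    from dataclasses import dataclass

    @dataclass
    class Request:
        source_workload: str   # caller's workload identity (proven, e.g., via mTLS)
        user: str              # authenticated end-user identity
        operation: str         # application-level action being attempted
        target_service: str

    # Hypothetical policy tables, for illustration only.
    NETWORK_ALLOWLIST = {("billing-frontend", "billing-api")}
    APP_PERMISSIONS = {("alice", "billing-api", "read_invoice")}

    def authorize(req: Request) -> bool:
        """Both layers must pass; anything not explicitly allowed is denied."""
        network_ok = (req.source_workload, req.target_service) in NETWORK_ALLOWLIST
        app_ok = (req.user, req.target_service, req.operation) in APP_PERMISSIONS
        decision = network_ok and app_ok
        # Every decision is logged, regardless of whether the source is "internal."
        print(f"{req.source_workload} -> {req.target_service}: "
              f"network={network_ok} app={app_ok} allow={decision}")
        return decision

    authorize(Request("billing-frontend", "alice", "read_invoice", "billing-api"))  # allowed
    authorize(Request("random-host", "alice", "read_invoice", "billing-api"))       # denied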

In short, you’re securing internal services the same way that you’d approach securing a cloud service, business partner ingress point, or any other untrusted interface point.
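
One concrete way to treat an internal service like any other untrusted interface point is to require mutually authenticated TLS even between “inside” hosts. The following sketch uses Python’s standard ssl module; the certificate file names and the internal CA bundle are placeholders for whatever PKI you actually operate.

    import socket
    import ssl

    # Server context for an internal service that still refuses any client
    # unable to present a certificate issued by our internal CA.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED                                  # mutual TLS: client cert mandatory
    ctx.load_cert_chain(certfile="service.crt", keyfile="service.key")   # assumed paths
    ctx.load_verify_locations(cafile="internal-ca.pem")                  # assumed CA bundle

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()    # handshake fails without a valid client cert
            peer = conn.getpeercert()             # identity of the verified caller
            print("Accepted connection from", addr, "subject:", peer.get("subject"))
            conn.close()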

Practical Application

How does one implement this from a practical point of view? This is where the situation gets tricky. First, you can’t implement any single technology and “turn on” zero trust. Instead, since it’s a philosophy or mindset that defines your whole approach, implementation requires multiple technologies working together. These might include identity and access management (IAM) systems, network equipment, authentication mechanisms, operating system services, and numerous other technologies up and down the stack.

On the plus side, adopting the zero trust mindset may not require that you buy anything new — only that you rethink how you use what you already might have.

The challenge is that most existing networks, applications and other services were not designed using this mindset. Since wishing doesn’t make it so, adopting the mindset means it’s likely that everything you have in place now (with the possible exception of public cloud environments) will start to look hair-on-fire problematic.

A data center, for example, might be completely copacetic when viewed from a perimeter-centric point of view, but things could get very scary very quickly once you start assuming that you can’t trust any device or user within its scope.

Ultimately, there are two ways to approach practical implementation of zero trust. The first is to apply it fully to new environments. For example, if you’re migrating a data center to the cloud, implementing a containerized application deployment approach, or otherwise migrating existing environments, then applying a zero trust mindset to just those operations can be a good starting point.
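
For a greenfield environment such as a containerized deployment, one useful starting artifact is an explicit, default-deny map of which workloads may talk to which. The sketch below is illustrative only; the service names are invented, and in a real deployment the same intent would typically be enforced by your orchestrator’s network policies or a service mesh rather than by application code.

    # Explicit allowlist of caller -> callee edges for the new environment.
    # Anything not listed is denied; there is no "internal, therefore trusted."
    ALLOWED_EDGES = {
        ("web-frontend", "orders-api"),
        ("orders-api", "orders-db"),
        ("orders-api", "payments-api"),
    }

    def may_connect(caller: str, callee: str) -> bool:
        """Default-deny connectivity decision for the new environment."""
        return (caller, callee) in ALLOWED_EDGES

    assert may_connect("orders-api", "orders-db")
    assert not may_connect("web-frontend", "orders-db")   # no implicit lateral movement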

Just as you evaluated and selected controls in the past based on perimeter-defined assumptions, so too will you select the combinations of controls that will enforce your security goals from a zero trust point of view. The process is exactly the same — it’s just the set of assumptions you employ that is different.

Starting with a defined subset like this is beneficial because it can help you get familiar with looking at technology deployments in this way. Likewise, it can help you hone the combinations of technologies that you’ll use to re-address other legacy environments in the future.

Looking further down the road, you’ll want to start to incorporate the same approaches into legacy deployments that you might have, such as existing data centers, networks, applications, and so forth. As you deploy new systems, design new applications, and make changes to your environment, espouse the zero trust mindset. Your progress will be slow, but over time you will get closer to where you ultimately want to be.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Ed Moyle

Ed Moyle is general manager and chief content officer at Prelude Institute. He has been an ECT News Network columnist since 2007. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as author, public speaker and analyst.

