We all know that some things are easier to do than others. In fact, what separates an average manager from a great one is the ability to balance decisions based on two almost totally unrelated sets of criteria: ease of accomplishment on the one hand vs. value to the organization on the other.
Think about it this way: A manager who focuses only on the quick-to-accomplish “low-hanging fruit” isn’t going to last long, because he or she is not focusing on what’s critical to the organization and of the highest value. In other words, these folks might get a lot of individual tasks done, but what they’re doing isn’t in line with what the organization most needs. On the other hand, a manager who can’t show progress to the next tier up (because they’re focusing only on complex, slow-moving, high-visibility initiatives) isn’t likely to last long either.
We need to show progress while at the same time understanding that sometimes the things that are most important take a while and need doing too. In short, it’s all about moving forward: rapidly on those things that are easy to do, while staying mindful that there are forces that can legitimately make our projects take longer. For example, it’s natural that a complex project with lots of moving parts will take longer than a simple one.
However, sometimes organizations get caught in a dynamic where they can’t make any progress at all. No matter how simple or complex the work, they become locked in a “death spiral” where absolutely nothing gets done. It’s true, and breaking out of it can be nigh-on impossible.
Think of it like the inertia we all learned about back in Physics 101: we’re not able to move for whatever reason, and inaction today facilitates (or in some cases actually ensures) further inaction tomorrow.
Information Security Inertia
First of all, it’s important to recognize that setbacks and inertia are not the same thing. There are always forces that are going to slow down progress on any particular project, and we shouldn’t start freaking out just because we’ve run into an unexpected dependency or another situation that slows us down.
For example, we might run into a situation where we want to pen-test a particular environment, but we find out we need to get permission from the system owners first. That’s just a dependency — and once we overcome it, we move forward again at about the same pace we did before.
Another example: We want to build a disaster recovery plan, but we need the business impact analysis results to come in first — and those folks are behind schedule. Again, it’s not an ideal scenario, because it impacts our dates and leaves us waiting on someone or something we didn’t originally expect. However, all of these things are a natural part of projects.
Real project “inertia,” on the other hand, is a self-reinforcing, systemic culture of inaction. It stops work from happening today and guards against it getting done in the future. Whereas a setback slows us down for a limited period of time, inertia mires us in place and leaves us stagnant — unable to proceed until a major shake-up knocks everything loose.
It’s easiest to explain this by example. Imagine an organization erroneously deciding that a particular regulation or standard is not applicable to its business — perhaps a hospital or educational institution deciding that PCI is not in scope. Those familiar with PCI might point out that accepting credit cards via a gift shop/cafeteria/bookstore/charitable contributions office translates directly into a requirement to comply with PCI; those people would be right, but not everyone’s there yet.
Since our hypothetical university or hospital has already decided that PCI is out of scope, it has forgone taking specific steps to comply in the short term. At the same time, efforts to become compliant in the future have become incrementally harder. That’s because there are now barriers to overcome that weren’t there before: key decision-makers would need to be convinced that they were wrong in the first place, word of the incorrect conclusion would need to be socialized throughout the organization, and any documentation stating that PCI is not in scope would need to be superseded.
Inertia isn’t always the result of a single “bad call,” either. Another example is staff failing to track their work through a ticketing system (logging effort in the system is usually one of the first things to go when staff are overworked). However, since the ticketing system is what measures their output, they might appear under-utilized and have even more work assigned to them. Today’s inability to meet all the expectations of their role lays the groundwork for an inability to meet the same or other expectations down the road.
So what tends to happen? Inertia builds. No action gets taken today, and it’s less likely to occur tomorrow. The situation compounds over time, getting worse and worse until “boom” — nothing gets done without herculean effort.
This situation is non-ideal anywhere in the business, but it is particularly concerning when it relates to InfoSec. Why? Because the root cause is still there. In our example, the organization is not PCI-compliant (thereby putting itself at risk), and since the issue is unlikely to get fixed in the future (the organization has locked itself into inaction), the situation is actually worse than if the scoping call had never been made at all.
Recognizing the Warning Signs
Unfortunately, because inertia is usually cultural, it’s very difficult to fix using any kind of top-down strategy; going from a culture where action has been stalled for months or years to a culture of dynamic activity just isn’t going to happen overnight (at least not without turning over your whole department). As such, it’s much easier to prevent inertia than it is to fix it once it occurs.
That’s why it’s so important to keep an eye out for the warning signs so you can identify an inertia situation when it occurs. Some signs will be unique to the specific organization and will vary, but others are near-certain indicators of inertia in almost any organization.
Let’s run through a few of the common signs:
- Repeated Stalled Attempts: Some initiatives are destined to fail. However, if you notice that folks a) recognize an area for improvement, b) have a strong desire to change it, but c) are unable to do so on multiple occasions, this may indicate a situation where cultural factors are making progress more difficult.
- Ego-Based Decision Making: In the PCI example above, the factor that cemented the organization’s inability to move was how undesirable it was to point out the flaw in the original decision. An environment that puts too much focus on executive ego is more likely to foster inertia. Why? Because staff don’t want to be the “bad guy” who points out the flaws, so they wear blinders to potential problems … and say nothing.
- Lack of Written Process: Organizations that rely heavily on “tribal knowledge” vs. documentation and codification of knowledge are more likely to experience inertia. Once consensus is reached, staff will assume the matter is “put to bed”; in the absence of documentation outlining the original assumptions and thought processes, the tribal-knowledge assumption holds sway.
- Inappropriately Lean Staffing: Nowadays, running lean (some might say “being understaffed”) is the rule rather than the exception. But breaking free of inertia means questioning longstanding assumptions; staff who are at or near the red line in terms of workload have less ability to reflect on those assumptions, less time to look for ways to make things better, and are therefore less likely to raise areas of concern.
Hopefully, knowing that this kind of situation can occur — and knowing what signs to look for — can put you on guard against information security inertia within your own organization.
Ed Moyle is currently a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.