If you’re in IT and your job involves securing your organization’s infrastructure, you’ve probably spent a good deal of time thinking through control selection — in other words, picking the controls that most directly help you accomplish the goal of securing your environment. And you’ve probably spent an equally large amount of your and your staff’s time evaluating how the controls you’ve selected perform.
Specifically, think about the effort involved in picking controls. Risks shift, new technologies emerge, and regulatory requirements evolve, so the needs are always changing. Because it can take a year or more to budget for and implement a particular control once you’ve identified that you need it, you’re probably always in the process of deploying some control somewhere.
I’m not saying that every shop is the same here. Quite the contrary — how this process happens varies significantly from organization to organization. For example, some organizations use a risk assessment process to pick controls, some base decisions on customer demand, others on regulatory requirements, etc. But the fundamental fact is that selecting and deploying controls is both continuous and time-consuming.
The same is true for keeping controls running effectively. Think about the effort that goes into validating that individual controls are performing up to par: manual and automated testing of individual controls (e.g. vulnerability scanning or penetration testing), and simulated drills or other exercises that gauge performance during an emergency or incident response situation. It also includes the effort spent collecting and analyzing metrics about how well controls perform over time.
The point is, selecting the right controls for our business and making sure they’re doing what we expect is a significant amount of work. And since these two tasks are non-discretionary and time-consuming, sometimes we don’t have time to address other hidden dimensions of control operations. This situation is understandable, but for managers who wish to run as tight a ship as possible, it’s important to understand that there are other things to consider as well.
Effectiveness and Coverage Are Important, but Not the Whole Story
To illustrate what I mean, consider for a moment two very different strategies to implement the same control: in this example, manual vs. automated user account provisioning, role assignment, and deprovisioning.
For the manual process, say that there’s a team of admins whose job it is to create user accounts when employees join, disable user accounts when employees leave, and assign and remove access to applications and systems based on changes in employee role, status or job function. On the automated side, imagine a system that consumes automated feeds from HR and payroll and uses that information to automatically create user accounts when employees come on board, disable them when they leave, and modify roles based on function. For the sake of argument, assume that, in aggregate, both systems perform about equally well (i.e. mistakes happen under both, but they happen at about the same rate).
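To make the automated side of this comparison concrete, here is a minimal sketch of what an HR-feed-driven provisioning process might look like. Everything in it — the event names, the `Provisioner` class, the in-memory directory — is hypothetical and exists only to illustrate the shape of the control, not any particular product:

```python
from dataclasses import dataclass, field

# Hypothetical event types an HR/payroll feed might emit.
HIRE, TERMINATE, ROLE_CHANGE = "hire", "terminate", "role_change"

@dataclass
class Account:
    username: str
    roles: set = field(default_factory=set)
    enabled: bool = True

class Provisioner:
    """Applies HR feed events to a (simulated) user directory."""

    def __init__(self):
        self.directory = {}

    def apply(self, event):
        kind, user = event["type"], event["user"]
        if kind == HIRE:
            # Create the account with roles mapped from job function.
            self.directory[user] = Account(user, set(event.get("roles", [])))
        elif kind == TERMINATE:
            # Disable rather than delete, preserving an audit trail.
            self.directory[user].enabled = False
        elif kind == ROLE_CHANGE:
            # Replace role assignments to match the new job function.
            self.directory[user].roles = set(event["roles"])

# A sample feed: one employee is hired, changes role, then departs.
feed = [
    {"type": HIRE, "user": "asmith", "roles": ["payroll"]},
    {"type": ROLE_CHANGE, "user": "asmith", "roles": ["finance"]},
    {"type": TERMINATE, "user": "asmith"},
]
p = Provisioner()
for event in feed:
    p.apply(event)
```

The manual process performs these same three steps, but with an admin reading a ticket instead of code reading a feed — which is exactly why the two implementations can be equally effective while differing sharply in cost and resiliency.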
Keeping both of these implementations in mind, ask yourself how they’re different. That they differ is obvious, but specifically, how? They both address the same objective (ensuring personnel access is appropriate), and they both perform about as well. But what about cost dynamics: how expensive is each control to operate? What about resiliency: how does each process fare in light of external factors like employee attrition? What about user experience and satisfaction?
The point is that they’re not the same, even though many security programs will view them as the same in every way that matters, by virtue of the risks they address and their effectiveness. This myopic view, when you expand it over all the controls in your security program, has a few unfortunate ramifications. Without evaluating the efficiency of controls, you’re probably paying more than you need to in order to keep them operational. Without evaluating the resiliency of controls to change, controls are probably disrupted when employees leave or when processes change.
Unless you focus on these other areas as well, the security program can’t be at its best. If you want to move to a higher level of sophistication, you’ll need to start looking at other dimensions of your controls: not just how they address risks (appropriateness) and how well they work (effectiveness), but other aspects as well. Namely, you’ll need to address control maturity.
Understanding Maturity
So say you want to expand your understanding of your organization’s control implementations along other dimensions. How do you start?
There are a near-infinite number of ways to go about this, but one solid way to begin is by borrowing a page from the playbook of folks who’ve made a science of understanding process maturity. You’ve heard of CMMI — the Capability Maturity Model Integration that’s helped improve process maturity in areas like software and hardware development? Similar methods of evaluation and refinement apply to security. Specifically, the SSE-CMM (Systems Security Engineering Capability Maturity Model, standardized as ISO/IEC 21827) is designed for exactly this purpose. It includes both target areas to consider as well as a model for evaluating and scoring maturity.
The challenge with employing the SSE-CMM is that you might already be using a different framework to select, evaluate and implement your controls. For example, maybe you’re using the control objectives from ISO/IEC 27001:2005 Annex A or a similar catalog like NIST SP 800-53; maybe you’re using controls specifically required by a regulatory or industry requirement like HIPAA or PCI DSS; or maybe you’re using a harmonized framework like the Unified Compliance Framework. In that case, you may be disinclined to create a whole new, parallel framework just for maturity. If so, you can borrow maturity designations from CMMI, SSE-CMM or COBIT to “layer on” an assessment of process maturity in the same tracking artifacts you use when selecting controls and evaluating their effectiveness.
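What “layering on” maturity might look like in practice: below is a minimal sketch in which each entry in a control-tracking artifact carries a maturity level alongside its effectiveness score. The level labels follow CMMI’s maturity scale; the control IDs, names, and numbers are invented for illustration and don’t come from any real catalog:

```python
# CMMI maturity level labels; you could substitute SSE-CMM or COBIT
# designations here -- the scale you adopt is an organizational choice.
MATURITY = {1: "Initial", 2: "Managed", 3: "Defined",
            4: "Quantitatively Managed", 5: "Optimizing"}

# A hypothetical tracking artifact: each control carries both an
# effectiveness score (from your existing testing and metrics)
# and a maturity level (the layered-on dimension).
controls = [
    {"id": "CTL-01", "name": "User provisioning",
     "effectiveness": 0.92, "maturity": 2},
    {"id": "CTL-02", "name": "Vulnerability management",
     "effectiveness": 0.88, "maturity": 4},
]

def below_maturity_floor(controls, floor=3):
    """Controls that may perform well today but sit below the target
    maturity level -- candidates for process improvement rather
    than replacement."""
    return [c for c in controls if c["maturity"] < floor]

for c in below_maturity_floor(controls):
    print(f"{c['id']} {c['name']}: "
          f"level {c['maturity']} ({MATURITY[c['maturity']]})")
```

The payoff of tracking both dimensions is visible in the example: CTL-01 scores well on effectiveness, yet its low maturity flags it as fragile — exactly the kind of finding an effectiveness-only view would miss.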
The point is, if you’re going to put in the effort to select controls and track how they perform over time, you should also spend some time thinking through the maturity of their implementation. You’ll be glad you did.