This is the first of ten or so posts that describe each of the
Ten Dimensions of Cyber Security Performance. The aim of these posts is to convey:
- What's included and not included in each
- Why these are performance dimensions, not just functions or categories
- How these dimensions are interrelated as a dynamical system to enable agile and continually innovative cyber security
As a visual heuristic, I'm going to build these dimensions into a composite block diagram where the position of each block is intended to signify something important about how it is related to the others. But I'm intentionally keeping the diagram simple so a lot of details about interaction and dependency are omitted.
Context
Here's a simplistic context diagram that will serve as the foundation. All it says is that cyber security for an organization involves Actors who interact with Systems, and this interaction leads to Events. (A minimal code sketch of this context follows the definitions below.)
- Actors are people inside or outside the organization, or people acting through systems (computers, devices, etc.). Actors could also be organizations. Actors can be legitimate, illegitimate/malicious, or anything in between. This includes Actors who might make errors.
- Systems are combinations of technology, processes, and information, and also the people and physical facilities that provide direct support.
- Events are outcomes, results, incidents, etc., including both good and bad, intentional and unintentional, beneficial and detrimental.
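For readers who prefer code to diagrams, here is a minimal sketch of this Actor/System/Event context as a data model. The class names, fields, and intent categories are my own illustrative assumptions, not part of any formal schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class ActorIntent(Enum):
    """Actors can be legitimate, malicious, or anything in between."""
    LEGITIMATE = "legitimate"
    MALICIOUS = "malicious"
    UNKNOWN = "unknown"


@dataclass
class Actor:
    """A person or organization, possibly acting through systems; may also make errors."""
    name: str
    inside_org: bool
    intent: ActorIntent = ActorIntent.UNKNOWN


@dataclass
class System:
    """Technology, processes, and information, plus the people and facilities that directly support them."""
    name: str
    components: list[str] = field(default_factory=list)


@dataclass
class Event:
    """An outcome of an Actor interacting with a System: good or bad, intended or not."""
    actor: Actor
    system: System
    description: str
    beneficial: bool
```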
This context intentionally includes all the desired and positive uses of systems, not just the bad outcomes and incidents that other cyber security frameworks focus on. Why? Because I want to emphasize that all cyber security performance is fundamentally about supporting and enabling the desired and positive uses of systems. If, on the other hand, we only focused on minimizing or avoiding bad outcomes, we'd just unplug all the computers and bury them in a deep hole in the ground!
Three points need to be emphasized before we dive in:
- These dimensions do not necessarily map to specific teams or departments. In particular, they are not solely relevant to security specialists, Chief Information Security Officers, or any other role, department, or function. These performance dimensions are shared across the organization.
- Several common functions or activities (e.g. "user awareness training", "business engagement", "risk management") do not appear as separate dimensions because they are distributed across several of the performance dimensions. Others, such as "compliance" and "governance", are subsumed within a single performance dimension.
- Performance in each dimension is relative to each organization's objectives, as in Management by Objectives, originated by Peter Drucker. I'll describe this more in a later post.
Now that we have a context and preliminaries, here's a discussion of the first performance dimension.
Dimension 1: Optimize Exposure
The starting place for cyber security performance is to understand and optimize the exposure of information and systems to attacks, breaches, violations, or other misuses.
Exposure mediates between Actors and Systems, as suggested by its place in the block diagram. "Exposure" means that information and systems are "accessible", "visible", "potentially vulnerable", and the like. Notice that this is a broader concept than "vulnerable", which appears in other frameworks. I like the broader concept because it immediately puts attention on the balancing act between protecting information and systems by reducing exposure, on the one hand, and promoting positive uses by maximizing exposure, on the other.
Therefore, performance in this dimension is optimization, i.e. balancing the tradeoffs as best we can.
Furthermore, performance is composed of an intelligence component and an execution component.
By "intelligence", I mean constantly striving to discover and understand
what is and is not being exposed,
what should be and what should not be exposed, and how to
communicate to appropriate people what the exposure is or should be. This necessarily involves engaging with the people who own the information and systems, as well as the systems themselves.
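To make the intelligence component concrete, here is a hedged sketch of how discovered exposure might be reconciled against intended exposure and communicated as a gap report. The asset names and the two-set comparison are illustrative assumptions, not a prescribed method.

```python
# Hypothetical example: compare what *is* exposed with what *should be* exposed.
# Asset identifiers and both sets are illustrative assumptions.

discovered_exposure = {"crm-db", "hr-portal", "legacy-ftp", "marketing-site"}
intended_exposure = {"crm-db", "hr-portal", "marketing-site", "partner-api"}

# Exposed but not intended: candidates for remediation or a deliberate decision.
unintended = discovered_exposure - intended_exposure

# Intended to be exposed but not discovered: possible blind spots in discovery.
blind_spots = intended_exposure - discovered_exposure

print(f"Unintended exposure (decide or remediate): {sorted(unintended)}")
print(f"Possible discovery blind spots: {sorted(blind_spots)}")
```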
By "execution", I mean making decisions about exposure and implementing those decisions. This includes many common security practices and policies, including access control, privileges, system configurations, vulnerability detection and remediation, and the like. But it also includes legal and HR rules and practices for confidentiality, propriety, and privacy. Basically, these rules define what information is confidential or the degree of confidentiality and who can or cannot have access to that information. Likewise for proprietary information and private information. Again, this is a balancing act because too rules that are restrictive can be as harmful as rules that are too loose.
Optimizing exposure is a broad responsibility. Each department and team that owns data has responsibility to make decisions on confidentiality and propriety, for example. Even individual people have some responsibility to manage and optimize the exposure of their personal information and systems. This requires that they develop some awareness and understanding of what data they have, how it might be exposed, and how to make tradeoff decisions.
Ideally, there would be a single composite measure of exposure. One candidate might be a variation on the "attack surface" metric, i.e. an "exposure surface". The concept of attack surface has been formalized and measured in research on software design, but I think it could be generalized. Intuitively, the bigger the "exposure surface", the more opportunities there are for threat agents to find and exploit vulnerabilities. It would probably be important to talk about "regions" within the "exposure surface" that are more or less accessible. But more research will be needed to put these ideas into practice.
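As a hedged sketch of what a composite "exposure surface" measure might look like, the code below sums weighted exposure points and groups them into a more-accessible "region" and the rest. The weights and the accessibility factor are my own illustrative assumptions, not an established metric.

```python
# Hypothetical "exposure surface" sketch. The weights and accessibility
# factors are illustrative assumptions, not an established metric.

exposure_points = [
    # (description, weight for potential impact, accessibility from 0.0 to 1.0)
    ("internet-facing web app", 5.0, 1.0),
    ("partner API endpoint", 4.0, 0.6),
    ("internal file share", 3.0, 0.3),
    ("air-gapped lab system", 2.0, 0.05),
]

# A larger score suggests more opportunities for threat agents; "regions"
# separate the highly accessible surface from the rest.
total_surface = sum(weight * access for _, weight, access in exposure_points)
exposed_region = sum(w * a for _, w, a in exposure_points if a >= 0.5)

print(f"Total exposure surface: {total_surface:.2f}")
print(f"Highly accessible region: {exposed_region:.2f}")
```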
Short of having a composite measure, I think it's feasible to measure performance in the Exposure Dimension by evaluating evidence that addresses these questions (a minimal scorecard sketch follows the list):
- How much do we know about our information and systems exposure?
- Do we know where our blind spots are? Are we making discoveries and developing understanding in the most important areas of exposure?
- Are we being successful in executing -- making decisions and then implementing them?
- Do we have evidence that we are making good trade-off decisions (a.k.a. optimizing)?
- Is our capability to optimize exposure getting progressively better? Or are we stuck in a place of inadequate performance?
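Short of formal metrics, even a simple scorecard that records evidence against each question can make this dimension reviewable. The structure below is a minimal sketch under that assumption; the fields, sample evidence, and ratings are illustrative.

```python
# Hypothetical scorecard sketch: one entry per question above, recording the
# evidence consulted and a coarse rating. Fields and ratings are illustrative.

scorecard = [
    {
        "question": "How much do we know about our exposure?",
        "evidence": ["asset inventory coverage report", "last discovery scan date"],
        "rating": "partial",
    },
    {
        "question": "Are we making good trade-off decisions?",
        "evidence": ["access-request decisions sampled this quarter"],
        "rating": "improving",
    },
]

for entry in scorecard:
    print(f"{entry['rating']:>10}  {entry['question']}")
```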
Each of these questions will have its own metrics or performance evidence, according to the objectives of the organization. I'm offering these as suggestions for further thought and refinement, not as the last word.
In summary, the Optimize Exposure performance dimension includes many of the activities, policies, and practices that people associate with information security, privacy, IP protection, and so on. I'm not suggesting much that is new, except that the integration of these elements can and should be measured in terms of performance.
(Next dimension: 2. Effective Threat Intelligence)