Monday, July 8, 2013

Dimension 8: Effective Agility & Learning

This is the eighth post defining each of the Ten Dimensions of Cyber Security Performance.  Like dimension 7, Effective External Engagement, performance in this dimension shapes and structures operational cyber security as a whole.  On the block diagram, I'm showing this as being within the focal organization but somewhat apart from the others, to indicate that it is concerned with holistic performance.

The dimension of Effective Agility & Learning includes the processes of reconfiguring, reengineering, redesigning, and radically innovating the cyber security program as a whole.  This is "Second Loop Learning" in the Double Loop Learning model, and is in contrast to the Single Loop Learning for operational cyber security that was discussed in a previous post.  In a later post, I'll discuss how dimensions 7 through 10 come together as Second Loop Learning.  But for now, you can view this dimension as the processes that execute and make operational this outer learning loop.

Including this dimension in our framework is vital for achieving the goals of dynamic and rapidly innovating cyber security.  Existing cyber security frameworks either omit it or subsume it in other categories in a marginal or vague form.


Here, learning means managing the process of learning

Metrics are a frequent topic in cyber security, especially the lack of metrics, or of agreed-upon metrics, for aggregate or enterprise-level cyber security.  In the posts on dimensions 1 through 6, which comprise operational cyber security, and in the post on Single Loop Learning, I made brief mention of metrics.  Those are metrics that aim to support continuous improvement of each performance dimension, either individually or in specific combinations.  But these operational metrics are not aimed at measuring performance overall, especially in the second loop of Double Loop Learning.

Thus, the eighth performance dimension is the home of enterprise cyber security metrics.  A starting place would be to measure performance on each of these ten dimensions.  Since enterprise security metrics is a nascent area of research, it's not yet clear what a full or complete portfolio of cyber security metrics might look like, especially for organizations with high dependence on cyber security, such as critical infrastructure or the defense-industrial base.  Therefore it's important to be constantly exploring and experimenting with new portfolios of metrics.  The key is to tie each metric to specific decisions or potential actions.  This gives the metrics practicality and significance criteria and prevents them from becoming vacuous "beauty contest" scores.
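To make the "tie each metric to a decision" idea concrete, here is a minimal sketch in Python (my own illustration, not part of any existing framework).  The metric names, dimension labels, decisions, and values are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Metric:
    name: str                     # e.g. a hypothetical "median patch latency"
    dimension: str                # which of the ten dimensions it informs
    decision: Optional[str]       # the decision or action this metric supports
    collect: Callable[[], float]  # how the raw value is gathered

def review_portfolio(metrics: list[Metric]) -> None:
    """Flag metrics that are not tied to any decision or potential action."""
    for m in metrics:
        if not m.decision:
            print(f"WARNING: '{m.name}' informs no decision -- a candidate 'beauty contest' score")
        else:
            print(f"{m.name} [{m.dimension}]: supports '{m.decision}', value = {m.collect():.2f}")

# Hypothetical entries, for illustration only
portfolio = [
    Metric("median patch latency (days)", "dimension 2", "reallocate patching effort", lambda: 14.0),
    Metric("awareness quiz average", "dimension 5", None, lambda: 0.82),
]
review_portfolio(portfolio)
```

The design choice worth noticing is that a metric without an attached decision is flagged for removal rather than reported.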

Learning also means discovering and intelligently exploring possibility space

(Yes, this is one of the reasons that my blog is named "Exploring Possibility Space".)

In operational cyber security, "learning" means incremental and continuous improvement.  This basically means refining your existing program with little or no structural change.  It's parameter tuning.  In contrast, "learning" in this performance dimension is about discontinuous innovation, including radical innovation, big "leaps" in the space of possibilities, and "creative destruction" where existing capabilities are left behind and brand new capabilities are created.  Most important, this dimension includes the possibility of creating capabilities that are new to the world, including those that emerge in the context of changes to the socio-technical environment.  The adversarial innovation race in cyber security is, in some aspects, a Red Queen arms race (also see this paper) that produces a perpetual stream of novelty and perpetual restructuring of the possibility spaces.  At the extreme, this could mean that everything you know and believe today will eventually become irrelevant, distracting, or even counter-productive.  In practice, it means that we need to constantly reevaluate what we know, what we believe, and how we architect our cyber security program to keep up with the changes and innovations in the environment.  (By the way, the "environment" includes other defenders and related organizations/institutions, as defined in dimension 7.)

Measuring innovation is a tricky problem and not completely solved.  Organizations that measure their performance on this dimension will probably have a diverse set of component measures and evidence.  However, I want to suggest two performance measures that might be widely applicable.

The first measure is innovation cycle time -- the time from first discovery/ideation to realization/operationalization, either in a pilot project or in full production.  Consider a new standard, e.g. PCI-DSS.  How much time elapsed between the first suggestion or proposal of such a standard and its first implementation in a firm?  And how much time elapses between major revisions (which we can presume embody the full learning cycle involved)?  I don't know the exact answers to these questions, but I believe the answers are on the order of 3 to 5 years.  In fact, I don't know of any regulatory or compliance regime that has an innovation cycle time shorter than 18 months.  Usually it is much longer, and can be as long as a decade or more.  There are several drivers for this long cycle time, including:

  • The requirement to get consensus, often around a "lowest common denominator"
  • The requirement to maintain political and cultural acceptability, credibility, and authority
  • The desire to minimize the magnitude of change
  • The power of factions to veto or derail change
  • The need to fit within the existing ontologies, boundaries, and mental frames
  • The need to explicitly justify changes, which deters experiments in radical innovation
  • The desire to avoid blame for mistakes, errors, or waste

In contrast, consider the innovation cycle time in the "black hat community" of malicious agents in information security.  For both technologies and socio-economic institutions (e.g. "exploit as a service"), the innovation cycle time is usually less than 18 months, and often as short as one month.
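As a rough illustration of how innovation cycle time could be tracked and compared, here is a small Python sketch.  The dates are hypothetical, chosen only to mirror the rough estimates above.

```python
from datetime import date

def cycle_time_months(ideation: date, operationalization: date) -> float:
    """Elapsed time from first idea/proposal to first working deployment, in months."""
    return (operationalization - ideation).days / 30.44

# Hypothetical dates: a defender-side standard vs. an attacker-side technique
defender_cycle = cycle_time_months(date(2004, 6, 1), date(2008, 10, 1))   # ~52 months
attacker_cycle = cycle_time_months(date(2012, 1, 1), date(2012, 7, 1))    # ~6 months

if attacker_cycle < defender_cycle:
    print(f"Adversary iterates {defender_cycle / attacker_cycle:.1f}x faster; "
          "the faster party can out-innovate the slower one")
```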

If you accept my rough estimates, then you'll see that any "agile cyber security framework" based on regulatory or compliance regimes is doomed to failure because it's not feasible to innovate on a fast enough cycle.  This is exactly the principle behind John Boyd's OODA Loop (Observe-Orient-Decide-Act).  The key point is that if you (the attacker) can operate inside your opponent's (the defender's) OODA loop, you can render their defenses useless and neutralize their efforts to improve those defenses.  This is based on the insight that whoever has the slower OODA loop will face debilitating lags in information and uncertainty, and thus will often waste vital resources and create vulnerabilities that the attacker can easily exploit.

The second measure is investments in a portfolio of experiments.  "Experiments" here means any activity that is structured primarily as a vehicle of learning rather than productive output.  Thus, failure and the potential for failure are more important than successes or the success rate.  If an experiment cannot fail, or if there is no capacity to learn from failure, then it's not an experiment.  This measure focuses attention on the exploration process, and especially on getting the portfolio right.  Some experiments will be only "thought experiments", e.g. sending a staff member to a conference or workshop to understand coevolution in biology, and then imagining how its principles could be implemented in the home organization.  Other experiments could be pilot projects or "skunk works", or joint projects with academic researchers or outside consultants.  Some portion of experiments will look "crazy" to normal people, yet these are vital to the process of discovering and exploring the fast-changing space of possibilities.

The managerial value of this measure is straightforward: if you aren't investing in true experiments, and if you aren't taking a portfolio approach, then you have no basis for expecting discontinuous innovation and, more broadly, Second Loop Learning.
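Here is one way such a portfolio of experiments might be tracked, again only a sketch with hypothetical entries.  The point is simply that activities which cannot fail are excluded from the measure.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    investment: float          # e.g. staff-days committed
    can_fail: bool             # if it cannot fail, it is not an experiment
    lessons: list = field(default_factory=list)

def portfolio_investment(experiments: list) -> float:
    """Total investment in activities genuinely structured for learning."""
    for e in experiments:
        if not e.can_fail:
            print(f"'{e.name}' cannot fail -- count it as production work, not an experiment")
    return sum(e.investment for e in experiments if e.can_fail)

# Hypothetical portfolio
portfolio = [
    Experiment("coevolution workshop thought experiment", 5, True,
               ["ideas for adaptive defense structures"]),
    Experiment("skunk-works deception pilot", 40, True),
    Experiment("mandatory compliance audit", 120, False),
]
print(f"Investment in true experiments: {portfolio_investment(portfolio)} staff-days")
```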

Here, agility is rapid reconfiguration of resources and structures

In disaster and crisis scenarios, Nature has imposed upon us the need to rapidly reconfigure our assets and resources, since many of the normal structures are obliterated or disabled.  Thus, it's pretty easy for us to understand "agility" in this context.

I propose that "agility" in this dimension has a similar quality, but instead of Nature imposing the need for change, the need for change comes internally from the learning processes discussed above.  In the academic literature of management, this is called "dynamic capabilities".

Agility performance can be measured along several sub-dimensions (a simple scoring sketch follows the list below):
  • People and organizations -- how easy is it to change organization structures, relationships, and responsibilities? 
  • Technologies -- how easy is it to change the enterprise architecture or technical infrastructure components?
  • Processes -- how easy is it to reengineer business processes?
  • Legal structures -- how easy is it to reengineer contracts or other legal structures?
  • Information -- how easy is it to reengineer the information and data architecture?
  • Culture -- how easy is it to make significant changes to the organization's culture? (the hardest thing to change is culture!)
I'm certainly not advocating that every organization needs to have agility in all these dimensions or to the same degree.  Objectives for agility need to mesh with the organization's overall objectives, plus the influence of the other dimensions of cyber security performance.
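Here is the simple scoring sketch mentioned above: a weighted score across the sub-dimensions, where the weights come from the organization's own objectives.  The ease values and weights are entirely hypothetical.

```python
AGILITY_SUBDIMENSIONS = ["people", "technologies", "processes",
                         "legal structures", "information", "culture"]

def agility_score(ease: dict, weights: dict) -> float:
    """
    ease:    0.0 (very hard to reconfigure) .. 1.0 (very easy), per sub-dimension
    weights: relative importance, set from the organization's objectives
    """
    total_weight = sum(weights.get(d, 0.0) for d in AGILITY_SUBDIMENSIONS)
    return sum(ease.get(d, 0.0) * weights.get(d, 0.0)
               for d in AGILITY_SUBDIMENSIONS) / total_weight

# Hypothetical assessment: culture is weighted heavily and scores lowest
ease    = {"people": 0.6, "technologies": 0.7, "processes": 0.5,
           "legal structures": 0.4, "information": 0.6, "culture": 0.2}
weights = {"people": 2, "technologies": 1, "processes": 1,
           "legal structures": 1, "information": 1, "culture": 3}
print(f"Weighted agility score: {agility_score(ease, weights):.2f}")
```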

But I do argue that most organizations neglect this dimension.  Furthermore, attention to this dimension first might pay the most dividends because it can be a driver for change in the other dimensions.

(Next dimension: 9. Optimize Total Cost of Risk)
