Thursday, July 4, 2013

Dimension 4: Quality of Protection & Controls

This is the fourth post defining each of the Ten Dimensions of Cyber Security Performance. These are some initial thoughts presented in a sketchy fashion.  All are subject to much refinement, revision, and condensation.

Conceptually, protection and controls intercede between Threats and Exposures, sometimes directly through modifications to Systems (e.g. system configuration).  This dimension also envelops Design & Development, in the sense that it essentially takes the design of Systems and the rest of the cyber security program as given.

There is also a new interaction path with Actors, meaning that people and organizations of all stripes can encounter and interact with protections and controls as artifacts in themselves, and not just as they are implemented to protect Systems.  A prime example would be security and privacy awareness programs.

This box only goes half-way around because the other half is covered by Dimension 5: Effective/Efficient Execution & Operations.

This performance dimension includes a large portion of what specialized teams do when they implement cyber security.  Examples include access control, identity management, intrusion detection and prevention, exfiltration controls, malware detection and removal, and so on.  These components are widely known and discussed, so I won't spend time elaborating them here.  In fact, it is sometimes tempting to reduce all of cyber security to a set of relevant protections and controls.  This is a mistake, and one of the purposes of this framework is to correct that mistaken thinking.


So far, I haven't said very much that is new.  My novel contribution, I hope, is to argue that performance in this dimension should be measured in terms of quality, in the same sense as "product quality" or "service quality" in the Six Sigma and Total Quality Management paradigms.  I'll elaborate below, but first I want to recognize that the term "Quality of Protection" (QoP) has already been proposed as a metric for information security (e.g. in this book), using "Quality of Service" (QoS) as an analog.  QoS comes from the world of communication networks and has been useful in network design and management.  There used to be an academic workshop with that name, though the name was changed to MetriSec to more accurately reflect its broad focus on metrics for information security, not just the QoP concept.  As far as I know, QoP has not emerged as a clearly defined metric or measurement method except in some very narrow applications.  Therefore I am taking the liberty of appropriating the term and redefining it somewhat.

How to measure quality

To evaluate protection and controls, I propose a definition of quality that is general in application but narrow in scope, phrased in several alternative ways:
  • "Ability to perform satisfactorily in service and is suitable for its intended purpose."
  • "Conformance to specifications."
  • "The characteristics of a product or service that bear on its ability to satisfy stated or implied needs.
  • "A product or service free of deficiencies."
To measure the quality of anything, you first have to define "intended purpose", "satisfactory performance", and "deficiencies".  This can and should be done for each individual control or protection process, and maybe also for collections.  I think there is ample documentation on this within existing information security frameworks, so I won't dwell on it here.
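
To make this concrete, here is a minimal sketch of one way to record those three elements for a single control, using the physical-access example from the next section.  The class and field names are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ControlSpec:
    """Hypothetical record of what a quality evaluation needs defined up front."""
    name: str
    intended_purpose: str            # what the control is supposed to accomplish
    satisfactory_performance: str    # measurable criteria for "working as intended"
    known_deficiencies: list = field(default_factory=list)  # failure modes to watch for

badge_control = ControlSpec(
    name="two-identifier area access",
    intended_purpose="admit only authorized personnel to the controlled area",
    satisfactory_performance="both identifiers presented and validated against the personnel database",
    known_deficiencies=["tailgating", "stale personnel database"],
)
```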

My main message is this: Quality of Protection & Controls can and should be measured directly, both off-line through lab experiments and on-line through time series data analysis with the control in use.

The simplest evaluation asks: "Does this control work as intended?  Are there unintended consequences?"  This is a major issue that rarely gets the attention it deserves, since unintended consequences are frequent and widespread in most organizations, especially those with strict or elaborate controls.

A more detailed evaluation comes from comparing the implementation of the control to its specification.  For example, if all employee access to a specific area is supposed to be controlled by two forms of identification validated against a database of authorized personnel, it is possible to measure conformance to this specification -- Are the two forms of identification always presented, or is there a "pass" if a person has just left the area and returned?  How often are invalid identifications detected successfully?  How often is the database of authorized personnel out of sync with the identification system?  (and so on)
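
If the identification system writes an event log, conformance can be computed directly from it.  A minimal sketch, assuming hypothetical log records with these field names:

```python
# Hypothetical access-log records: how many identifiers were presented,
# and whether validation against the personnel database succeeded.
access_log = [
    {"ids_presented": 2, "validated": True},
    {"ids_presented": 1, "validated": True},   # a "pass" after briefly leaving
    {"ids_presented": 2, "validated": False},  # database out of sync
]

conforming = sum(1 for event in access_log
                 if event["ids_presented"] == 2 and event["validated"])
conformance_rate = conforming / len(access_log)
print(f"Conformance to specification: {conformance_rate:.0%}")
```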

A powerful general method for measuring quality is to view each control as a hypothesis test.  Example: for anti-virus (AV) software, the null hypothesis is "This software is free of malware" and the alternative hypothesis is "This software contains malware".  The AV software uses evidence (signatures, behavior, others) and logic to either accept or reject the null hypothesis.  The quality metric involves measuring two types of errors in hypothesis testing:
  • Type I error: incorrectly rejecting a null hypothesis that is actually true ("false positive")
  • Type II error: failure to reject a false null hypothesis ("false negative")
In AV software, a Type I error is to label a piece of software as malware when it is actually clean.  A Type II error is failing to label a piece of software as malware when it is actually malware.  The Wikipedia article has other examples.
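
Scoring a control this way takes nothing more than a labeled test set and a tally of the two error types.  A minimal sketch, with a made-up corpus of (actual, verdict) pairs where True means malware:

```python
# Each pair is (actually_malware, av_flagged_as_malware); the data is made up.
samples = [(True, True), (True, False), (False, False),
           (False, True), (True, True), (False, False)]

type_i  = sum(1 for actual, verdict in samples if not actual and verdict)  # false positives
type_ii = sum(1 for actual, verdict in samples if actual and not verdict)  # false negatives
clean   = sum(1 for actual, _ in samples if not actual)
malware = sum(1 for actual, _ in samples if actual)

print(f"Type I  (false positive) rate: {type_i / clean:.2f}")
print(f"Type II (false negative) rate: {type_ii / malware:.2f}")
```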

The focus of most controls is to minimize Type II errors, i.e. to stop as many bad things as possible.  This is natural and necessary because if the control doesn't stop bad things, then it doesn't qualify as a control at all!  But neglecting Type I errors can lead to some very poor decisions and outcomes.  In another blog post I discuss "The cost of false positives in detection (lessons from public health)".

It is very important to understand the trade-off relationship between Type I and Type II errors for any given control design.  Tweaking a control to improve one usually causes the other to get worse.  In statistical hypothesis testing, there is a measure for this trade-off, called the power of the test given the experimental conditions (e.g. sample size, distribution of underlying population, etc.).  With the very same math we should be able to calculate the power of a control.  This can then be used to evaluate whether it delivers "satisfactory performance" relative to the "specifications" and "intended purposes", as listed in the definition of quality, above.  Below, I will use "quality" instead of "power", but I hope you see how they are related.
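
To see the trade-off and the power calculation in one place, here is a sketch that assumes detection scores for clean and malicious samples follow normal distributions (the parameters are invented).  Sweeping the alert threshold shows the Type I error rate and power moving in the same direction, so improving one error type worsens the other:

```python
from scipy.stats import norm

clean_scores   = norm(loc=0.0, scale=1.0)  # score distribution if H0 (clean) is true
malware_scores = norm(loc=2.5, scale=1.0)  # score distribution if H1 (malware) is true

for threshold in (1.0, 1.5, 2.0, 2.5):
    alpha = clean_scores.sf(threshold)     # Type I error rate: P(alert | clean)
    power = malware_scores.sf(threshold)   # 1 - Type II error rate: P(alert | malware)
    print(f"threshold={threshold:.1f}  alpha={alpha:.3f}  power={power:.3f}")

# Raising the threshold reduces false positives (alpha) but also reduces
# power, i.e. more malware slips through (more Type II errors).
```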

Measuring the quality of a collection or system of controls

Evaluating the quality of a collection of controls depends on how the controls interact, relative to both threats and exposures.  In an academic paper using Game Theory, Jens Grossklags proposes three distinct types of "games" that are relevant to this purpose:
  1. Total Effort: quality of protection is proportional to the sum of all the control qualities.  This arises when the controls are mutually supporting and interacting as a "team" or collective.
  2. Weakest Link: quality of protection is proportional to the weakest control (lowest quality).  This arises when the controls operate in parallel, and the failure of any one can result in a breach.
  3. Best Shot: quality of protection is proportional to the strongest control (highest quality).  This arises when the controls operate in series, and the success of any one control can prevent a breach.  This is sometimes called "defense in depth".
Therefore, to measure the quality of protection for any collection of controls, or all controls in an enterprise, you would map out the series, parallel, and summation relationships, given the threats and exposures.
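
As a rough sketch, given per-control quality scores on a common 0-to-1 scale (the scores and control names below are invented), the three aggregation rules reduce to sum-like, min, and max operations:

```python
qualities = {"firewall": 0.9, "anti-virus": 0.7, "awareness training": 0.5}

# Total Effort: controls act as a team; average the (summed) qualities
# so the result stays on the same 0-to-1 scale.
total_effort = sum(qualities.values()) / len(qualities)

# Weakest Link: parallel attack paths; the worst control sets the level.
weakest_link = min(qualities.values())

# Best Shot: layered controls in series; the best control sets the level.
best_shot = max(qualities.values())

print(f"total effort={total_effort:.2f}  weakest link={weakest_link:.2f}  "
      f"best shot={best_shot:.2f}")
```

A real analysis would first map which controls sit in series or parallel against each threat path; the single flat dictionary here is only a stand-in for that structure.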
