Tuesday, July 30, 2013

The Cost of a Near-miss Data Breach

[This post originally appeared in the New School of Information Security blog on October 6, 2009.  This idea was incorporated in the breach impact model presented in this paper, where you can find more details: ]
How Bad Is It?–A Branching Activity Model to Estimate the Impact of Information Security Breaches

If one of your security metrics is Data Breach Cost, what is the cost of a near-miss incident? This seemingly simple question gets at the heart of the security metrics problem.

Jerry escapes death, but is it cost-free?
Consider the gleeful Jerry Mouse in this cartoon. Tom the Cat has just missed in his attempt to swat Jerry and turn him into mouse meat. Is there any cost to Jerry for this near miss? Is Jerry’s cost any different than if he were running with Tom nowhere in sight?

By “near miss” I mean a security incident or sequence of incidents that could have resulted in a severe data breach (think TJX or Heartland), but somehow didn’t succeed. Let’s call the specific near-miss event “NM” for short. For the sake of argument, let’s assume that the lack of attack success was due to dumb luck or attacker mistakes, not due to brilliant defenses or detection. Let’s say that you only discover NM long after the events took place. For simplicity, let’s assume that discovering NM doesn’t result in any extraordinary costs, meaning that out-of-pocket costs are the same just before and immediately after NM. Finally, assume that your expected cost of a successful large-scale data breach is on the order of tens of millions of dollars, with the worst case being hundreds of millions.

How much does NM cost? The realist answer is “zero”. (Most engineers are realists, by disposition and training.) There is a saying in street basketball that expresses the realist philosophy about losses and associated costs: “No blood, no foul”. If you ask your accountants to pore over the spending and budget reports, they will probably agree. Case closed, right?

Not so fast….

Monday, July 29, 2013

Think You Understand Black Swans? Think Again.

"Black Swan events" are mentioned frequently in tweets, blog posts, public speeches, news articles, and even academic articles.  It's so widespread you'd think that everyone knew what they were talking about. But I don't.

Coming soon: 23 Shades of Black Swans
I think the "Black Swan event" metaphor is a conceptual mess.

Worse, it has done more harm than good by creating confusion rather than clarity and by serving as a tool for people who unfairly denigrate probabilistic reasoning.  It's also widely misused, especially by pundits and so-called thought leaders to give the illusion that they know what they are talking about on this topic when they really don't.

But rather than just throwing rocks, in future posts I will be presenting better/clearer metaphors and explanations -- perhaps as many as 23 Shades of Black Swan.  Here are the ones I've completed so far:
  1. Grey Swans: Cascades in Large Networks and Highly Optimized/Critically Balanced Systems
  2. Green Swans: Virtuous Circles, Snowballs, Bandwagons, and the Rich Get Richer
  3. Red Swans: Extreme Adversaries, Evolutionary Arms Races, and the Red Queen
  4. Disappearing Swans: Descartes' Demon -- the Ultimate in Diabolical Deception
  5. Out-of-the-Blue Swans: Megatsunami, Supervolcanos, The Black Death, and Other Cataclysms
  6. Orange TRUMPeter Swans: When What You Know Ain't So
  7. The Swan of No-Swan: Ambiguous Signals Tied To Cataclysmic Consequences
  8. Swarm-as-Swan: Surprising Emergent Order or Aggregate Action
  9. Splattered Swan: Collateral Damage, Friendly Fire, and Mis-fired Mega-systems 
In this post, I just want to make clear what is so wrong about the "Black Swan event" metaphor.

Saturday, July 27, 2013

QOTW: Strategy is not predicting the future. Instead, it's about making decisions today to be ready for an uncertain tomorrow - Drucker

This quote is from Peter Drucker, p. 125 of his 1974 classic book Management: Tasks, Responsibilities, Practices (Harper and Row).  Though it talks about strategic planning, the same applies to risk management (emphasis added):
"Strategic planning does not deal with future decisions.  It deals with the futurity of present decisions.  Decisions only exist in the present.  The questions that faces the strategic decision-maker is not what his organization should do tomorrow.  It is, 'What do we have to do today to be ready for an uncertain tomorrow?'  The question is not what will happen in the future.  It is, 'What futurity do we have to build into our present thinking and doing, what time spans do we have to consider, and how do we use this information to make a rational decision now?'"

Friday, July 26, 2013

Visualization Friday: 14 dimensions represented in 2D using MDS, Colors, and Shapes

For the last three years I've been building an Agent-based Model (ABM) of innovation ecosystems to explore how agent value systems and histories mutually influence each other.  The focus to this point has been on Producer-Consumer relationships and the Products they produce and consume.

One of my key challenges has been how to visualize changes in agent value systems as new products are introduced.  Products have surface characteristics defined as a 10-element vector of real numbers between 0 and 1.  Consumers make valuation decisions based on their perception of these 10 dimensions compared to their current "ideal type".  But they realize utility after consuming based on three "hidden" dimensions.  Adding on the dimension of consumption volume, this means I need to somehow visualize 14 dimensions in a 2D dynamic display.

The figures below show my solution.  Products are represented by black squares, while Consumer ideal points are represented by blue dots.  (There are about 200 Consumers in this simulation.)  Products that are not yet introduced are represented by hollow dark red squares.  The 10 dimensions of Product surface characteristics are reduced to 2D coordinates through Multi-Dimensional Scaling (MDS).  Therefore the 2D space is a dimensionless projection where 2D distances between points are roughly proportional to distances in the original 10 dimensions.
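For anyone who wants to experiment with the projection step, here is a minimal sketch in Python using scikit-learn's metric MDS.  This is purely illustrative, not the code from my model: the product counts and random data below are stand-ins, not output from the actual simulation.

    import numpy as np
    from sklearn.manifold import MDS
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)

    # Stand-in data: 8 Products and 200 Consumer ideal points, each a
    # 10-element vector of reals in [0, 1] (surface characteristics).
    products = rng.random((8, 10))
    ideal_points = rng.random((200, 10))

    # Embed Products and Consumer ideal points in the *same* 2D space, so that
    # 2D distances roughly preserve the original 10-D Euclidean distances.
    coords = MDS(n_components=2, dissimilarity="euclidean",
                 random_state=0).fit_transform(np.vstack([products, ideal_points]))
    product_xy, consumer_xy = coords[:8], coords[8:]

    plt.scatter(consumer_xy[:, 0], consumer_xy[:, 1], s=8, c="blue",
                label="Consumer ideal points")
    plt.scatter(product_xy[:, 0], product_xy[:, 1], s=60, c="black", marker="s",
                label="Products")
    plt.legend()
    plt.show()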

The three utility dimensions are represented by colored "spikes" coming off of each Product.  The length of each spike is proportional to the utility offered by that Product on that dimension.

Finally, each Product's share of consumption volume is represented by a dark red circle around that product (black filled square).

These two plots show the same simulation at two points in time, about 370 ticks apart, showing the effect of the introduction of several new products.  We are looking for patterns and trajectories of Consumer ideal points (blue dots).

Putting these all together:

  • Dots are close or distant in 2D space according to how they are perceived by Consumers based on surface characteristics.
  • Products that are close to each other in 2D space may or may not have similar utility characteristics (spikes).  This reveals the "ruggedness" of the "landscape", and thus the search difficulty faced by Consumers.
  • The circles around each product allow easy identification of popular vs unpopular products.
Initial Consumer ideal points (blue dots) after 1000 ticks, given 5 initial Products (black filled squares in center), plus one new Product (far left).  Red arrow points to a product with low utility on 3 dimensions.
Consumer ideal points (blue dots) after 1367 ticks, showing the influence of the new Product (black filled square on far left).  Notice the large increase in popularity of the product pointed to by the red arrow.  Though it has relatively low utility on all three dimensions, it is a "bridge" between products on the right side and the new (high utility) product on the left side.




Thursday, July 25, 2013

Tutorial: How to Value Digital Assets (Web Sites, etc.)

[This originally appeared in the New School of Information Security Blog in two posts, Oct 20 and 23, 2009]

Many security management methods don’t rely on valuing digital assets.  They get by with crude classifications (e.g. “critical”, “important”, etc.).  Moreover, I don’t believe that it’s absolutely necessary to calculate digital asset values to do quantitative risk analysis.  But if you need to do financial justification or economic analysis of security investments or alternative architectures, then you might need something more precise and defensible.

This tutorial article presents one method aimed at helping line-of-business managers (“business owners” of digital assets) make economically rational decisions. It’s somewhat simplified, and it does take some time and effort, but it should be feasible for most organizations if you really care about getting good answers.

Warning: No simple spreadsheet formulas will do the job. Resist the temptation to put together magic valuation formulas based on traffic, unique visits, etc.

(This is a long post, so read on if you want the full explanation…)

Wednesday, July 24, 2013

The Bayesian vs Frequentist Debate And Beyond: Empirical Bayes as an Emerging Alliance

This is one of the best articles I've ever seen on the Bayesian vs Frequentist Debate in probability and statistics, including a description of recent developments such as the Bootstrap, a computationally intensive inference process that combines Bayesian and frequentist methods.
Efron, B. (2013). A 250-year argument: Belief, behavior, and the bootstrap. Bulletin of the American Mathematical Society, 50(1): 129-146.
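For readers who haven't met the bootstrap before, here is a minimal sketch of the nonparametric version in Python (the data are made up): resample the observed data with replacement many times, recompute the statistic each time, and use the spread of those recomputed values as an interval estimate.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.exponential(scale=2.0, size=50)   # toy "observed" sample

    # Resample with replacement and recompute the statistic (here, the mean).
    boot_means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(10_000)
    ])

    # Percentile bootstrap interval for the mean.
    lo, hi = np.quantile(boot_means, [0.025, 0.975])
    print(f"sample mean {data.mean():.2f}, 95% bootstrap interval ({lo:.2f}, {hi:.2f})")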
Many disagreements about risk analysis are rooted in differences in philosophy about the nature of probability and the associated statistical analysis.  Mostly, the differences center on how to handle sparse prior information, and especially the absence of prior information.  As the article puts it: "The Bayesian/frequentist controversy centers on the use of Bayes rule in the absence of genuine prior experience."

What's great about this article is that it presents the issue and alternative approaches in a simple, direct way, including very illuminating historical context.  It also presents a very lucid description of the advantages and limitations of the two philosophies and methods.

Finally, it discusses recent developments in the arena of 'empirical Bayes' that combine the best of both methods to address inference problems in the context of Big Data.  In other words, because of Big Data and the associated problems people are trying to solve now, pragmatics matter more than philosophical correctness.  Another example of empirical Bayes is the Bayesian Structural Equation Modeling that I referenced in this post.
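To give a tiny flavor of the empirical Bayes idea (this example is mine, not from Efron's article, and the counts are invented): estimate a prior from the whole data set, then use that prior to shrink each group's noisy estimate toward the overall pattern.

    import numpy as np

    # Invented data: successes out of 20 trials for ten small groups.
    successes = np.array([2, 0, 5, 1, 3, 4, 0, 2, 6, 1])
    trials = np.full(successes.shape, 20)
    rates = successes / trials

    # Crude method-of-moments fit of a Beta(alpha, beta) prior to the observed rates
    # (it ignores binomial sampling noise, so treat it as illustrative only).
    m, v = rates.mean(), rates.var()
    total = m * (1 - m) / v - 1          # alpha + beta
    alpha, beta = m * total, (1 - m) * total

    # Posterior-mean ("shrunk") estimate for each group.
    shrunk = (successes + alpha) / (trials + alpha + beta)
    print(np.round(shrunk, 3))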

Tuesday, July 23, 2013

The Rainforest of Ignorance and Uncertainty

One of the most important books I've ever read is Michael Smithson's Ignorance and Uncertainty.  It gives a tour of the many varieties of ignorance and uncertainty and the many strategies that have been developed in different disciplines and professional fields.  Through this tour, it becomes very clear that uncertainty is not a single phenomenon, and not even a couple, but instead is like a rainforest ecosystem of species. (My metaphor, not his.)

One vivid illustration of this is the taxonomy of ignorance and uncertainty.  Here's the original taxonomy by Smithson in Ignorance and Uncertainty:
In 2000, I modified this for a presentation I gave at a workshop at Wharton Business School on Complexity Science in Business.  Here's my taxonomy (2000 version):

Smithson and his colleagues have updated their taxonomy, which is presented as Figure 24.1 in Chapter 24 "The Nature of Uncertainty" in: Smithson, M., & Bammer, G. (2012). Uncertainty and Risk: Multidisciplinary Perspectives. Routledge.   (I can't find an on-line version of the diagram, sorry.) If you are looking for one book on the topic, I'd suggest this one.  It's well edited and presents the concepts and practical implications very clearly.

I don't think there is one definitive taxonomy, or that having a single taxonomy is essential for researchers.  I find them useful in terms of scoping my research, relating it to other research (esp. far from my field), and in selecting modeling and analysis methods that are appropriate.

Of course, there are other taxonomies and categorization schemes, including Knight's distinction between risk (i.e. uncertainty that can be quantified in probabilities) and (true) uncertainty (everything else).  Another categorization you'll see is epistemic uncertainty (i.e. uncertainty in our knowledge) vs. aleatory uncertainty (i.e. uncertainty that is intrinsic to reality, regardless of our knowledge of it).  The latter is also known as ontological uncertainty.  But these simple category schemes don't really capture the richness and variety.

The main point of all this is that ignorance and uncertainty come in many and varied species.  To fully embrace them (i.e. model them, analyze them, make inferences about them), you can't subsume them into a couple of categories.

[Edit:  Smithson's blog is here.  Though it hasn't been updated in two years, there's still some good stuff there, such as "Writing about 'Agnotology, Ignorance and Uncertainty'".]

Monday, July 22, 2013

Where are the NLP or Text Mining Tools for Automated Frame Analysis?

I'd like to do Frame Analysis (Goffman 1976, Johnston 1995) on a medium-sized corpus of text (articles, speeches, blog posts and comments) and I'm looking for NLP or text mining tools to help with it.  Strangely, I can't find anything. All the examples of frame analysis in published research (e.g. Nisbet 2009) use purely manual methods or computer-augmented manual analysis.

Frame Analysis requires sophisticated semantic analysis, filtering, situational understanding, and inference on missing text. Near as I can tell, this level of sophistication is beyond the grasp of the common NLP and text mining tools.  Is this true?  If not, do any of you know of fully automated tools for Frame Analysis?

I should add that I have two use cases. The first, and most demanding, is automatic identification of frames followed by text classification.  The second, and more feasible, is automatic classification of texts given frame definitions and sample texts.  The latter fits the classic machine learning model of supervised learning, so I assume that as long as my training set is large enough and representative enough, I can probably find an adequate ML classification algorithm.
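For the second use case, a minimal supervised-classification sketch in Python with scikit-learn might look like the following.  The frame labels and example texts are invented; real training data would come from manually coded samples.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training set: texts labeled with the frame they express.
    texts = [
        "regulation will destroy jobs and economic growth",
        "the costs of compliance fall hardest on small businesses",
        "we have a moral duty to protect future generations",
        "this is about stewardship of the planet for our children",
    ]
    frames = ["economic", "economic", "moral", "moral"]

    # TF-IDF features (unigrams and bigrams) feeding a logistic regression classifier.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, frames)

    # Classify a new text; with a realistic corpus you would also hold out a test set.
    print(clf.predict(["the policy would raise costs for employers"]))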

[Edit 7/23/2013:  This is the best summary I could find of available tools: Frame Analysis: Software]

__________

Goffman, E. (1976). Frame Analysis: An Essay on the Organization of Experience. Harvard University Press.
Johnston, H. (1995). A methodology for frame analysis: From discourse to cognitive schemata. In Social Movements and Culture (pp. 217–246). University of Minnesota Press.
Nisbet, M. C. (2009). Communicating climate change: Why frames matter for public engagement. Environment: Science and Policy for Sustainable Development, 51(2): 12–23.

Call For Speakers: Cybersecurity Innovation Forum Jan 28-30 '14, Baltimore MD. Due 9/3/13

There's a new event that could be very important in promoting innovation in cyber security.
Cybersecurity Innovation Forum, Jan. 28-30, 2014, Baltimore MD
Call for Speakers, submissions due Sept. 3, 2013

There are four tracks: two slanted toward technical solutions, one slanted toward social/organizational solutions, and one mixed (see bold text):
  • Trusted Computing – Trust through device and system integrity 
  • Security Automation – Automate with trust to speed informed decision making 
  • Information Sharing – Openly and confidently share the information we need to share to make informed decisions and enact automated responses 
  • Research – Explore end-state research themes for designed-in security, trustworthy spaces, moving target, and cyber economic incentives.
I'm probably going to submit one or two proposals to the Research track. One might report on "How Bad Is It?" (breach impact estimation) and the Ten Dimensions of Cyber Security Performance.  The second might be on the topic of innovative research models to improve industry-academic-government-citizen research collaboration, focusing on metrics, economics, and social and organizational aspects.

I'd also like to see a proposal from my brothers and sisters from SIRA on the state of the art in risk analysis and opportunities for research collaborations.

It would be great if this conference had good attendance from innovators in academia and industry.  It sure would help their cause if they had a strong cross-sector program committee.  That was one thing the National Cyber Leap Year folks got right.

Sunday, July 21, 2013

Path Dependence in Tools, or Why I Use Mathematica When Everyone Else Uses Python & R

Nearly every job posting I see requires or desires experience in Python or R or both.  They have clearly won the programming language race for data science applications.  Sadly, I'm just getting started with them, because three years ago I made an impulsive decision that leaves me competent and skilled in Mathematica and not in the others.  The easy path is to continue using it on each new project.

Saturday, July 20, 2013

Quote of the Week: Machiavelli on the Enemies of Reform, Including the 'Incredulity of Mankind'

I found this choice quote in: Paquet, G. (1998). Evolutionary cognitive economics. Information Economics and Policy, 10(3): 343–357.  Emphasis added.
"For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries, who have the laws in their favour; and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it." -- Machiavelli, 1537
Full citation: Machiavelli, N., 1952 (orig. 1537) The Prince, New York: Mentor Book, p 49-50.

Let’s Shake Up the Social Sciences

Given that I'm a student in the first-ever Department of Computational Social Science, I strongly agree with Nicholas Christakis in his New York Times article "Let’s Shake Up the Social Sciences".  I especially like the points he makes regarding teaching social science students to do experiments early in their education process.

Friday, July 19, 2013

Visualization Friday: Probability Gradients

I'm fascinated with varieties of uncertainty -- ways of representing it, reasoning about it, and visualizing it.  So I was very tickled when I came across this blog post by Alex Krusz on the Velir blog.  He presents a neat improvement over the "box and whiskers" plot for representing uncertainty or variation in data points, which he calls "probability gradients".
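Here is a rough sketch in Python of how I read the general idea (my own interpretation, with made-up data, not Krusz's code): draw nested central intervals with decreasing opacity so the band fades out as probability density drops.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Made-up data: 200 noisy measurements at each of 50 x positions.
    x = np.linspace(0, 10, 50)
    samples = np.sin(x) + rng.normal(scale=0.5, size=(200, x.size))

    # Widest interval first, most transparent; narrower intervals drawn on top.
    for coverage, alpha in [(0.9, 0.15), (0.7, 0.25), (0.5, 0.35), (0.3, 0.5)]:
        lo, hi = np.quantile(samples, [(1 - coverage) / 2, (1 + coverage) / 2], axis=0)
        plt.fill_between(x, lo, hi, color="steelblue", alpha=alpha, linewidth=0)

    plt.plot(x, np.median(samples, axis=0), color="navy", label="median")
    plt.legend()
    plt.show()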




Guest on the Risk Science Podcast

On Episode 3 of the Risk Science Podcast, I had a nice conversation with my friends Jay Jacobs and Ally Miller.  The topics included the balance in simplifying complexity, the need to get more industry people involved in the WEIS conference (as participants and presenters), writing winning movie plots about cyber war, and the learning curve for R.

In case you don't recognize it, the Risk Science Podcast (@Risksci on twitter) is new and improved.  Previous iterations were the Risk Hose Podcast, and before that, the SIRA Podcast.

Wednesday, July 17, 2013

On the performance value of "cyber hygiene"

One idea that keeps coming up in the NIST Cyber Security Framework process is that we should be collectively promoting good "cyber hygiene" -- common things that everyone should be doing by habit to protect themselves on-line.  Analogies are made to personal health and hygiene and also to personal safety (auto seat belts).  Vinton Cerf claims to have coined the term.  It is being widely promoted in cyber security awareness programs, including by outgoing DHS Secretary Janet Napolitano at this public event.  There are non-profit organizations focused on it, e.g. Stay Safe Online and Stop, Think, Connect. There's even a certificate in cyber hygiene on offer.  These efforts are often aimed at consumers and individuals, but the same ideas are being promoted for organizations, including those in critical infrastructure industries.
A real "cyber hygiene" promotion poster.  Let's all be smart chipmunks!
While most people seem to believe that it is possible to define "good cyber hygiene" and also worthwhile to promote it, not everyone agrees.  One commentator believes it puts too much burden on individuals and distracts us from the institutional and systematic forces that create or perpetuate the risks in the first place.

Of course, I have to try to answer these questions: where does "cyber hygiene" fit into the proposed Ten Dimensions of Cyber Security Performance?  Can we define "good hygiene practices" in each dimension that serve as a common baseline for all organizations, a minimum acceptable performance level, a common entry point at the lowest maturity level, or similar?

In my opinion, it is possible to define a common set of "cyber hygiene" practices for most individuals and most organizations.  They are good.  Do them.  But don't think you are achieving adequate or even minimum acceptable cyber security performance in an organization by simply implementing good "cyber hygiene".  At best, "cyber hygiene" is a set of practices that helps your organization be "anti-stupid".

Tuesday, July 16, 2013

How simple can we make it?

In the Q&A session on the first day of the 3rd NIST Cybersecurity Framework (CSF) workshop, someone asked if there was a way to simplify the proposed five functional categories in the Core.  Basically, he was saying that he needed to persuade non-specialists, especially executives, and that the five functional categories plus subcategories were too complicated. (The full question and answer are on the video at 1:18:00.)  When I heard that, I nearly sprayed coffee all over my keyboard.  "You want it even SIMPLER??" I yelled out (to my screen).

I immediately thought of this: cyber security in one dimension, using the Grissom Dimension, named after astronaut Gus Grissom.  Grissom gave a speech in 1959 to the workers at the Convair plant where the Atlas rocket booster was being built.  The entire speech, as remembered by a worker: "Do good work."  Yes, we could reduce all of cyber security to the Grissom Dimension, and then it would be simple, right?

I'm a bit sensitive to this because I know many people will say my Ten Dimensions are too complicated.  I wonder myself if it is too complicated and I'm certainly interested in ways to simplify it.  Parsimony is good.  Occam's razor keeps our models clean.

Nice example of "Execution & Operations"

Ally Miller wrote a nice post called "Quant Ops" describing the relationship between DevOps and risk, which fits nicely into Dimension 5. Effective/Efficient Execution & Operations.  Best of all, she works through an example of how it works in practice.  Give it a read.

Monday, July 15, 2013

Why economics and management PhD dissertations are collections of papers

I'm back in California after two wonderful weeks at Summer School in Trento, Italy.  I learned many things, both great and small.  One of them resolved the mystery of how PhD dissertations in economics and management are organized.  In engineering, computer science, and sociology, dissertations are generally organized as a report on one project -- a cohesive whole.  In contrast, many or most dissertations in economics and management are collections of "essays", usually numbering three.  For a long time, I couldn't figure out why.  Now I know.

The driving force is the hiring criteria in the academic job market in economics and management.  I was told by a professor that, to even get an interview at a top university, an applicant had to have one or more publications in a high-ranking journal, and that having many conference publications or a top-quality dissertation was not enough.  Therefore, from the start of the dissertation process, candidates are guided toward journal publication, even at the expense of fully researching the thesis as is done in engineering, computer science, and sociology.

Another reason, I suspect, is a shortage of post-doc positions in economics and management, particularly the latter.  I don't have any data to back this up, so maybe I'm wrong, but it seems that PhDs in management are pushed into the job market for tenure-track positions immediately after completing their dissertations.  If this is so, then it would explain why they would be structuring their dissertation as a collection of three "publishable units".

Thursday, July 11, 2013

Communicating about cyber security using visual metaphors

For a workshop with non-computer people, I needed a simple visual metaphor to communicate how messy and complicated information security can be (and, by extension, cyber security).  This is what I came up with.  It seems to get across the main point on a visceral level. Enjoy.




Monday, July 8, 2013

Recommendations for NIST Cyber Security Framework (CSF) Workshop 3

Here is my input to the NIST CSF process prior to Workshop 3 (July 10-12 in San Diego).
________________________________________________________________________
Dear workshop attendees:

I hope that you find the following recommendations to be helpful.  Wish I could be there!

Russ
________________________________________________________________________

Recommendations:
  1. In my judgement, the five "Cyber Security Functions" described in the July 1 draft are inadequate to support agile and continuously innovative cyber security.  As detailed in this analysis, the five functional categories have serious deficiencies:
    • "Know" is too broad and too vague
    • "Respond" and "Recover" are too narrow and could be combined
    • "Detect" does not adequately cover of Threat Intelligence
    • Missing:
      • Design & Development
      • Resilience
      • Execution & Operations
      • External Engagement
      • Agility & Learning
      • Total Cost of Risk
      • Responsibility & Accountability
  2. Rather than using functional categories which are nothing more than "buckets of content", it would be better to organize the framework around performance dimensions.  This will help make the framework more coherent and better justified.
  3. I recommend organizing the framework according to the Ten Dimensions of Cyber Security Performance (slides), which are explained individually in the linked posts:
    1. Optimize Exposure
    2. Effective Threat Intelligence
    3. Effective Design & Development
    4. Quality of Protection & Controls
    5. Effective/Efficient Execution & Operations 
    6. Effective Response, Recovery, & Resilience
    7. Effective External Engagement
    8. Effective Learning & Agility
    9. Optimize Total Cost of Risk
    10. Responsibility & Accountability
  4. I recommend that the framework should explicitly support Double Loop Learning, which is described in these two posts:
  5. I recommend that pilot projects be started right away to design and test inference methods for estimating cyber security performance, as sketched in this post.

The key to measuring cyber security: inferences, not calculations

(It's getting very late here, so my apologies for any slips or gaps in communication.)

In many of the previous posts on the Ten Dimensions of Cyber Security Performance, I've hinted or suggested that these could be measured as a performance index.  But I'm sure many readers have been frustrated because I haven't spelled out any details or given examples.  Still other readers will be skeptical that this can be done at all.

Sorry about that.  There's only so much I can put in each post without them becoming book chapters. In this post, I'll describe an approach and general method that I believe will work for all or nearly all the performance dimensions.  At the time of writing, this is in the idea stage, and thus it needs to be tested and proven in practice.  Still, I suggest it's worthy of consideration.

[Update 7/19/2013:  After some searching in Google Scholar, I discovered that the method I'm suggesting below is called Bayesian Structural Equation Modeling.  I'm very glad to see that it is an established method that has both substantial theory and software tool support.  I hope to start exploring it in the near future.  I'll post my results as I go.]
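To make the inference idea concrete without the full BSEM machinery, here is the simplest possible sketch in Python (all numbers are invented, and this is an illustration of the general approach, not the method itself): treat the unobserved performance score as a latent quantity with a prior, treat each piece of evidence as a noisy indicator of it, and combine them with a conjugate normal update.

    import numpy as np

    # Prior belief about a latent performance score on a 0-100 scale (invented).
    prior_mean, prior_sd = 50.0, 20.0

    # Noisy indicators of that latent score (e.g. audit rating, red-team result,
    # staff survey), each rescaled to 0-100, with a subjective measurement-error SD.
    indicators = np.array([62.0, 55.0, 70.0])
    indicator_sd = np.array([10.0, 15.0, 20.0])

    # Conjugate normal update: posterior precision is the sum of the precisions;
    # posterior mean is the precision-weighted average of prior and indicators.
    precisions = np.concatenate(([1 / prior_sd**2], 1 / indicator_sd**2))
    values = np.concatenate(([prior_mean], indicators))
    post_var = 1 / precisions.sum()
    post_mean = post_var * (precisions * values).sum()

    print(f"posterior performance estimate: {post_mean:.1f} +/- {np.sqrt(post_var):.1f}")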

Here are my ideas on how to measure performance in each dimension.

NIST's "Cyber Security Functions" compared to the Ten Dimensions

On July 1, NIST posted a draft outline of the CSF.  It proposed five "cyber security functions" to serve as organizing categories for the framework.  Quoting from the draft:
  • "Know – Gaining the institutional understanding to identify what systems need to be protected, assess priority in light of organizational mission, and manage processes to achieve cost effective risk management goals."
  • "Prevent – Categories of management, technical, and operational activities that enable the organization to decide on the appropriate outcome-based actions to ensure adequate protection against threats to business systems that support critical infrastructure components."
  • "Detect –Activities that identify (through ongoing monitoring or other means of observation) the presence of undesirable cyber risk events, and the processes to assess the potential impact of those events."
  • "Respond – Specific risk management decisions and activities enacted based upon previously implemented planning (from the Prevent function) relative to estimated impact."
  • "Recover - Categories of management, technical, and operational activities that restore services that have previously been impaired through an undesirable cybersecurity risk event."
There are several important differences between these five categories and my proposed Ten Dimensions of Cyber Security Performance.  First, NIST is proposing categories of activities and functions to serve as buckets of content.  There's no formal relationship between the categories, at least not stated explicitly.  Second, the NIST categories only partially and imperfectly cover the space of the Ten Dimensions, as shown in this matrix:


If you believe in the scope and organization of the Ten Dimensions, then the deficiencies of the NIST functional categories become apparent in the comparison:

  1. "Know" category is scoped too broadly. It is overloaded and contains too many performance dimensions.  I list five question marks (?) in the matrix because I can't tell if these would be included in "Know" or not.
  2. "Respond" and "Recover" categories map to a single performance dimension, implying that they are probably scoped too narrowly.
  3. A glaring omission is lack of coverage for Resilience, which is vital for critical infrastructure.
  4. Also there's no coverage of dimension 5. Effective/Efficient Execution & Operations, and probably no coverage of five other dimensions: 3. Effective Design & Development; 7. Effective External Engagement; 8. Effective Agility & Learning; 9. Optimize Total Cost of Risk; and 10. Responsibility & Accountability.
Thus, the NIST functional categories put too much attention on one or two areas and not enough on many others.  Most seriously, there is no coverage of the second loop of the Double Loop Learning model, which implies that the NIST functional categories are inadequate to support agile and continuously innovative cyber security.

Agile Cyber Security and Double Loop Learning

In this post, I want to summarize dimensions 7 through 10, focusing on their interactions and relationships and how they deliver Double Loop Learning. (See this post for the full list of dimensions.)

Together, dimensions 7 through 10 provide the "dynamic capabilities" of an organization to achieve agility and rapid innovation in the face of constant changes in the landscape.  I mentioned this specifically in the context of dimension 8. Effective Agility & Learning, but the notion of dynamic capabilities extends to the subsystem composed of dimensions 7 through 10 as well.


Dimension 10: Responsibility & Accountability

This is the tenth and last post defining each of the Ten Dimensions of Cyber Security Performance.  It's also the capstone of all the performance dimensions, tying them together from the perspective of leadership and management.

The dimension of Responsibility & Accountability includes all processes that link the decision-makers in an organization (at all levels) with the stakeholders of the organization who are affected by cyber security outcomes, including:

  • The Board of Directors
  • Shareholders
  • Customers
  • Employees (as individuals)
  • Suppliers, distributors
  • Outsource partners
  • Regulators
  • Legal authorities
  • (others)
This dimension includes most of the processes of governance and compliance, or at least the interfaces between organization units and the external interfaces.  But I chose not to call it "Governance & Compliance" because those are both formally codified processes, and I felt it was important to include some of the less formal and tacit aspects.  This is especially important if we want to encourage widespread acceptance of responsibility and accountability beyond the core executives.  In addition, I felt the title "Responsibility & Accountability" focuses attention on performance, not just activity or formal structures.

Dimension 9: Optimize Total Cost of Risk

This is the ninth post defining each of the Ten Dimensions of Cyber Security Performance.  Like dimension 8. Effective Agility & Learning, this dimension applies to the performance of the cyber security program as a whole, and therefore I've located it to the side of the other blocks, as if it were at a higher level.

The dimension of Optimize Total Cost of Risk includes all of the processes that assess and manage cyber security at an enterprise level, specifically in terms of resources (financial and non-financial), liability (potential and realized), risk mitigation (including insurance), and also balancing these against the upside of taking risks (i.e. the benefits of exposing and using information and systems).  Essentially, this is the financial side of risk management at the enterprise level.

Most existing cyber security frameworks either omit this dimension altogether, or treat it merely as the aggregation of risk estimates for each and every possible attack on every asset, and every possible consequence.

In contrast, the approach I'm recommending for this performance dimension starts at the enterprise level and estimates costs associated with cyber security as a whole, both the costs of defenses and the costs of loss events.  It adapts the loss distribution approach (LDA) for Operational Risk (OR) within Enterprise Risk Management (ERM) that has been pioneered and refined in the financial services industry.  This is a fairly complex topic on its own, so I won't attempt to give a full exposition here.  There are references with further detail below.
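To give a flavor of the LDA approach, here is a toy Monte Carlo sketch in Python (the distribution choices and parameters are invented for illustration, not calibrated to any real loss data): simulate the number of loss events per year from a frequency distribution, draw a severity for each event, and examine the distribution of aggregate annual loss.

    import numpy as np

    rng = np.random.default_rng(7)
    n_years = 100_000

    # Frequency: Poisson number of loss events per simulated year (invented rate).
    event_counts = rng.poisson(lam=3.0, size=n_years)

    # Severity: lognormal loss per event (invented parameters), summed within each year.
    annual_loss = np.array([
        rng.lognormal(mean=11.0, sigma=1.5, size=n).sum() for n in event_counts
    ])

    print("mean annual loss:", round(annual_loss.mean()))
    print("99th percentile annual loss:", round(np.quantile(annual_loss, 0.99)))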

Dimension 8: Effective Agility & Learning

This is the eighth post defining each of the Ten Dimensions of Cyber Security Performance.  Like dimension 7. Effective External Engagement, performance in this dimension shapes and structures operational cyber security as a whole.  On the block diagram, I'm showing it as being within the focal organization but somewhat apart from the others, to indicate that it is concerned with holistic performance.

The dimension of Effective Agility & Learning includes the processes of reconfiguring, reengineering, redesigning, and radically innovating the cyber security program as a whole.  This is "Second Loop Learning" in the Double Loop Learning model, and is in contrast to the Single Loop Learning for operational cyber security that was discussed in a previous post.  In a later post, I'll discuss how dimensions 7 through 10 come together as Second Loop Learning.  But for now, you can view this dimension as the processes that execute and make operational this outer learning loop.

Including this dimension in our framework is vital for achieving the goals of dynamic and rapidly innovating cyber security.  Existing cyber security frameworks either omit it or subsume it in other categories in a marginal or vague form.

Dimension 7: Effective External Engagement

This is the seventh post defining each of the Ten Dimensions of Cyber Security Performance.  Thus far, I've defined the six dimensions that comprise operational or day-to-day cyber security.  Dimensions 7 through 10 are qualitatively different in that they guide, structure, and set requirements and constraints for operational cyber security but do not directly control or influence it.  These dimensions might be more controversial as essential performance dimensions, but I hope to convince you that they are both necessary and not subsumed in the first six dimensions.

Performance in the dimension of Effective External Engagement means identifying, understanding, negotiating, implementing, monitoring, and improving relationships with organizations and entities external to the focal organization.  "External relationships" includes technical or informational relationships with service providers such as Internet Service Providers (ISPs), DNS service providers, registrars, certificate authorities, federated identity service providers, etc.  It can also include the entire supply chain for an organization's information technology and communications.  Obviously this includes suppliers of information security products and services.

But it would be a mistake to view this dimension as solely a matter of managing technological relationships or supply chain management.  If it were, then it might be subsumed under the other dimensions.  The rest of this post explains what's new.

Sunday, July 7, 2013

Operational Cyber Security and Single Loop Learning

In this post, I want to summarize the previous posts and also describe the interactions and relationships among the first six performance dimensions.  (See this post for the full list of dimensions.)

The core of cyber security performance is at the operational level, which includes processes and activities that drive day-to-day results.  Existing cyber security frameworks include many of the performance dimensions I am proposing, but in my opinion they have gaps and inadequacies:
  • Resilience is often omitted or treated as a disjoint collection of processes.
  • Design & Development is often omitted, or is defined so narrowly that it includes only software design.
  • Execution & Operations is often underappreciated and is subsumed under other categories.
  • Protection & Controls is often defined too broadly, as if nearly every pro-active cyber security practice could be defined as a new protection process or control.
  • Exposure is often treated too narrowly, focusing on identifying information assets and technical vulnerabilities, and not giving sufficient attention to the total "exposure surface" (analogous to attack surface), which includes all aspects of exposure -- people, processes, technology, and information.  Also, many existing frameworks focus on minimizing exposure and minimizing vulnerabilities, and thus ignore the balancing act of promoting access and use of Systems to achieve organizational goals.
  • Threat Intelligence, in some frameworks, is focused too narrowly on methods of attack (a.k.a. the "threats") and not enough on the actors behind those methods (the "threat agents").  Also, they tend to put too much attention on malicious actors and not enough on actors who are prone to accidents and errors, and on actors who are exploitive-but-legal.
  • Taken together, existing cyber security frameworks tend to be inadequate in how they treat the intersection of information security, privacy, IP protection, industrial control protection, national/homeland security, civil liberties, and digital rights.
The Ten Performance Dimensions that I am proposing aim to remedy these gaps and inadequacies, which I believe is a useful contribution.  But even more important is the dynamic interaction of these dimensions, because it is those interactions that can yield a system that is agile and capable of rapid innovation.

Friday, July 5, 2013

Dimension 6: Effective Response, Recovery & Resilience

This is the sixth post defining each of the Ten Dimensions of Cyber Security Performance.



Unlike the five previous dimensions, the Effective Response, Recovery & Resilience dimension is concerned only with the stream of Events associated with cyber security. These Events include “incidents”, “breaches”, “leaks”, “compromises”, “violations”, “outages”, and the like. The dimension it is most closely associated with is 5. Effective/Efficient Execution & Operations. Thus, it appears on the block diagram just below, and next to, “Events”. There is a new interaction path with Events, because it's possible to have cascading events and processes after the initial incident. (See my recent paper on breach impact estimation.)

This performance dimension includes processes of incident response, digital forensics, business and legal response and recovery (including regulatory processes), etc. It also includes processes and activities designed to promote resilience -- the ability to continue operating even in the face of cyber attacks or cyber-physical events. These processes are covered in many existing cyber security frameworks, so I won't dive into details here. But new issues arise when these are considered as a performance dimension and not just a loose collection.

Shocker!! I won Bruce Schneier's 6th "Movie-Plot Threat Contest"

I think the stopwatch on 15 minutes of fame just started.  From Bruce's blog:
For this year's contest, I want a cyberwar movie-plot threat [in 500 words or less]. (For those who don't know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services -- people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.
On May 15, I announced the five semi-finalists. Voting continued through the end of the month, and the winner is Russell Thomas, with this entry:  

Dimension 5: Effective/Efficient Execution & Operations

This is the fifth post defining each of the Ten Dimensions of Cyber Security Performance.

The dimension of Effective/Efficient Execution & Operations is closely related to the previous one, 4. Quality of Protection & Controls.  It has nearly identical relationships with the other dimensions and thus is portrayed in the block diagram using the same form.  It has a complementary and mutually supporting relationship with protections and controls.  Crudely speaking, the protections and controls are the "brains" while execution and operations are the "brawn" -- i.e. the "engine" that gets work done within the cyber security system and also in the interface between cyber security and every other aspect of the organization.

It also opens a separate interaction path with Actors, who can engage with the related processes as artifacts themselves, not just as they are implemented in the organization.

This dimension includes processes related to implementing and monitoring cyber security, including logging, training, reporting, patching, configuring and updating (e.g. servers, firewall rules, access control rules), and so on.  Along similar lines, the software industry has recently developed a related concept and method under the name "Development Operations" or "DevOps".  Here are some good summaries on DevOps applied to information security:
If anyone doubts the significance of execution and operations, consider the recent case of the Windows Azure outage, which was caused by an operational error regarding expired SSL certificates (described here and here).  This outage had significant financial consequences for Microsoft beyond the response and recovery costs because they offered compensating credits to their customers.
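As a trivial illustration of the kind of routine operational check that catches this class of error, here is a small Python sketch (not Microsoft's tooling; the host name and 30-day threshold are just examples):

    import socket
    import ssl
    import time

    def days_until_cert_expiry(host, port=443):
        """Connect to host:port over TLS and return days until its certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires_epoch = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires_epoch - time.time()) / 86400

    days = days_until_cert_expiry("www.example.com")
    print(f"certificate expires in {days:.0f} days")
    if days < 30:
        print("WARNING: renew soon")   # e.g. open a ticket or page the on-call operator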

Moving beyond the details of each of the processes, I'll focus on how execution and operations, taken as a whole, can be measured and managed as a performance dimension.

Thursday, July 4, 2013

Dimension 4: Quality of Protection & Controls

This is the fourth post defining each of the Ten Dimensions of Cyber Security Performance. These are some initial thoughts presented in a sketchy fashion.  All are subject to much refinement, revision, and condensation.

Conceptually, protections and controls intercede between Threats and Exposures, sometimes directly through modifications to Systems (e.g. system configuration).  This dimension also envelops Design & Development because it essentially takes the design of Systems and the rest of the cyber security program as given.

There is also a new interaction path with Actors, meaning that people and organizations of all stripes can encounter and interact with protections and controls as artifacts in themselves, and not just as they are implemented to protect Systems.  A prime example would be security and privacy awareness programs.

This box only goes half-way around because the other half is covered by Dimension 5: Effective/Efficient Execution & Operations.

This performance dimension includes a large portion of what specialized teams do when they implement cyber security.  Examples include access control, identity management, intrusion detection and prevention, exfiltration controls, malware detection and removal, and so on.  These components are widely known and discussed, so I won't spend time elaborating on them here.  In fact, it is sometimes tempting to reduce all of cyber security to a set of relevant protections and controls.  This is a mistake, and one of the purposes of this framework is to correct that mistaken thinking.

Tuesday, July 2, 2013

Dimension 3: Effective Design & Development

This is the third post defining each of the Ten Dimensions of Cyber Security Performance.

Design & Development determines the general characteristics of an organization and its systems relative to cyber security.  Design includes processes of thinking, planning, and formal definition, while Development includes the realization of designs, including refinements and adjustments.

In the block diagram, I've positioned it underneath the "Systems" block because, in a certain sense, it provides the conceptual foundation and architecture of the Systems and how they behave, i.e. how they generate output events in response to inputs and interactions.  Of course this is a simplification, since its influence and scope touch nearly every other performance dimension.

While this obviously includes any software and hardware development performed internally, it's very important to note that this dimension includes all relevant design and development activities, including:
  • business process design and development
  • organization design and development
  • enterprise architecture
  • information and data architecture
  • partner relations, including supply chain, distribution chain, and outsource partner relations
  • contracts
  • governance, both inside and outside the organization
  • incentive systems
Note that this definition excludes design and development performed outside the organization.  For example, if an electric utility firm does no internal software or hardware development (not even through contractors), it depends on software and hardware from independent vendors.  Notice that this firm has made a design decision, namely to "buy" rather than "make".  But beyond that, it has no influence over the design and development decisions of its vendors.  Therefore, it would need to manage the performance of its vendors through a different performance dimension: 7. External Engagement.

Design & Development is extremely important to cyber security performance because it has the potential to yield systemic improvements and benefits.  Conversely, neglect could lead to persistent and systemic dysfunctions, with the negative consequences showing up in many areas and functions.  This is one of the important lessons from the Total Quality Management movement -- that quality (or lack of it) is often pre-determined at design time rather than at manufacturing time or later.

Dimension 2: Effective Threat Intelligence

This is the second in a series of posts describing each of the Ten Dimensions of Cyber Security Performance.

In terms of the framework I'm proposing, Threat Intelligence mediates between Actors and Systems, but only in the context of what is exposed, as suggested by its position in the block diagram.

Performance in this dimension means developing ongoing intelligence to answer these questions:

  • Who or what might be a threat to our information or systems?
  • In what setting or context?
  • What are the capabilities and interests of these threat agents?
  • How do threat agents benefit from our information or systems?
  • What are the negative consequences to us?

Dimension 1: Optimize Exposure

This is the first of ten or so posts that describe each of the Ten Dimensions of Cyber Security Performance.  The aim of these posts is to convey:
  • What's included and not included in each
  • Why these are performance dimensions, not just functions or categories
  • How these dimensions are interrelated as a dynamical system to enable agile and continually innovative cyber security
As a visual heuristic, I'm going to build these dimensions into a composite block diagram where the position of each block is intended to signify something important about how it is related to the others.  But I'm intentionally keeping the diagram simple so a lot of details about interaction and dependency are omitted.

Context

Here's a simplistic context diagram that will serve as the foundation. All it says is that cyber security for an organization involves Actors who interact with Systems and this interaction leads to Events.
  • Actors are people inside or outside the organization, or people acting through systems (computers, devices, etc.).  Actors could also be organizations.  Actors can be legitimate, illegitimate/malicious, or anything in between. This includes Actors who might make errors.
  • Systems are combinations of technology, processes, and information, and also the people and physical facilities that provide direct support.
  • Events are outcomes, results, incidents, etc., including both good and bad, intentional and unintentional, beneficial or detrimental.
This context intentionally includes all the desired and positive uses of systems, not just the bad outcomes and incidents that other cyber security frameworks focus on.  Why?  Because I want to emphasize that all cyber security performance is fundamentally about supporting and enabling the desired and positive uses of systems.  If, on the other hand, we only focus on minimizing or avoiding bad outcomes, then we'd just unplug all the computers and bury them in a deep hole in the ground!

Three points need to be emphasized before we dive in:
  • These dimensions do not necessarily map to specific teams or departments.  In particular, they are not solely relevant to security specialists, Chief Information Security Officers, or any other role, department, or function. These performance dimensions are shared across the organization.
  • Several common functions or activities (e.g. "user awareness training", "business engagement", "risk management") do not appear as separate dimensions because they are distributed across several of the performance dimensions.  Others, such as "compliance" and "governance", are subsumed in a single performance dimension.
  • Performance in each dimension is relative to each organization's objectives, as in Management By Objectives, originated by Peter Drucker.  I'll describe this more in a later post.

Now that we have a context and preliminaries, here's a discussion of the first performance dimension.

Dimension 1: Optimize Exposure

The starting place for cyber security performance is to understand and optimize the exposure of information and systems to attacks, breaches, violations, or other misuses.

Exposure mediates between Actors and Systems, as suggested by its place in the block diagram. "Exposure" means that information and systems are "accessible", "visible", "potentially vulnerable", and the like.  Notice that this is a broader concept than "vulnerable", which appears in other frameworks.  I like the broader concept because it immediately puts attention on the balancing act between protecting information and systems by reducing exposure, on the one hand, and promoting positive uses by maximizing exposure, on the other.

Therefore, performance in this dimension is optimization, i.e. balancing the tradeoffs as best we can.

Furthermore, performance is composed of an intelligence component and an execution component.

By "intelligence", I mean constantly striving to discover and understand what is and is not being exposed, what should be and what should not be exposed, and how to communicate to appropriate people what the exposure is or should be.  This necessarily involves engaging with the people who own the information and systems, as well as the systems themselves.

By "execution", I mean making decisions about exposure and implementing those decisions.  This includes many common security practices and policies, including access control, privileges, system configurations, vulnerability detection and remediation, and the like.  But it also includes legal and HR  rules and practices for confidentiality, propriety, and privacy.  Basically, these rules define what information is confidential or the degree of confidentiality and who can or cannot have access to that information.  Likewise for proprietary information and private information.  Again, this is a balancing act because too rules that are restrictive can be as harmful as rules that are too loose.

Optimizing exposure is a broad responsibility.  Each department and team that owns data has responsibility to make decisions on confidentiality and propriety, for example.  Even individual people have some responsibility to manage and optimize the exposure of their personal information and systems.  This requires that they develop some awareness and understanding of what data they have, how it might be exposed, and how to make tradeoff decisions.

Ideally, there would be a single composite measure of exposure.  One candidate might be a variation on the "attack surface" metric, i.e. an "exposure surface".  The concept of attack surface has been formalized and measured in research on software design, but I think it could be generalized.  Intuitively, the bigger the "exposure surface", the more opportunities there are for threat agents to find and exploit vulnerabilities.  It probably would be important to talk about "regions" within the "exposure surface" that are more or less accessible.  But more research will be needed to put these ideas into practice.
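Purely to illustrate the shape such a measure might take (the asset list, weights, and scoring rule below are all invented; real research would be needed to justify any particular formula), here is a toy sketch in Python:

    # Each exposed element gets a rough accessibility weight (how easy it is to reach)
    # and a sensitivity weight (how much it matters).  All values are invented.
    exposed_elements = [
        # (name,                  accessibility 0-1, sensitivity 0-1)
        ("public web app",        0.9,               0.6),
        ("partner API",           0.6,               0.8),
        ("employee laptops",      0.5,               0.7),
        ("internal HR database",  0.2,               0.9),
    ]

    # One naive aggregate: the sum of accessibility * sensitivity. Bigger = larger surface.
    exposure_surface = sum(a * s for _, a, s in exposed_elements)
    print(f"toy exposure surface score: {exposure_surface:.2f}")

    # "Regions" of the surface that are more accessible than others, per the discussion above.
    high_access = [name for name, a, _ in exposed_elements if a >= 0.6]
    print("more accessible region:", high_access)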

Short of having a composite measure, I think it's feasible to measure performance in the Exposure Dimension by evaluating evidence that addresses these questions:

  1. How much do we know about our information and systems exposure?
  2. Do we know where our blind spots are?  Are we making discoveries and developing understanding in the most important areas of exposure?
  3. Are we being successful in executing -- making decisions and then implementing them?
  4. Do we have evidence that we are making good trade-off decisions (a.k.a. optimizing)?
  5. Is our capability to optimize exposure getting progressively better?  Or are we stuck in a place of inadequate performance?
Each of these questions will have its own metrics or performance evidence, according to the objectives of the organization.  I'm offering these as suggestions for further thought and refinement, and not as the last word.

In summary, the Optimize Exposure performance dimension includes many of the activities, policies, and practices that people associate with information security, privacy, IP protection, and so on.  I'm not suggesting much that is new except that the integration of these elements can and should be measured in terms of performance.

(Next dimension: 2. Effective Threat Intelligence)