Tuesday, December 18, 2018

Does Modern Portfolio Theory (MPT) apply to cyber security risks?

Many months ago, my colleague David Severski asked on Twitter how Modern Portfolio Theory (MPT) does or does not apply to quantified cyber security risk:

I replied that I would blog on this "...soon".  Ha!  Almost four months later.  Well, better late than never.

Short answer: No, MPT doesn't apply.  Read on for explanations.

NOTE: "Cyber security risk" in this article is quantified risk -- probabilistic costs of loss events or probabilistic total costs of cyber security.  Not talking about color-coded risk, categorical risk, or ordinal scores for risk.  I don't ever talk about them, if I can help it.
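To make "probabilistic costs of loss events" concrete, here is a minimal Monte Carlo sketch of quantified annual loss. All parameters (Poisson event frequency, lognormal severity) are illustrative assumptions for the sketch, not estimates from any real data:

```python
import math
import random
import statistics

random.seed(42)

def simulate_annual_loss(freq_mean=3.0, sev_mu=10.0, sev_sigma=1.5):
    """One simulated year: Poisson event count, lognormal cost per event."""
    # Poisson draw via Knuth's method (fine for small means)
    threshold = math.exp(-freq_mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            break
        k += 1
    # total annual cost = sum of per-event losses
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(k))

years = [simulate_annual_loss() for _ in range(10_000)]
mean_loss = statistics.mean(years)
p95 = sorted(years)[int(0.95 * len(years))]
print(f"expected annual loss: ${mean_loss:,.0f}")
print(f"95th percentile:      ${p95:,.0f}")
```

Swapping in fitted distributions and real loss data is what separates this toy from an actual quantified-risk model, but the output -- a distribution of costs rather than a color or ordinal score -- is what "quantified risk" means throughout this post.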

Thursday, November 8, 2018

NIST Cybersecurity Risk Management Conference

I'm presenting today in a 45 minute session.  It's a quick overview of previous topics, focused on the Ten Dimensions.  The emphasis in this short presentation will be on defining what "performance" means and why managing performance in cyber security is not simply a matter of implementing a list of practices. Below are the slides and relevant blog posts.

Here is an Applicability Matrix I created that shows how the existing NIST CSF 1.1 applies to each of the Ten Dimensions.  You'll notice that there are only a few blue squares, which indicates that the Ten Dimensions is a different way of carving up the space.  This has pluses and minuses, of course.  In the blog posts on the Ten Dimensions, I explain and justify these choices.  You'll also notice that some of the Ten Dimensions are poorly covered -- 3. Effective Design & Development; 8. Effective Agility and Learning (incl. metrics); and 9. Optimize Total Cost of Risk.

Applicability Matrix. Rows = 10 Dimensions. Columns = NIST CSF.
Darker colors = more CSF items are applicable.
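For readers who want to build their own version, the matrix boils down to a simple data structure: rows keyed by dimension, one count per CSF column, with darker rendering for higher counts. The dimension names below come from this post and the CSF 1.1 function names are real, but the cell counts are made-up placeholders, not the actual values from my matrix:

```python
# Sketch of the Applicability Matrix as a data structure.
csf_functions = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# rows = Ten Dimensions (three shown), columns = CSF functions,
# cell value = number of applicable CSF items (placeholder counts)
matrix = {
    "3. Effective Design & Development": [1, 0, 0, 0, 0],
    "8. Effective Agility and Learning": [0, 0, 1, 1, 0],
    "9. Optimize Total Cost of Risk":    [0, 1, 0, 0, 0],
}

shades = " .:#"  # darker character = more applicable CSF items
print(" " * 36 + "".join(f"{f[:3]:>4}" for f in csf_functions))
for dim, counts in matrix.items():
    row = "".join(f"{shades[min(c, len(shades) - 1)]:>4}" for c in counts)
    print(f"{dim:<36}{row}")
```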

Monday, April 16, 2018

Presentation: Navigating the Vast Ocean of Browser Fingerprints

Here is a PDF version of my BSides San Francisco presentation (today, Monday, at 4:50pm).

COMING SOON:  GitHub repo with Python and R code, plus sample data.  Watch this space.

Wednesday, March 7, 2018

The Swan of No-Swan: Ambiguous Signals Tied To Cataclysmic Consequences

What do you see? Colored blocks, or a Black Swan, or both?
This is figure-ground reversal, a type of ambiguity.
We are in the middle of the 100th anniversary of the Great War (a.k.a. World War I).  None of the great powers wanted a long total war. Yet the unthinkable happened anyway.

Surprisingly, historians are still struggling to understand what caused the war.

One of the biggest causal factors was ambiguous signals that precipitated cascading actions and reactions. When tied to cataclysmic consequences, this represents a distinct class of "Black Swan" systems.

(Here are some great lectures for those interested in a full analysis of the causes of the Great War: Margaret MacMillan, Michael Neiberg, Sean McMeekin)

Rethinking "Black Swans"

As I have mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is seventh in the series "Think You Understand Black Swans? Think Again".) 

It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves; rather, it is the processes -- the generating mechanisms, our evidence about them, and our methods of reasoning -- that make events unexpected and surprising.


A "Swan of No-Swan" is a process where: 

  • The generating process is some set of large opposing forces that can be triggered by signals, or by decisions tied to ambiguous signals;
  • The evidence consists of signals -- communications, interpreted actions, interpreted inaction, rhetoric/discourse, irreversible commitments -- that have ambiguous interpretations, either intentionally or unintentionally;
  • The method of reasoning is either rational expectations (normative Decision Science) or biased expectations (Behavioral Psychology and Economics).  The key feature is a lack of attention or awareness that one might be mis-perceiving the signals, combined with a strategic preference for precautionary aggressiveness.

Main Features

First, let us recognize that ambiguity is pervasive in social, business, and political life.  Ambiguous signals and communication have many pro-social functions: keeping our options open, saving face, avoiding insult or offense, optimistic interpretation of events, and so on.  They are especially prevalent in the lead-up to major commitments -- romance+marriage in personal life and big ticket sales in commercial life.

Most of the time, ambiguity has a smoothing effect.  It reduces the probability of extreme/rare events because of the flexibility of action and response associated with ambiguous signals.  Therefore, most people would not associate ambiguous signals with any type of "Black Swan" phenomena.

But when tied to "large opposing forces", things change and that's why this deserves to be a separate type of Black Swan.  Ambiguous signals become dangerous when they are linked to cataclysmic processes via certain types of reasoning processes.  It's not rational vs. biased.  Instead, it's committed self-confidence vs. self-aware fallibility. In committed self-confidence, there is lack of attention or awareness that one might be mis-perceiving the signals, combined with a strategic preference for precautionary aggressiveness.  "Shoot first, ask questions later".


Military forces leading to total war are the obvious case, and most common in history.   But we are now in a new age -- the Cyber Age!  (Yes.  I said it. Cyber)  Here are some cyber examples.
  • Offensive cyber capabilities -- By "offensive" I mean everything from "hack back" to punitive or disabling cyber attacks on critical infrastructure. If it becomes common for nation states and various non-state actors to develop and deploy offensive capabilities, then everyone faces the strategic dilemma of when and how much to deploy/trigger each capability.  This depends critically on the ability of each actor to detect and accurately interpret a wide variety of signals and evidence related to normal and abnormal activity, including breach events, threat actor attribution, signs of escalation, and so on.  These are all swimming in ambiguity, including intentional ambiguity (spoofing, camouflage, etc.).
  • Remote kill switches -- What if Internet of Things (IoT) makers build "remote kill switches" into their devices? After all, we'd like to prevent our toaster, pacemaker, automobile, or drone from doing harm in case it starts malfunctioning catastrophically.  Are there scenarios where one or more IoT manufacturers decide to remotely kill their devices at the same time?  What if their monitoring instruments make it appear that some threat actor(s) are making self-driving cars intentionally crash into crowds of people?  Out of an abundance of caution, they might remotely kill the IoT devices to cut off the apparent disaster as it is unfolding.  But maybe the threat actor is only spoofing the signals because they pwned the monitoring devices and infrastructure.  Or maybe it's the precautionary action of some other IoT device system owner that is causing your monitoring system to go bonkers.  I could go on, but you get the idea.
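The interaction of ambiguous signals with "shoot first, ask questions later" reasoning can be sketched as a toy Monte Carlo simulation. The misread probability, the number of rounds, and the idea that a second independent read is available are all illustrative assumptions, not a model of any real conflict:

```python
import random

random.seed(7)

def escalation_prob(misread=0.05, rounds=20, trials=20_000, verify=False):
    """Probability that at least one benign signal triggers retaliation.

    misread: chance an actor reads a benign signal as hostile (ambiguity).
    verify:  if True, the actor takes a second independent read and only
             retaliates when both reads look hostile (self-aware fallibility);
             if False, one hostile read is enough ("shoot first").
    """
    escalated = 0
    for _ in range(trials):
        for _ in range(rounds):
            hostile_read = random.random() < misread
            if verify:
                # second, independent look before committing
                hostile_read = hostile_read and random.random() < misread
            if hostile_read:
                escalated += 1
                break
    return escalated / trials

p_shoot_first = escalation_prob(verify=False)
p_second_look = escalation_prob(verify=True)
print(f"shoot first:        {p_shoot_first:.3f}")
print(f"pluralistic check:  {p_second_look:.3f}")
```

Even a modest per-signal misread rate compounds across many rounds into a high chance of accidental escalation, while forcing a second independent interpretation collapses that chance -- which is the intuition behind treating evidence pluralistically, discussed below.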

How to Cope with Swan of No-Swan

It would be good to decouple the generating process if possible.  Avoid the arms race to begin with.  (Give peace a chance!)

Absent that, the best antidote is to treat evidence and signals pluralistically, which means avoiding the tendency to commit to one interpretation or another too early.  This is very hard to do within one person or even one cohesive team. It's easier to assign different "lenses" to different people or teams who then proceed with their analysis and decision recommendations independently.

Finally, the decision makers who can "pull the trigger" should seek strategy alternatives to the preference for precautionary aggressiveness ("Shoot first, ask questions later").  While decision makers may feel like this is their only choice (and it may be), there is great advantage if more flexible alternatives can be found.

Monday, October 31, 2016

The Cyber Insurance Emperor Has No Clothes

(Of course, the title is hyperbole and attention-seeking. Now that you are here, I hope you'll keep reading.)

In the Hans Christian Andersen story, The Emperor's New Clothes, the collective delusion about the Emperor's grand clothes was shattered by a young child who cried out: "But he has got nothing on!"

I don't mean that cyber insurance has no value or that it is a charade.

My main point: cyber insurance has the wrong clothes for the purposes and social value to which it aspires.

This blog post sketches the argument and evidence. I will be following up separately with more detailed and rigorous analysis (via computational modeling) that, I hope, will be publishable.

tl;dr: (switching metaphors)
As a driving force for better cyber risk management, today's cyber insurance is about as effective as eating soup with a fork.
(This is a long post. For readers who want to "cut to the chase",  you can skip to the "Cyber Insurance is a Functional Misfit" section.)

Wednesday, October 19, 2016

Orange TRUMPeter Swans: When What You Know Ain't So

Was Donald J. Trump's political rise in 2015-2016 a "black swan" event?  "Yes" is the answer asserted by Jack Shafer in this Politico article. "No" is the answer from other writers, including David Atkins in this article on the Washington Monthly Political Animal Blog.

Orange Swan
My answer is "Yes", but not in the same way that other events are Black Swans.   An Orange Swan like the Trump phenomenon fits this aphorism:
"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- attributed to Mark Twain
In other words, the signature characteristic of Orange Swans is delusion.

Rethinking "Black Swans"

As I have mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is sixth in the series "Think You Understand Black Swans? Think Again".)

It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves; rather, it is the processes -- the generating mechanisms, our evidence about them, and our methods of reasoning -- that make events unexpected and surprising.

Tuesday, June 21, 2016

Public Statement to the Commission on Enhancing National Cybersecurity, 6-21-2016

[Submitted in writing at this meeting. An informal 5 min. version was presented during the public comment period. This statement is my own and does not represent the views or interests of my employer.]


Cyber security desperately needs institutional innovation, especially involving incentives and metrics.  Nearly every report since 2003 has included recommendations to do more R&D on incentives and metrics, but progress has been slow and inadequate.


Why? Because we have the wrong model for research and development (R&D) on institutions.

My primary recommendation is that the Commission’s report should promote new R&D models for institutional innovation.  We can learn from examples in other fields, including sustainability, public health, financial services, and energy.

What are Institutions and Institutional Innovation?

Institutions are norms, rules, and social structures that enable society to function. Examples include marriage, consumer credit reporting and scoring, and emissions credit markets.

Cyber security[1] has institutions today, but many are inadequate, dysfunctional, or missing.  Examples:
  1. overlapping “checklists + audits”; 
  2. professional certifications; 
  3. post-breach protection for consumers (e.g. credit monitoring); 
  4. lists of “best practices” that have never been tested or validated as “best” and therefore are no better than folklore.  

There is plenty of talk about “standards”,  “information sharing”, “public-private partnerships”, and “trusted third parties”, but these remain mostly talking points and not realities.

Institutional innovation is a set of processes that either change existing institutions in fundamental ways or create new institutions.   Sometimes this happens with concerted effort by “institutional entrepreneurs”, and other times it happens through indirect and emergent mechanisms, including chance and “happy accidents”.

Institutional innovation takes a long time – typically ten to fifty years.

Institutional innovation works differently from technological innovation, which we do well.  In contrast, we have a poor understanding of institutional innovation, especially of how to accelerate it or steer it toward specific goals.

Finally, institutions and institutional innovation should not be confused with “policy”.  Changes to government policy may be an element of institutional innovation, but they do not encompass the main elements – people, processes, technology, organizations, and culture.

The Need: New Models of Innovation

Through my studies, I have come to believe that institutional innovation is much more complicated  [2] than technological innovation.   It is almost never a linear process from theory to practice with clearly defined stages.

There is no single best model for institutional innovation.  There needs to be creativity in “who leads”, “who follows”, and “when”.  The normal roles of government, academics, industry, and civil society organizations may be reversed or otherwise radically redrawn.

Techniques are different, too. It can be orchestrated as a “messy” design process [3].  Fruitful institutional innovation in cyber security might involve some of these:
  • “Skunk Works”
  • Rapid prototyping and pilot tests
  • Proof of Concept demonstrations
  • Bricolage[4]  and exaptation[5]
  • Simulations or table-top exercises
  • Multi-stakeholder engagement processes
  • Competitions and contests
  • Crowd-sourced innovation (e.g. “hackathons” and open source software development)

What all of these have in common is that they produce something that can be tested and can support learning.  They are more than talking and consensus meetings.

There are several academic fields that can contribute to defining and analyzing new innovation models, including Institutional Sociology, Institutional Economics, Sociology of Innovation, Design Thinking, and the Science of Science Policy.

Role Models

To identify and test alternative innovation models, we can learn from institutional innovation successes and failures in other fields, including:
  • Common resource management (sustainability)
  • Epidemiology data collection and analysis (public health)
  • Crash and disaster investigation and reporting (safety)
  • Micro-lending and peer-to-peer lending (financial services)
  • Emissions credit markets and carbon offsets (energy)
  • Open software development (technology)
  • Disaster recovery and response[6]  (homeland security)

In fact, there would be great benefit if there were a joint R&D initiative for institutional innovation that could apply to these other fields as well as cyber security.  Furthermore, there would be benefit in making this an international effort, not one limited to the United States.


[1] "Cyber security" includes information security, digital privacy, digital identity, digital information property, digital civil rights, and digital homeland & national defense.
[2] For case studies and theory, see: Padgett, J. F., & Powell, W. W. (2012). The Emergence of Organizations and Markets. Princeton, NJ: Princeton University Press.
[3] Ostrom, E. (2009). Understanding Institutional Diversity. Princeton, NJ: Princeton University Press.
[4] “something constructed or created from a diverse range of available things.”
[5]  “a trait that has been co-opted for a use other than the one for which natural selection has built it.”
[6] See: Auerswald, P. E., Branscomb, L. M., Porte, T. M. L., & Michel-Kerjan, E. O. (2006). Seeds of Disaster, Roots of Response: How Private Action Can Reduce Public Vulnerability. Cambridge University Press.