Tuesday, December 18, 2018

Does Modern Portfolio Theory (MPT) apply to cyber security risks?

Many months ago, my colleague David Severski asked on Twitter how Modern Portfolio Theory (MPT) does or does not apply to quantified cyber security risk:

I replied that I would blog on this "...soon".  Ha!  Almost four months later.  Well, better late than never.

Short answer: No, MPT doesn't apply.  Read on for explanations.

NOTE: "Cyber security risk" in this article is quantified risk -- probabilistic costs of loss events or probabilistic total costs of cyber security.  Not talking about color-coded risk, categorical risk, or ordinal scores for risk.  I don't ever talk about them, if I can help it.
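One commonly cited mismatch (my own illustration here, not necessarily the full argument of this post): MPT leans on mean-variance reasoning, under which pooling many independent exposures smooths out the total. Cyber loss-event costs, by contrast, are often modeled as heavy-tailed, where a handful of catastrophic events dominate the total. The sketch below uses hypothetical parameter values (normal with mean 100, and a Pareto tail index of 1.1) purely to make the contrast visible:

```python
import numpy as np

rng = np.random.default_rng(42)
n_events = 100_000

# MPT-style assumption: outcomes are roughly normal, so pooling many
# independent exposures smooths the aggregate (variance averages out).
normal_losses = rng.normal(loc=100, scale=15, size=n_events)

# Heavy-tailed cyber losses (hypothetical Pareto, tail index ~1.1):
# a few catastrophic events dominate the total cost.
pareto_losses = (rng.pareto(1.1, size=n_events) + 1) * 100

def top_share(losses, k=10):
    """Fraction of total loss contributed by the k largest events."""
    s = np.sort(losses)[::-1]
    return s[:k].sum() / s.sum()

print(f"Top-10 share, normal losses:      {top_share(normal_losses):.5f}")
print(f"Top-10 share, heavy-tailed losses: {top_share(pareto_losses):.5f}")
```

Under the normal assumption the ten worst events are a rounding error in the total; under the heavy-tailed assumption they carry a large slice of it, which is exactly the regime where mean-variance diversification logic stops being informative.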

Thursday, November 8, 2018

NIST Cybersecurity Risk Management Conference

I'm presenting today in a 45 minute session.  It's a quick overview of previous topics, focused on the Ten Dimensions.  The emphasis in this short presentation will be on defining what "performance" means and why managing performance in cyber security is not simply a matter of implementing a list of practices. Below are the slides and relevant blog posts.

Here is an Applicability Matrix I created that shows how the existing NIST CSF 1.1 applies to each of the Ten Dimensions.  You'll notice that there are only a few blue squares, which indicates that the Ten Dimensions is a different way of carving up the space.  This has pluses and minuses, of course.  In the blog posts on the Ten Dimensions, I explain and justify these choices.  You'll also notice that some of the Ten Dimensions are poorly covered -- 3. Effective Design & Development; 8. Effective Agility and Learning (incl. metrics); and 9. Optimize Total Cost of Risk.

Applicability Matrix. Rows = 10 Dimensions. Columns = NIST CSF.
Darker colors = more CSF items are applicable.

Monday, April 16, 2018

Presentation: Navigating the Vast Ocean of Browser Fingerprints

Here is a PDF version of my BSides San Francisco presentation. (Today, Monday at 4:50pm)

COMING SOON:  GitHub repo with Python and R code, plus sample data.  Watch this space.

Wednesday, March 7, 2018

The Swan of No-Swan: Ambiguous Signals Tied To Cataclysmic Consequences

What do you see? Colored blocks, or a Black Swan, or both?
This is figure-ground reversal, a type of ambiguity.
We are in the middle of the 100th anniversary of the Great War (a.k.a. World War I).  None of the great powers wanted a long total war. Yet the unthinkable happened anyway.

Surprisingly, historians are still struggling to understand what caused the war.

One of the biggest causal factors was ambiguous signals that precipitated cascading actions and reactions. When tied to cataclysmic consequences, this represents a distinct class of "Black Swan" systems.

(Here are some great lectures for those interested in a full analysis of the causes of the Great War: Margaret MacMillan, Michael Neiberg, Sean McMeekin)

Rethinking "Black Swans"

As I mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is the seventh in the series "Think You Understand Black Swans? Think Again".)

It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves that are unexpected and surprising; it's the processes behind them -- the generating mechanisms, our evidence about them, and our methods of reasoning.


A "Swan of No-Swan" is a process where: 

  • The generating process is some set of large opposing forces that can be triggered by a set of signals or decisions tied to ambiguous signals;
  • The evidence consists of signals -- communications, interpreted actions, interpreted inaction, rhetoric/discourse, irreversible commitments -- that have ambiguous interpretations, either intentionally or unintentionally;
  • The method of reasoning is either rational expectations (normative Decision Science) or biased expectations (Behavioral Psychology and Economics).  The key feature is a lack of attention or awareness that one might be mis-perceiving the signals, combined with a strategic preference for precautionary aggressiveness.

Main Features

First, let us recognize that ambiguity is pervasive in social, business, and political life.  Ambiguous signals and communication have many pro-social functions: keeping our options open, saving face, avoiding insult or offense, optimistic interpretation of events, and so on.  They are especially prevalent in the lead-up to major commitments -- romance+marriage in personal life and big ticket sales in commercial life.

Most of the time, ambiguity has a smoothing effect.  It reduces the probability of extreme/rare events because of the flexibility of action and response associated with ambiguous signals.  Therefore, most people would not associate ambiguous signals with any type of "Black Swan" phenomena.

But when tied to "large opposing forces", things change, and that's why this deserves to be a separate type of Black Swan.  Ambiguous signals become dangerous when they are linked to cataclysmic processes via certain types of reasoning processes.  The key distinction is not rational vs. biased reasoning.  Instead, it's committed self-confidence vs. self-aware fallibility.  In committed self-confidence, there is a lack of attention or awareness that one might be mis-perceiving the signals, combined with a strategic preference for precautionary aggressiveness: "Shoot first, ask questions later".
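The difference between "shoot first" and self-aware fallibility can be made concrete with a toy Monte Carlo of my own (all parameter values here are hypothetical, chosen only for illustration): an actor watches a stream of ambiguous signals from a genuinely benign counterpart, and each benign signal is misread as hostile some fraction of the time.

```python
import random

random.seed(7)

def escalation_rate(confirmations_needed, trials=20_000,
                    misread_prob=0.15, n_signals=10):
    """Toy model: the counterpart is genuinely benign, but each of
    n_signals ambiguous signals is misread as hostile with probability
    misread_prob.  The actor escalates once it has counted
    confirmations_needed apparently-hostile signals."""
    escalations = 0
    for _ in range(trials):
        apparent_hostile = sum(
            random.random() < misread_prob for _ in range(n_signals))
        if apparent_hostile >= confirmations_needed:
            escalations += 1
    return escalations / trials

# Committed self-confidence: one apparent hostile signal is enough.
shoot_first = escalation_rate(confirmations_needed=1)
# Self-aware fallibility: hold fire until several signals agree.
hold_fire = escalation_rate(confirmations_needed=3)

print(f"Escalation rate, shoot first: {shoot_first:.3f}")
print(f"Escalation rate, hold fire:   {hold_fire:.3f}")
```

Even with a modest misread rate, the "shoot first" rule escalates against a benign counterpart most of the time, while requiring a few independent confirmations cuts the false-escalation rate dramatically -- a crude version of the pluralistic treatment of evidence discussed at the end of this post.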


Military forces leading to total war are the obvious case, and most common in history.   But we are now in a new age -- the Cyber Age!  (Yes.  I said it. Cyber)  Here are some cyber examples.
  • Offensive cyber capabilities -- By "offensive" I mean everything from "hack back" to punitive or disabling cyber attacks on critical infrastructure.  If it becomes common for nation states and various non-nation actors to develop and deploy offensive capabilities, then everyone faces the strategic dilemma of when and how much to deploy/trigger each capability.  This depends critically on the ability of each actor to detect and accurately interpret a wide variety of signals and evidence related to normal and abnormal activity, including breach events, threat actor attribution, signs of escalation, and so on.  These are all swimming in ambiguity, including intentional ambiguity (spoofing, camouflage, etc.).
  • Remote kill switches -- What if Internet of Things (IOT) makers build "remote kill switches" into their devices? After all, we'd like to prevent our toaster, pacemaker, automobile, or drone from doing harm in case it starts malfunctioning catastrophically.  Are there scenarios where one or more IOT manufacturers decide to trigger their kill switches at the same time?  What if their monitoring instruments make it appear that some threat actor(s) are making self-driving cars intentionally crash into crowds of people?  Out of an abundance of caution, they might remotely kill the IOT devices to cut off the apparent disaster as it is unfolding.  But maybe the threat actor is only spoofing the signals because they pwned the monitoring devices and infrastructure.  Or maybe it's the precautionary action of some other IOT device system owner that is causing your monitoring system to go bonkers.  I could go on, but you get the idea.

How to Cope with Swan of No-Swan

It would be good to decouple the generating process if possible.  Avoid the arms race to begin with.  (Give peace a chance!)

Absent that, the best antidote is to treat evidence and signals pluralistically, which means avoiding the tendency to commit too early to one interpretation or another.  This is very hard to do within a single person or even a single cohesive team.  It's easier to assign different "lenses" to different people or teams, who then proceed with their analysis and decision recommendations independently.

Finally, the decision makers who can "pull the trigger" should seek strategy alternatives to the preference for precautionary aggressiveness ("Shoot first, ask questions later").  While decision makers may feel like this is their only choice (and it may be), there is great advantage if more flexible alternatives can be found.