Wednesday, May 6, 2020

Look Papa! I'm on the Loopcast! -- Talking complexity, simulation, black swans, randomness, resilience, and institutional innovation

If you have a spare 1 hr 40 min.*, you might want to listen to my interview on the "Loopcast" podcast (below).  The host is Sina Kashefipour (@rejectionking on Twitter).

* Personally, I think I sound better at 1.5X speed, but then again I listen to most podcasts at 1.5X speed.



In this podcast, I reference the following websites and resources:

 

Monday, May 4, 2020

S4x20 Video: Lessons Learned from Norsk Hydro on Loss Estimation and Cyber Insurance

I gave a talk at S4X20 in January on the Norsk Hydro ransomware attack.  The full video has now been posted on YouTube:




Like all great presentations, it includes a Seinfeld reference :-)


Wednesday, April 1, 2020

Splattered Swan: Collateral Damage, Friendly Fire, and Mis-fired Mega-systems

As Curly from the "Three Stooges" said,
"I'm a victim of circumstances!"
There is a type of "...Swan" that is not surprising or extreme in its aggregate effect, but is extremely surprising to a particular entity that was considered to be outside the scope of the main process.  A bomb is supposed to kill enemy soldiers, not your own soldiers. 

I call this type "Splattered Swans".

Context: Rethinking "Black Swans"

This post is the ninth in the series "Think You Understand Black Swans? Think Again". The "Black Swan event" metaphor is a conceptual mess.

Summary: It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves; it's the processes behind them, involving generating mechanisms, our evidence about them, and our methods of reasoning, that make them unexpected and surprising.


Definition 

A "Splattered Swan" is a process where:
  • The generating process involves a very powerful force (e.g. a punitive, constraining, or damaging force) with less than perfect aim.
  • The evidence consists of official rules, specifications, or scope, or experience that is limited to what is intended;
  • The method of reasoning is based on the assumption that the aim will be perfect and error-free, or that errors will be "well behaved".
 

Main Features

A Splattered Swan arises when a very powerful system is prone to misfiring in very bad ways, causing damage to entities that are considered "safe" or "out of bounds" by normal reasoning.  That these outcomes are extreme or surprising is basically due to a failure to understand the total system and the ways it can fail.

Two key features of Splattered Swans are 1) critical error conditions are excluded from reasoning on principle, and 2) those errors are potentially severe, even the first time.  A lot of systems adapt by trial and error, but that only works if the magnitude of errors (i.e. aim) is relatively small and the magnitude of collateral damage is also relatively small.  Consider airplane bombers from World War II aiming to kill enemy troops located near allied troops.  Even though they had bomb sights, the bombers were notoriously inaccurate.  With ordinary large bombs, the risk of "friendly fire" (i.e. killing your own troops) is high.  If the bomber is carrying a single atomic bomb, then the risk of "friendly fire" becomes extremely high, because you only get one chance to aim and drop and there is no feedback from previous attempts.  Plus the damage process is extreme.  In the other direction, if there are several bombers, and the first bombers drop flares instead of bombs, then the cost of error is small and the opportunity for corrective feedback has the potential to dramatically reduce the risk of "friendly fire".  (The toy simulation below illustrates the difference.)
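To make that intuition concrete, here is a toy Monte Carlo sketch in Python.  All the numbers (error sizes, blast radii, the friendly position) are illustrative assumptions, not real ballistics data; the point is only to show how a smaller damage process plus one round of corrective feedback shrinks the probability of "friendly fire".

```python
import random

FRIENDLY_POSITION = 100.0   # friendlies are 100 distance-units from the aim point (toy number)

def is_friendly_fire(impact, blast_radius):
    """Friendly fire occurs if the impact lands within blast_radius of the friendlies."""
    return abs(impact - FRIENDLY_POSITION) < blast_radius

def sortie_one_big_bomb(rng, bias_sd=50.0, noise_sd=15.0, blast_radius=80.0):
    """One strike, no feedback: systematic bias (wind, miscalibrated sight)
    plus per-drop noise, and a large blast radius."""
    bias = rng.gauss(0.0, bias_sd)
    impact = bias + rng.gauss(0.0, noise_sd)
    return is_friendly_fire(impact, blast_radius)

def sortie_flare_then_bomb(rng, bias_sd=50.0, noise_sd=15.0, blast_radius=25.0):
    """Drop a harmless flare first, observe where it lands, correct the aim by
    that amount, then drop a smaller bomb. The systematic bias cancels out."""
    bias = rng.gauss(0.0, bias_sd)
    flare_miss = bias + rng.gauss(0.0, noise_sd)            # observed error of the flare
    bomb_impact = bias + rng.gauss(0.0, noise_sd) - flare_miss
    return is_friendly_fire(bomb_impact, blast_radius)

def estimate(policy, trials=100_000, seed=7):
    """Monte Carlo estimate of P(friendly fire) for a given strike policy."""
    rng = random.Random(seed)
    return sum(policy(rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"P(friendly fire), one big bomb      : {estimate(sortie_one_big_bomb):.3%}")
    print(f"P(friendly fire), flare + small bomb: {estimate(sortie_flare_then_bomb):.3%}")
```

The big-bomb policy fails whenever the single draw of aiming error drifts toward the friendly position; the flare-first policy cancels the systematic part of that error before anything destructive is dropped, so the residual risk comes only from the smaller per-drop noise and the smaller blast radius.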

Another important feature of Splattered Swans is the blind spots created by the "official" or "intended" definition of the system of interest.  These blind spots can lead analysts and decision-makers to never even consider the possibility of collateral damage or unintended consequences.
 

One Example

"Offensive cyber" , a.k.a. "hack back" is an example from the domain of cyber security.  There are many flavors of offensive cyber, but the most extreme involve damaging the targets, either physically or digitally or both.  Such extreme attacks might also be considered acts of war, a.k.a. "cyber war".  Putting aside the ethics or advisability of offensive cyber, there is immense potential for collateral damage.  First, it might be hard or impossible to attribute a given attack to the "real"  threat agents or groups (a.k.a. "Black Hat").  They might operate through affiliates, mask or disguise their tools and infrastructure, and might even intentionally implicate a different agent or group in the "Indicators of Compromise" and other forensic evidence.  Even if you can correctly identify the attacking group, it may be hard to attack them in a way that doesn't also do harm to socially-important entities or resources (e.g. cloud computing resources, networks, etc.).  Finally, in corner-case situations there is also a non-zero potential for self-harm, where an offensive cyber attack backfires on the "White Hat" attacker.

From a planning and on-going management viewpoint, it is much harder to anticipate and control the side-effects of cyber attacks than it is for physical attacks. 
 

How to Cope with Splattered Swan

It is relatively simple to cope with Splattered Swan systems.  Don't take the "official" or "intended" system as a strict definition of what behaviors or outcomes are possible.  Use Scenario Planning or "What If?" analysis to look outside the "official" or "intended" scope and identify the potential for collateral damage.

Then look for ways to introduce error-correcting feedback or mitigations for the collateral damage.  Another good mitigation is to reduce the intensity of the damage/punishment process.

Swarm-as-Swan: Surprising Emergent Order or Aggregate Action

A flock of swans in swan-shaped formation
In complex systems with many interacting elements or agents, most of the time the actions of individual agents "average out" and remain local.  In some systems and in some circumstances, surprising aggregate or emergent behavior patterns arise.

Emergence is a common characteristic of such systems.  But that alone doesn't qualify them as a "Swan".  It requires several other factors that could, under the right circumstances, yield very surprising or cataclysmic outcomes.  I call this the "Swarm-as-Swan", since swarming behavior (birds, fish, insects) is one well-known type of emergent phenomenon, but this category is explicitly not limited to swarm phenomena.

Context: Rethinking "Black Swans"

This post is the eighth in the series "Think You Understand Black Swans? Think Again". The "Black Swan event" metaphor is a conceptual mess.

Summary: It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves; it's the processes behind them, involving generating mechanisms, our evidence about them, and our methods of reasoning, that make them unexpected and surprising.

 

Definition 

A "Swarm-as-Swan" is a process where:

  • The generating process involves a large-scale Complex Adaptive System that has regions in the state space where collective and/or emergent phenomena become dominant, leading to collective behavior that is dramatically different from the behavior in the "normal" regions of state space.
  • The evidence consists of patterns of system behavior and interaction at various scales (individual, group, collective), especially surprisingly different patterns, including downward causation and varieties of self-organization and information processing;
  • The method of reasoning consists of mental models of the system, whether formal or informal, sophisticated or common sense, and the implications of those models for what behaviors are "normal" and expected vs. what is surprising.

Main Features

The field of Complexity Science has grown and blossomed over the last 30 years.  This Wikipedia article gives a good summary, along with the central theme of Complex Adaptive Systems (CAS).  From that article, the common characteristics of CAS are:
  • The number of elements is sufficiently large that conventional descriptions (e.g. a system of differential equations) are not only impractical, but cease to assist in understanding the system. Moreover, the elements interact dynamically, and the interactions can be physical or involve the exchange of information
  • Such interactions are rich, i.e. any element or sub-system in the system is affected by and affects several other elements or sub-systems
  • The interactions are non-linear: small changes in inputs, physical interactions or stimuli can cause large effects or very significant changes in outputs
  • Interactions are primarily but not exclusively with immediate neighbors and the nature of the influence is modulated
  • Any interaction can feed back onto itself directly or after a number of intervening stages. Such feedback can vary in quality. This is known as recurrency
  • The overall behavior of the system of elements is not predicted by the behavior of the individual elements
  • Such systems may be open and it may be difficult or impossible to define system boundaries
  • Complex systems operate under far from equilibrium conditions. There has to be a constant flow of energy to maintain the organization of the system
  • Complex systems have a history. They evolve and their past is co-responsible for their present behavior
  • Elements in the system may be ignorant of the behavior of the system as a whole, responding only to the information or physical stimuli available to them locally
While all CAS are, in principle, capable of emergent behavior, not all are capable of the big behavioral surprises that we require for our "...Swans" series.  Roughly, there are three levels of emergent phenomena:
  • Level 1: Emergent behavior -- behavior of large numbers of individuals becomes interdependent and mutually influencing, far beyond the range of direct causal and informational interaction, including some downward causation where the collective shapes the individuals.  Examples: flocks of birds, schools of fish.
  • Level 2: Emergent functional structures -- the formation of stable networks of individuals that constitute functional subsystems. "Functional" means they do some work beyond just the collective behavior of Level 1. An excellent example is the "glider" phenomenon in Conway's Game of Life (a type of cellular automaton); a minimal code sketch appears below, after this list.
A single Gosper's glider gun creating "gliders"

  •  Level 3: Emergence against a model -- similar to Level 2, but the "stable functional subsystems" have information processing and self-sustaining capabilities (including possibly metabolizing energy and repairing/regenerating structures).  In a real sense, Level 3 systems "take on a life of their own", at least for an extended period.  Example: emergent subsystems that function as regulators (e.g. thermostat), communication systems (e.g. encoding, decoding, transmission), pattern matching, optimization, etc.  The human immune system has some of these capabilities.
As you move up these levels, the nature of emergent phenomena changes dramatically, from mildly surprising and interesting at Level 1 up to "HOLY COW" at Level 3.  But our understanding of CAS is mostly at Level 1, some at Level 2, and only a little at Level 3. Put another way, if you created a hundred or a thousand different CAS in a computational laboratory, most would only exhibit Level 1 emergent phenomena, a few would exhibit Level 2 emergent behavior, and only a very few, under narrow circumstances, would have Level 3 capabilities.  (In his book A New Kind of Science, Stephen Wolfram did exactly this investigation for all simple cellular automata.)
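Here is a minimal, self-contained Python sketch of Conway's Game of Life seeded with a single glider, as a concrete instance of a Level 2 emergent structure.  The grid size and number of steps shown are arbitrary display choices; the update rule is the standard Life rule.

```python
from collections import Counter

def step(live_cells):
    """Apply one Game of Life update to a set of (row, col) live cells."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or exactly 2 live neighbors and is currently alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

def show(live_cells, size=12):
    """Print the top-left size x size corner of the (conceptually unbounded) grid."""
    for r in range(size):
        print("".join("#" if (r, c) in live_cells else "." for c in range(size)))
    print()

if __name__ == "__main__":
    # The classic 5-cell glider; note that the update rule knows nothing about "gliders".
    cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
    for t in (0, 4, 8):        # the glider reappears every 4 steps, shifted diagonally
        print(f"t = {t}")
        show(cells)
        for _ in range(4):
            cells = step(cells)
```

Nothing in the update rule mentions gliders; the pattern's persistence and diagonal motion exist only at the emergent level, which is exactly what makes Level 2 structures surprising if your mental model stops at the rules.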

As with all of the "...Swans", it's just as important to understand the evidence that we use to understand these systems (i.e. CAS), and also our methods of reasoning. It's the combination of all three that gives rise to the surprising/shocking/extreme behavior that we associate with "...Swans".

The most common evidence people pay attention to is either individual-level behaviors and interactions or the most common collective behavior patterns and distributions.  If the states of the CAS are in "low complexity" regions of the state space (i.e. not in one of the three Levels, above), then people may not even recognize that the CAS is capable of complex emergent phenomena.  The reverse is also true: if the CAS is normally in a highly coherent, highly functional state, then people may not observe or understand the micro-level behavior that supports those phenomena.  The evidence we need most is the location of "phase transitions" in state space, where the CAS shifts dramatically from one regime to another.  Unfortunately for us mortal humans, it's almost impossible to know in advance where the important phase transitions are in a CAS, especially Level 2 and Level 3 CAS.
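A toy illustration of such a phase transition, as a minimal Python sketch with no dependencies: nodes are wired together at random, and as the average degree crosses a critical value (around 1.0 in this model), the largest connected cluster jumps from a tiny fraction of the system to a dominant "giant component".  The model and parameter values are my own illustrative choices, not something from the original post.

```python
import random

def largest_cluster_fraction(n_nodes, avg_degree, seed=0):
    """Wire each pair of nodes with probability avg_degree / n_nodes, then
    return the size of the largest connected cluster as a fraction of n_nodes."""
    rng = random.Random(seed)
    parent = list(range(n_nodes))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    p = avg_degree / n_nodes
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p:
                union(i, j)

    sizes = {}
    for i in range(n_nodes):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n_nodes

if __name__ == "__main__":
    # Sweep the control parameter: the regime shift near avg_degree ~ 1.0 is abrupt,
    # which is why it is hard to anticipate from data gathered only in the "normal" regime.
    for avg_degree in (0.5, 0.8, 0.95, 1.05, 1.2, 1.5, 2.0):
        frac = largest_cluster_fraction(n_nodes=1000, avg_degree=avg_degree)
        print(f"average degree {avg_degree:4.2f} -> largest cluster = {frac:6.1%}")
```

If your evidence comes only from the sub-critical regime, nothing in the data hints that a small increase in connectivity will produce a system-spanning cluster; that is the sense in which phase transitions defeat naive extrapolation.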

Our methods of reasoning about CAS fall into three categories: 1) Intuitive (i.e. patterns of "normal" behavior with small deviations, naïve causal models, "folk wisdom", etc.);  2) Linear Models (i.e. the standard tools of science up to ~1990);  3) Non-linear Models, including Agent-based Modeling.

Methods 1) and 2) are the most common, and work well in "normal" circumstances, but are very prone to catastrophic failures of reasoning when the CAS enters a new, unfamiliar regime of emergent behavior.  Method 3) is specifically designed to understand CAS in all their complexity, but it isn't a "magic bullet" that completely eliminates the potential for surprise or extreme outcomes.

One huge difference between Method 2) "Linear Models" and Method 3) "Non-linear Models" is that Method 3) usually does not yield a forecast or prediction of system behavior in the same way that Method 2) does.  Instead, it can help us understand when and why the CAS will change regimes, which is still very useful for understanding potential surprises.
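As a flavor of Method 3), here is a bare-bones agent-based model in Python: a simplified Schelling-style segregation model.  The grid size, tolerance threshold, and vacancy rate are arbitrary illustrative choices.  Each agent applies a tiny local rule ("move if fewer than about a third of my neighbors are like me"), yet strong clustering that no individual agent intends emerges at the collective level.

```python
import random

SIZE, TOLERANCE, VACANCY = 30, 0.34, 0.10   # illustrative parameters

def neighbors(grid, r, c):
    """Occupied Moore neighbors of cell (r, c), with wrap-around edges."""
    nbrs = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in nbrs if x is not None]

def similarity(grid, r, c):
    """Fraction of occupied neighbors that share this agent's type."""
    me, nbrs = grid[r][c], neighbors(grid, r, c)
    return 1.0 if not nbrs else sum(x == me for x in nbrs) / len(nbrs)

def step(grid, rng):
    """Move every unhappy agent to a randomly chosen vacant cell."""
    vacant = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and similarity(grid, r, c) < TOLERANCE]
    rng.shuffle(movers)
    for r, c in movers:
        if not vacant:
            break
        dr, dc = vacant.pop(rng.randrange(len(vacant)))
        grid[dr][dc], grid[r][c] = grid[r][c], None
        vacant.append((r, c))

def mean_similarity(grid):
    vals = [similarity(grid, r, c) for r in range(SIZE) for c in range(SIZE)
            if grid[r][c] is not None]
    return sum(vals) / len(vals)

if __name__ == "__main__":
    rng = random.Random(11)
    grid = [[None if rng.random() < VACANCY else rng.choice("AB")
             for _ in range(SIZE)] for _ in range(SIZE)]
    for t in range(21):
        if t % 5 == 0:
            print(f"step {t:2d}: mean same-type neighbor fraction = {mean_similarity(grid):.2f}")
        step(grid, rng)
```

Note that the output is not a point forecast; it is a demonstration of which regimes the system can fall into, which is the kind of understanding Method 3) is good for.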

Examples

In the previous section I  mentioned some illustrative examples. But here I'll mention two "biggies".

The modern economy has been characterized and studied as a Complex Adaptive System (CAS), especially to understand innovation and crises, including societal/economic collapse.  Some of the first books published by the Santa Fe Institute in the late '80s were titled "The Economy as an Evolving Complex System".

Mass uprisings and mass revolutions are other classes of phenomena that benefit from study as CAS.  I won't go into detail here, but if you are curious, you might read up on these cases:

How to Cope with Swarm-as-Swan

The first step is to recognize that the system you are dealing with has the characteristics of a Complex Adaptive System.  Start with the Wikipedia page, then read some of the general references listed at the bottom.  This will give you the basic knowledge plus some exposure to many types of CAS.

The second step is to characterize the types of emergent behavior and structures that are within the "possibility space" of the CAS.  But stay away from "magical thinking" and "conspiracy theories".  

The third step is to apply modeling tools that are appropriate to the complexity of the CAS.  Linear models are fine for what they do, but don't try to use them to identify "phase transitions" from simple to complex behavior, etc.

If you aren't mathematically inclined or are not comfortable programming your own Agent-based Models (ABM), you can at least read books and papers that utilize these models and learn from the pros who built and analyzed them.  Even better, you can play with them yourself using the Model Library that comes with NetLogo (free and open source).  Each model is controlled by sliders and buttons, and comes with documentation that explains how to use it, how to interpret it, and how to explore it.

 

 

 

Tuesday, January 21, 2020

S4x20 Presentation

I am presenting today, January 21, at the S4x20 conference, 3:30-4pm on the Main Stage.

Here are my slides and notes.  

Here is a paper mentioned in the talk that gives details on breach impact estimation:

Monday, November 25, 2019

Talk Like a Cyber Insurance Risk Analyst

In a recent class on catastrophe risk modeling, I learned the definitions of terms that are common in insurance but not so well understood elsewhere:
  • Peril
  • Exposure
  • Hazard
  • Ground-up Loss
  • Risk
Read on for definitions, ending with an analogy that, hopefully, ties them all together.
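As a preview, here is a small hedged sketch (in Python) of one way these terms can fit together for a single cyber scenario.  The numbers and the simple deductible-plus-limit formula are my own illustrative assumptions, not the definitions or the analogy from the class.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    peril: str            # the cause of loss (e.g. "ransomware")
    exposure: float       # value at risk (e.g. revenue at risk during downtime)
    hazard_factor: float  # condition that scales severity (e.g. flat network, poor backups)

def ground_up_loss(s: Scenario) -> float:
    """Total loss to the insured before any insurance terms are applied (toy formula)."""
    return s.exposure * s.hazard_factor

def insured_loss(gross: float, deductible: float, limit: float) -> float:
    """Portion of the ground-up loss the insurer pays under a simple deductible + limit."""
    return max(0.0, min(gross - deductible, limit))

if __name__ == "__main__":
    s = Scenario(peril="ransomware", exposure=40_000_000, hazard_factor=0.25)
    gul = ground_up_loss(s)
    print(f"Ground-up loss: {gul:,.0f}")
    print(f"Insurer pays  : {insured_loss(gul, deductible=1_000_000, limit=5_000_000):,.0f}")
    # "Risk" in this vocabulary is then the probability-weighted distribution of such
    # losses across many scenarios, not any single number.
```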

Friday, June 14, 2019

RESET: "Data-driven Security Smashup" will launch in Fall 2019

Big change of plans for the "Data-driven Security Smashup":
We are canceling the live event in Las Vegas, August 3 - 5. 
Instead, we aim to launch one or more Virtual Smashup projects in the Fall of 2019, followed by one or more live events early in 2020, perhaps one in the US and one in UK.

Why?

Basically, we ran out of time as we were trying to organize the event: sponsorship, organizer recruiting and on-boarding, Call for Participation, legal structure, venue.  No fault to anyone.  We started relatively late, and our standards are high.  We didn't want to just throw it together and risk having things fall apart during the event.

Benefits

This new schedule gives us time to do it right, starting with the basics.  For example, we will secure a "fiscal sponsorship" relationship so we have the legal, financial, and operational infrastructure to take donations, manage risk, and to spend money responsibly.

Another "basic" that needs attention is contact and relationship management for all the people who have expressed interest, asked questions, or need responses.  This includes a dedicated website instead of this blog.

The new schedule gives us the lead time to recruit organizers and collaborators in academia, professional associations, industry, independent consultants, and government, both in the US and internationally (mostly the UK, Europe, and Switzerland).

Personally, I'm not disappointed. The core idea is solid.  There's lots of interest.  This change makes space for some of my other priorities (dissertation!).

Stay tuned!