Showing posts with label surprise.

Wednesday, April 1, 2020

Splattered Swan: Collateral Damage, Friendly Fire, and Mis-fired Mega-systems

As Curly from the "Three Stooges" said,
"I'm a victim of circumstances!"
There is a type of "...Swan" that is not surprising or extreme in its aggregate effect, but is extremely surprising to a particular entity that was considered to be outside the scope of the main process.  A bomb is supposed to kill enemy soldiers, not your own soldiers. 

I call this type "Splattered Swans".

Context: Rethinking "Black Swans"

This post is the ninth in the series "Think You Understand Black Swans? Think Again". The "Black Swan event" metaphor is a conceptual mess.

Summary: It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves that are surprising; rather, it is the combination of a generating mechanism, our evidence about it, and our method of reasoning that makes them unexpected and surprising.


Definition 

A "Splattered Swan" is a process where:
  • The generating process involves a very powerful force (e.g. a punitive, constraining, or damaging force) with less than perfect aim.
  • The evidence is official rules, specifications, or scope, or experience that is limited to what is intended;
  • The method of reasoning is based on the assumption that the aim will be perfect and error-free, or that errors will be "well behaved".
 

Main Features

A Splattered Swan arises when a very powerful system is prone to misfiring in very bad ways, causing damage to some entities that are considered "safe" or "out of bounds" by normal reasoning.  That these outcomes are extreme or surprising is due, basically, to a failure to understand the total system and the ways it can fail.

Two key features of Splattered Swans are 1) critical error conditions are excluded from reasoning on principle, and 2) those errors are potentially severe, even the first time.  A lot of systems adapt by trial and error, but that only works if the magnitude of errors (i.e. aim) is relatively small and the magnitude of collateral damage is also relatively small.  Consider airplane bombers from World War II aiming to kill enemy troops located near allied troops.  Even though they had bomb sights, the bombers were notoriously inaccurate.  With ordinary large bombs, the risk of "friendly fire" (i.e. killing your own troops) is high.  If the bomber is carrying a single atomic bomb, the risk of "friendly fire" becomes extremely high, because you only get one chance to aim and drop, there is no feedback from previous attempts, and the damage process is extreme.  In the other direction, if there are several bombers and the first bombers drop flares instead of bombs, then the cost of error is small and the opportunity for corrective feedback can dramatically reduce the risk of "friendly fire".
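To make the feedback argument concrete, here is a minimal Monte Carlo sketch (in Python). All of the numbers -- the systematic aiming bias, the scatter, the blast radii, the position of friendly troops -- are invented for illustration; the point is only the qualitative comparison between one irreversible high-yield drop and a drop preceded by harmless spotting flares that reveal the aiming error.

```python
# Illustrative sketch only: friendly-fire risk for a single large-radius drop
# with no feedback vs. a small-radius drop preceded by spotting flares.
import random

def friendly_fire_prob(n_flares, blast_radius, trials=100_000):
    """Probability that the real bomb lands within blast_radius of own troops."""
    friendly_pos = -3.0                    # own troops sit 3 units short of the target at 0
    hits = 0
    for _ in range(trials):
        bias = random.gauss(0.0, 2.0)      # unknown systematic aiming error
        correction = 0.0
        for _ in range(n_flares):          # flares: observe the miss, no damage done
            impact = bias + correction + random.gauss(0.0, 1.0)
            correction -= impact           # re-aim by the observed miss
        impact = bias + correction + random.gauss(0.0, 1.0)   # the real drop
        if abs(impact - friendly_pos) < blast_radius:
            hits += 1
    return hits / trials

print("one atomic bomb, no feedback :", friendly_fire_prob(n_flares=0, blast_radius=4.0))
print("ordinary bomb after 3 flares :", friendly_fire_prob(n_flares=3, blast_radius=1.0))
```

With these made-up numbers the one-shot, large-radius case endangers friendly troops most of the time, while the corrected, small-radius case does so far less often; the exact values matter less than the shape of the comparison.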

Another important feature of Splattered Swans is the blind spots created by the "official" or "intended" definition of the system of interest.  These blind spots can lead analysts and decision-makers to never even consider the possibility of collateral damage or unintended consequences.
 

One Example

"Offensive cyber" , a.k.a. "hack back" is an example from the domain of cyber security.  There are many flavors of offensive cyber, but the most extreme involve damaging the targets, either physically or digitally or both.  Such extreme attacks might also be considered acts of war, a.k.a. "cyber war".  Putting aside the ethics or advisability of offensive cyber, there is immense potential for collateral damage.  First, it might be hard or impossible to attribute a given attack to the "real"  threat agents or groups (a.k.a. "Black Hat").  They might operate through affiliates, mask or disguise their tools and infrastructure, and might even intentionally implicate a different agent or group in the "Indicators of Compromise" and other forensic evidence.  Even if you can correctly identify the attacking group, it may be hard to attack them in a way that doesn't also do harm to socially-important entities or resources (e.g. cloud computing resources, networks, etc.).  Finally, in corner-case situations there is also a non-zero potential for self-harm, where an offensive cyber attack backfires on the "White Hat" attacker.

From a planning and on-going management viewpoint, it is much harder to anticipate and control the side-effects of cyber attacks than it is for physical attacks. 
 

How to Cope with Splattered Swan

It is relatively simple to cope with Splattered Swan systems.  Don't take the "official" or "intended" system as a strict definition of what behavior or outcomes are possible.  Use Scenario Planning or "What If?" analysis to look outside the "official" or "intended" boundaries and identify the potential for collateral damage.

Then look for ways to introduce error-correcting feedback or mitigations for the collateral damage.  Another good mitigation is to reduce the intensity of the damage/punishment process.

Swarm-as-Swan: Surprising Emergent Order or Aggregate Action

A flock of swans in swan-shaped formation
In complex systems with many interacting elements or agents, most of the time the actions of individual agents "average out" and remain local.  In some systems and in some circumstances, surprising aggregate or emergent behavior patterns arise.

Emergence is a common characteristic of such systems.  But that alone doesn't qualify them as a "Swan".  It requires several other factors that could, under the right circumstances, yield very surprising or cataclysmic outcomes.  I call this the "Swarm-as-Swan", since swarming behavior (birds, fish, insects) is one well-known class of emergent phenomena, but this category is explicitly not limited to swarm phenomena.

Context: Rethinking "Black Swans"

This post is the eighth in the series "Think You Understand Black Swans? Think Again". The "Black Swan event" metaphor is a conceptual mess.

Summary: It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves that are surprising; rather, it is the combination of a generating mechanism, our evidence about it, and our method of reasoning that makes them unexpected and surprising.

 

Definition 

A "Swarm-as-Swan" is a process where:

  • The generating process involves a large-scale Complex Adaptive System that has regions in the state space where collective and/or emergent phenomena become dominant, leading to collective behavior that is dramatically different from the behavior in the "normal" regions of state space.
  • The evidence is patterns of system behavior and interaction at various scales (individual, group, collective), and especially surprisingly different patterns, including downward causation and varieties of self-organization and information processing;
  • The method of reasoning is mental models of the system, whether formal or informal, sophisticated or common sense, and the implications of those models for what behaviors are "normal" and expected vs. what is surprising.

Main Features

The field of Complexity Science has grown and blossomed over the last 30 years.  This Wikipedia article gives a good summary, along with the central theme of Complex Adaptive Systems (CAS).  From that article, the common characteristics of CAS are:
  • The number of elements is sufficiently large that conventional descriptions (e.g. a system of differential equations) are not only impractical, but cease to assist in understanding the system. Moreover, the elements interact dynamically, and the interactions can be physical or involve the exchange of information
  • Such interactions are rich, i.e. any element or sub-system in the system is affected by and affects several other elements or sub-systems
  • The interactions are non-linear: small changes in inputs, physical interactions or stimuli can cause large effects or very significant changes in outputs
  • Interactions are primarily but not exclusively with immediate neighbors and the nature of the influence is modulated
  • Any interaction can feed back onto itself directly or after a number of intervening stages. Such feedback can vary in quality. This is known as recurrency
  • The overall behavior of the system of elements is not predicted by the behavior of the individual elements
  • Such systems may be open and it may be difficult or impossible to define system boundaries
  • Complex systems operate under far from equilibrium conditions. There has to be a constant flow of energy to maintain the organization of the system
  • Complex systems have a history. They evolve and their past is co-responsible for their present behavior
  • Elements in the system may be ignorant of the behavior of the system as a whole, responding only to the information or physical stimuli available to them locally
While all CAS are, in principle, capable of emergent behavior, not all are capable of the big surprises in behavior that we require for our "...Swans" series.  Roughly, there are three levels of emergent phenomena:
  • Level 1: Emergent behavior -- behavior of large numbers of individuals becomes interdependent and mutually influencing, far beyond the range of causal and information interaction, including some downward causation where the collective shapes the individuals.  Examples: flocks of birds, schools of fish. 
  • Level 2: Emergent functional structures -- the formation of stable networks of individuals that constitute functional subsystems. "Functional" means they do some work beyond just the collective behavior of Level 1. An excellent example is the "glider" phenomenon in Conway's Game of Life (a cellular automaton); a minimal code sketch appears below.
A single Gosper's glider gun creating "gliders"

  •  Level 3: Emergence against a model -- similar to Level 2, but the "stable functional subsystems" have information processing and self-sustaining capabilities (including possibly metabolizing energy and repairing/regenerating structures).  In a real sense, Level 3 systems "take on a life of their own", at least for an extended period.  Example: emergent subsystems that function as regulators (e.g. thermostat), communication systems (e.g. encoding, decoding, transmission), pattern matching, optimization, etc.  The human immune system has some of these capabilities.
As you move up these levels, the nature of emergent phenomena changes dramatically, from mildly surprising and interesting at Level 1 up to "HOLY COW" at Level 3.   But our understanding of CAS is greatest at Level 1, partial at Level 2, and only slight at Level 3. Put another way, if you create a hundred or a thousand different CAS in a computational laboratory, most would only exhibit Level 1 emergent phenomena, a few would exhibit Level 2 emergent behavior, and only a very few, under narrow circumstances, would have Level 3 capabilities.  (In his book A New Kind of Science, Stephen Wolfram did exactly this investigation for all simple cellular automata.)
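To make the Level 2 idea concrete, here is that minimal sketch of the glider in Conway's Game of Life (in Python rather than NetLogo, written from the standard Life rules, not taken from any of the sources above). The update rule only mentions individual cells and their eight neighbors, yet a stable five-cell structure emerges that travels across the grid.

```python
# Minimal Game of Life sketch: births on 3 live neighbors, survival on 2 or 3.
from collections import Counter

def step(live):
    """One Life step; `live` is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(8):
    cells = step(cells)

# After 8 steps the glider has reproduced its own shape shifted by (2, 2):
# a self-propagating structure that nothing in the local rule mentions.
print(cells == {(x + 2, y + 2) for x, y in glider})   # True
```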

As with all of the "...Swans", it's just as important to understand the evidence that we use to understand these systems (i.e. CAS), and also our methods of reasoning. It's the combination of all three that gives rise to the surprising/shocking/extreme behavior that we associate with "...Swans".

The most common evidence people pay attention to is either individual-level behaviors and interactions or the most common collective behavior patterns and distributions.  If the states of the CAS are in "low complexity" regions of the state space (i.e. not in one of the three Levels, above), then people may not even recognize that the CAS is capable of complex emergent phenomena.  The reverse is also true.  If the CAS is normally in a highly coherent, highly functional state, then people may not observe or understand the micro-level behavior that supports those phenomena.  The evidence we need most is the location of "phase transitions" in state space, where the CAS shifts dramatically from one regime to another.  Unfortunately for us mortal humans, it's almost impossible to know in advance where the important phase transitions are in a CAS, especially Level 2 and 3 CAS.

Our methods of reasoning about CAS fall into three categories: 1) Intuitive (i.e. patterns of "normal" behavior with small deviations, naïve causal models, "folk wisdom", etc.);  2) Linear Models (i.e. the standard tools of science up to ~1990);  3) Non-linear Models, including Agent-based Modeling.

Methods 1) and 2) are the most common, and work well in "normal" circumstances, but are very prone to catastrophic failures of reasoning when the CAS enters a new, unfamiliar regime of emergent behavior.  Method 3) is specifically designed to understand CAS in all their complexity, but it isn't a "magic bullet" that completely eliminates the potential for surprise or extreme outcomes.

One huge difference between Method 2) "Linear Models" and Method 3) "Non-linear Models" is that Method 3) usually does not yield a forecast or prediction of system behavior in the same way that Method 2) does.  Instead it can help us understand when and why the CAS will change regimes, which is still very useful for anticipating potential surprises.
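As a small illustration of the kind of regime change that linear extrapolation from individual behavior misses, here is a minimal agent-based sketch in the spirit of Granovetter-style threshold models of collective action (my own toy construction, not from any source above; all numbers are arbitrary). Each agent joins once the fraction of agents already active reaches that agent's personal threshold, and a small shift in the threshold distribution flips the collective outcome from "almost nobody acts" to "almost everybody acts".

```python
# Toy threshold-cascade model: a small parameter change produces a regime shift.
import random

def cascade_size(mean_threshold, n_agents=10_000, seed=1):
    rng = random.Random(seed)
    thresholds = [rng.gauss(mean_threshold, 0.1) for _ in range(n_agents)]
    active_fraction = 0.01                     # a small seed of initial activists
    while True:
        # an agent is active once the observed activity level meets its threshold
        new_fraction = sum(t <= active_fraction for t in thresholds) / n_agents
        if new_fraction <= active_fraction:
            return active_fraction             # no further growth: equilibrium reached
        active_fraction = new_fraction

for mean in (0.30, 0.25, 0.20, 0.15):
    print(f"mean threshold {mean:.2f} -> final participation {cascade_size(mean):.2f}")
```

With these settings, participation stays near the 1% seed until the mean threshold drops below a critical region, and then jumps to nearly everyone. The interesting question for a Swarm-as-Swan is where that critical region sits, which is exactly what intuition and linear extrapolation tend to miss.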

Examples

In the previous section I  mentioned some illustrative examples. But here I'll mention two "biggies".

The modern economy has been characterized and studied as a Complex Adaptive System (CAS), especially to understand innovation and crises such as societal/economic collapse.  Some of the first books published by the Santa Fe Institute in the late '80s were titled "The Economy as an Evolving Complex System".

Mass uprisings and mass revolutions are other classes of phenomena that benefit from study as CAS.  I won't go into detail here, but they are worth reading up on if you are curious.

How to Cope with Swarm-as-Swan

The first step is to recognize that the system you are dealing with has the characteristics of a Complex Adaptive System.  Start with the Wikipedia page, then read some of the general references listed at the bottom.  This will give you the basic knowledge plus some exposure to many types of CAS.

The second step is to characterize the types of emergent behavior and structures that are within the "possibility space" of the CAS.  But stay away from "magical thinking" and "conspiracy theories".  

The third step is to apply modeling tools that are appropriate to the complexity of the CAS.  Linear models are fine for what they do, but don't try to use them to identify "phase transitions" from simple to complex behavior, etc.

If you aren't mathematically inclined or comfortable programming your own Agent-based Models (ABM), you can at least read books and papers that utilize these models and learn from the pros who built and analyzed them.  Even better, you can play with them yourself using the Model Library that comes with NetLogo (free and open source).  Each model is controlled by sliders and buttons, and comes with documentation that explains how to use it, how to interpret it, and how to explore it.

 

 

 

Wednesday, March 7, 2018

The Swan of No-Swan: Ambiguous Signals Tied To Cataclysmic Consequences

What do you see? colored blocks, or a Black Swan, or both?
This is figure-ground reversal, a type of ambiguity.
We are in the middle of the 100th anniversary of the Great War (a.k.a. World War I).  None of the great powers wanted a long total war. Yet the unthinkable happened anyway.

Surprisingly, historians are still struggling to understand what caused the war.

One of the biggest causal factors was ambiguous signals that precipitated cascading actions and reactions. When tied to cataclysmic consequences, this represents a distinct class of "Black Swan" systems.

(Here are some great lectures for those interested in a full analysis of the causes of the Great War: Margaret MacMillan, Michael Neiberg, Sean McMeekin)

Rethinking "Black Swans"

As I have mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is the seventh in the series "Think You Understand Black Swans? Think Again".)

It doesn't make sense to label any set of events as "Black Swans".  It's not the events themselves that are surprising; rather, it is the combination of a generating mechanism, our evidence about it, and our method of reasoning that makes them unexpected and surprising.


Definition

A "Swan of No-Swan" is a process where: 

  • The generating process is some set of large opposing forces that can be triggered by decisions based on ambiguous signals;
  • The evidence is signals -- communications, interpreted actions, interpreted inaction, rhetoric/discourse, irreversible commitments -- that have ambiguous interpretations, either intentionally or unintentionally;
  • The method of reasoning is either rational expectations (normative Decision Science) or biased expectations (Behavioral Psychology and Economics).  The key feature is a lack of attention or awareness that one might be mis-perceiving the signals, combined with a strategic preference for precautionary aggressiveness.


Main Features

First, let us recognize that ambiguity is pervasive in social, business, and political life.  Ambiguous signals and communication have many pro-social functions: keeping our options open, saving face, avoiding insult or offense, optimistic interpretation of events, and so on.  They are especially prevalent in the lead-up to major commitments -- romance+marriage in personal life and big ticket sales in commercial life.

Most of the time, ambiguity has a smoothing effect.  It reduces the probability of extreme/rare events because of the flexibility of action and response associated with ambiguous signals.  Therefore, most people would not associate ambiguous signals with any type of "Black Swan" phenomena.

But when tied to "large opposing forces", things change, and that's why this deserves to be a separate type of Black Swan.  Ambiguous signals become dangerous when they are linked to cataclysmic processes via certain types of reasoning.  It's not rational vs. biased.  Instead, it's committed self-confidence vs. self-aware fallibility. In committed self-confidence, there is a lack of attention or awareness that one might be mis-perceiving the signals, combined with a strategic preference for precautionary aggressiveness.  "Shoot first, ask questions later".
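A back-of-the-envelope sketch shows why this combination is so dangerous (all of the numbers here are invented for illustration, not estimates of anything). If each ambiguous signal has even a small chance of being misread as hostile, and a committed "shoot first" posture means the first misreading triggers the cataclysmic response, then over many signals the cumulative odds of escalation become large; a posture of self-aware fallibility that catches most misreadings changes the picture dramatically.

```python
# Illustrative only: cumulative probability that at least one ambiguous signal
# is misread as hostile and triggers the "big" response.
def prob_escalation(signals_per_year, years, p_misread):
    n_signals = signals_per_year * years
    return 1 - (1 - p_misread) ** n_signals

# Shoot first, ask questions later: every misread signal escalates.
print(prob_escalation(signals_per_year=50, years=10, p_misread=0.002))        # ~0.63
# Self-aware fallibility: an independent second look catches 95% of misreads.
print(prob_escalation(signals_per_year=50, years=10, p_misread=0.002 * 0.05)) # ~0.05
```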

Examples

Military forces drawn into total war are the obvious case, and the most common in history.   But we are now in a new age -- the Cyber Age!  (Yes.  I said it. Cyber)  Here are some cyber examples.
  • Offensive cyber capabilities -- By "offensive" I mean everything from "hack back" to punitive or disabling cyber attacks on critical infrastructure. If it becomes common for nation states and various non-nation actors to develop and deploy offensive capabilities, then everyone faces the strategic dilemma of when and how much to deploy/trigger each capability.  This depends critically on the ability of each actor to detect and accurately interpret a wide variety of signals and evidence related to normal and abnormal activity, including breach events, threat actor attribution, signs of escalation, and so on.  These are all swimming in ambiguity, including intentional ambiguity (spoofing, camouflage, etc.)
  • Remote kill switches -- What if Internet of Things (IOT) makers build "remote kill switches" into their devices? After all, we'd like to prevent our toaster, pacemaker, automobile, or drone from doing harm in the case where it starts malfunctioning catastrophically.  Are there scenarios where one or more IOT manufacturers decide to trigger a remote kill at the same time?  What if their monitoring instruments make it appear that some threat actor(s) are making self-driving cars intentionally crash into crowds of people?  Out of an abundance of caution, they might remotely kill the IOT devices to cut off the apparent disaster as it is unfolding.  But maybe the threat actor is only spoofing the signals because they pwned the monitoring devices and infrastructure.  Or maybe it's the precautionary action of some other IOT device system owner that is causing your monitoring system to go bonkers.  I could go on, but you get the idea.


How to Cope with Swan of No-Swan

It would be good to decouple the generating process if possible.  Avoid the arms race to begin with.  (Give peace a chance!)

Absent that, the best antidote is to treat evidence and signals pluralistically, which means avoiding the tendency to commit to one interpretation or another too early.  This is very hard to do within one person or even one cohesive team. It's easier to assign different "lenses" to different people or teams who then proceed with their analysis and decision recommendations independently.

Finally, the decision makers who can "pull the trigger" should seek strategy alternatives to the preference for precautionary aggressiveness ("Shoot first, ask questions later").  While decision makers may feel like this is their only choice (and it may be), there is great advantage if more flexible alternatives can be found.

Tuesday, January 12, 2016

Institutional Innovation in Contested Territory: Quantified Cyber Security and Risk

Say you are an entrepreneurial sort of person who wants to really change the world of cyber security. Problem: nobody seems to know where the game-changing innovation is going to come from.  Is it technology?  Is it economics?  Is it law and policy? Is it sociology? Maybe some combination, but which? And in what sequence?

If you aim for institutional innovation, then at some point you are going to need to take sides in the great "Quant vs. Non-quant" debate:
  • Can cyber security and risk be quantified? 
  • If "yes", how can quantitative information be used to significantly improve security outcomes?
Whether you choose Quant or Non-quant, you will need some tools and methods to advance the state of the art.  But how do you know if you are choosing the right tools, and using them well?  (Think about the difference between Numerology and Calculus as they might be applied to physics of motion.)

Whoever makes sufficient progress toward workable solutions will "win", in the sense of getting widespread adoption, even if the other approach is "better" in some objective sense (i.e. "in the long run").

I examine this innovation race in a book chapter (draft). The book will probably come out in 2016.

Abstract:
"The focus of this chapter is on how the thoughts and actions of actors coevolve when they are actively engaged in institutional innovation. Specifically: How do innovators take meaningful action when they are relatively ‘blind’ regarding most feasible or desirable paths of innovation? Our thesis is that innovators use knowledge artifacts – e.g. dictionaries, taxonomies, conceptual frameworks, formal procedures, digital information systems, tools, instruments, etc. – as cognitive and social scaffolding to support iterative refinement and development of partially developed ideas. We will use the case of institutional innovation in cyber security as a way to explore these questions in some detail, including a computational model of innovation."
Your feedback, comments, and questions would be most welcome.

The computational model used is called "Percolation Models of Innovation".  Here is the NetLogo code of the model used in the book chapter.   Below are some figures from the book chapter.

Innovation as percolation. Progress moves from bottom to top. Each column is a "technology", and neighboring columns are closely related.  This version (S&V 2005) only models rate of progress and distribution of "sizes", not anything about the technology or trajectory of innovation.
A screen shot of the user interface.  Three different models can be selected (upper left).
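To give a feel for the percolation idea, here is a stripped-down sketch in Python (my own simplification, not the NetLogo code linked above, and it leaves out the R&D search dynamics of the actual model). Columns are technologies and rows are performance levels; each cell is technically feasible with some probability, and a feasible cell counts as achieved only if it is supported by an achieved cell one level down in the same or a neighboring technology.

```python
# Stripped-down percolation-of-innovation sketch (illustrative only).
import random

def innovation_run(q, n_tech=100, n_levels=400, seed=3):
    """Return per-technology frontier heights after growing the achieved set
    upward through cells that are feasible (with probability q) and supported
    by an achieved cell one level below in the same or an adjacent column."""
    rng = random.Random(seed)
    feasible = [[rng.random() < q for _ in range(n_tech)] for _ in range(n_levels)]
    achieved = [[False] * n_tech for _ in range(n_levels)]
    achieved[0] = [True] * n_tech                    # the established baseline
    for row in range(1, n_levels):                   # one bottom-up sweep suffices,
        for col in range(n_tech):                    # since support only comes from below
            if feasible[row][col] and any(
                achieved[row - 1][c]
                for c in (col - 1, col, col + 1) if 0 <= c < n_tech
            ):
                achieved[row][col] = True
    return [max((r for r in range(n_levels) if achieved[r][c]), default=0)
            for c in range(n_tech)]

for q in (0.45, 0.55, 0.65):
    frontier = innovation_run(q)
    print(f"feasibility {q:.2f}: mean frontier {sum(frontier)/len(frontier):6.1f}, "
          f"min {min(frontier):3d}, max {max(frontier):3d}")
```

Below a critical feasibility the frontiers stall close to the baseline, above it they climb steadily, and near the threshold progress is wildly uneven across technologies, which is the regime where models of this family generate their highly skewed distribution of innovation "sizes".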

Sunday, March 16, 2014

S Kauffman on Emergent Possibility Spaces in Evolutionary Biology, Economics, & Tech. (great lecture)

Below is a great lecture by Stuart Kauffman on the scientific and philosophical consequences of emergent possibility spaces in evolutionary biology and evolutionary economics, including technology innovation and even cyber security. This web page has both the video and lecture notes.

The lecture is very accessible to anyone who reads books or watches programs on science aimed at the general public -- especially evolution, ecology, complexity, and innovation. He does mention some mathematical topics related to Newtonian physics and also Quantum Mechanics, but you don't need to know the details of any of the math to follow his argument.  He gives very simple examples for all the important points he makes.


There are very important implications for epistemology (what do we know? what can be known?), scientific methods and research programs, and the causal role of cognition, conception, and creativity in economic and technological change. This last implication is an important element in my dissertation. I'll write more on that later.

Monday, February 24, 2014

#BSidesSF Prezo: Getting a Grip on Unexpected Consequences

Here are the slides I'm presenting today at B-Sides San Francisco (4pm).  I suggest that you download it as PPTX because it is best viewed in PowerPoint so you can read the stories in the speaker notes.

Wednesday, October 2, 2013

Out-of-the-Blue Swans: Megatsunami, Supervolcanos, The Black Death, and Other Cataclysms

The Out-of-the-Blue Swan is out there waiting to ruin our
day, month, year, decade, or century.
This is the fifth in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This will be a short post because the phenomena and implications for risk management are fairly simple (at least for individual people and firms). I've seen a few people include these in their list of "Black Swans" when they want to emphasize events with massive destruction and unpredictable timing.

Wednesday, August 28, 2013

Disappearing Swans: Descartes' Demon -- the Ultimate in Diabolical Deception

The Disappearing Swan.  Now you see it.   Now you don't.
Descartes' Demon has fog machines,  fake signs,
and much much more to mess with your head.
This is the fourth in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This one is named "Disappearing" because the emphasis is on deception to the ultimate degree.

The Disappearing Swans are mostly a rhetorical fiction -- imaginary and socially constructed entities that are treated as real for the purposes of persuasion.  They are often mentioned as reasons why we can never understand anything about any variety of Black Swan, especially those with "intelligent adversaries".  I'm including Disappearing Swans in this series mostly for completeness and to make distinctions with other, more common Swans like Red Swans.

Tuesday, August 27, 2013

Red Swans: Extreme Adversaries, Evolutionary Arms Races, and the Red Queen

The Red Swan of evolutionary arms races, where the
basis for competition is the innovation process itself.
As the Red Queen says: "...it takes all the running you can do,
to keep in the same place."
This is the third in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This one is named "Red" after the Red Queen Hypothesis in evolutionary biology, which itself draws from the Red Queen in Lewis Carroll's Through the Looking Glass (sequel to Alice in Wonderland).  But in this post I'll talk about competitive and adversarial innovation in general, including host-parasite systems that are most analogous to cyber security today.

In addition to the usual definition and explanations, I've added a postscript at the end: "Why Red Swans Are Different From Ordinary Competition and Adversarial Rivalry".

Friday, August 9, 2013

Green Swans: Virtuous Circles, Snowballs, Bandwagons, and the Rich Get Richer

The Green Swan of cumulative prosperity.
The future's so bright she's gotta wear shades.
This is the second in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again."  This one is named "Green" as an allusion to the outsized success and wealth that often arise through this process, though by no means is it limited to material or economic gains.

Taleb includes the Internet and the Personal Computer among his prime examples of Black Swan events.  In this post I hope to convince you that these phenomena are quite different from his other examples (e.g. what I've labeled "Grey Swans") and that there is value in understanding them separately.

Thursday, August 1, 2013

Grey Swans: Cascades in Large Networks and Highly Optimized/Critically Balanced Systems

A Grey Swan -- almost Black, but not quite. More narrowly defined.
This is the first of the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again."

I'll define and describe each one, and maybe give some examples. Most important, each of these Shades will be defined by a mostly-unique set of 1) generating process(es); 2) evidence and beliefs; and 3) methods of reasoning and understanding.  As described in the introductory post, it's only in the interaction of these three that Black Swan phenomena arise. Each post will close with a section called "How To Cope..." that, hopefully, will make it clear why this Many Shades approach is better than the all-lumped-together Black Swan category.

This first one is named "Grey" because it's closest to Taleb's original concept before it got hopelessly expanded and confused.