Wednesday, August 28, 2013

Disappearing Swans: Descartes' Demon -- the Ultimate in Diabolical Deception

The Disappearing Swan.  Now you see it.   Now you don't.
Descartes' Demon has fog machines,  fake signs,
and much, much more to mess with your head.
This is the fourth in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This one is named "Disappearing" because the emphasis is on deception to the ultimate degree.

The Disappearing Swans are mostly a rhetorical fiction -- an imaginary, socially constructed entity that is treated as real for the purposes of persuasion.  They are often mentioned as reasons why we can never understand anything about any variety of Black Swan, especially those with "intelligent adversaries".  I'm including Disappearing Swans in this series mostly for completeness and to distinguish them from other, more common Swans like Red Swans.

Tuesday, August 27, 2013

Red Swans: Extreme Adversaries, Evolutionary Arms Races, and the Red Queen

The Red Swan of evolutionary arms races, where the
basis for competition is the innovation process itself.
As the Red Queen says: "...it takes all the running you can do,
to keep in the same place."
This is the third in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This one is named "Red" after the Red Queen Hypothesis in evolutionary biology, which itself draws from the Red Queen in Lewis Carroll's Through the Looking Glass (sequel to Alice in Wonderland).  But in this post I'll talk about competitive and adversarial innovation in general, including host-parasite systems that are most analogous to cyber security today.

In addition to the usual definition and explanations, I've added a postscript at the end: "Why Red Swans Are Different From Ordinary Competition and Adversarial Rivalry".
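
For a feel of what "running to stay in the same place" means, here is a minimal sketch in Python (my own toy model with made-up numbers, not anything from the post itself): attacker and defender both add capability every round, so each side's absolute sophistication climbs steadily, yet the gap between them -- the thing that actually decides engagements -- stays small relative to the ground both have covered.

```python
import random

random.seed(3)

# Toy Red Queen arms race: both sides innovate every round, so absolute
# capability climbs steadily, but the gap between them stays small
# compared to how far both have come.  (All numbers are illustrative.)
attacker, defender = 1.0, 1.0
for round_num in range(1, 51):
    attacker += random.uniform(0.5, 1.5)   # attacker innovation this round
    defender += random.uniform(0.5, 1.5)   # defender innovation this round
    if round_num % 10 == 0:
        print(f"round {round_num:2d}: attacker={attacker:6.1f}  "
              f"defender={defender:6.1f}  gap={attacker - defender:+5.1f}")
```

Neither side can stop investing: standing still for even a few rounds hands the other a decisive lead.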

Monday, August 26, 2013

Risk Management: Out with the Old, In with the New!

In this post I'm going to attempt to explain why I think many existing methods of assessing and managing risk in information security (a.k.a. "the Old") are headed in the wrong direction, and to describe what I think is a better direction (a.k.a. "the New").

While the House of Cards metaphor is crude, it gets across the idea of interdependence
between risk factors, in contrast to the "risk bricks" of the old methods.

Here's my main message:
  • Existing methods that treat risk as if it were a pile of autonomous "risk bricks" take risk management in the wrong direction.  ("Little 'r' risk")
  • A better method is to measure and estimate risk as an interdependent system of factors, roughly analogous to a House of Cards.  ("Big 'R' Risk")

I call the first "Little 'r' risk" because it attempts to analyze risk at the most granular micro level.  I call the second "Big 'R' Risk" because the focus is on estimating risk at an organization level (e.g. business unit) and then estimating the causal factors that have the most influence on that aggregate risk.  With some over-simplification, we can say that Little 'r' risk is bottom-up while Big 'R' Risk is top-down.  (In practice, Big 'R' Risk is more "middle-out".)
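
To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not from the Society of Actuaries tutorial; the assets, probabilities, impacts, and the "weak patching" factor are all made-up numbers).  The Little 'r' approach simply adds up independent expected losses per "brick"; the Big 'R' sketch runs a crude Monte Carlo simulation in which one shared causal factor raises all the event probabilities together, so the organization-level picture is not just the sum of the parts.

```python
import random

random.seed(1)

# Hypothetical per-asset annual loss estimates (probability, impact) --
# the "risk bricks" of the Little 'r' approach.  Numbers are illustrative only.
bricks = {"web app": (0.10, 200_000), "laptops": (0.30, 50_000), "vendor": (0.05, 500_000)}

# Little 'r': treat each brick as independent and just add expected losses.
little_r = sum(p * impact for p, impact in bricks.values())

# Big 'R' (sketch): a shared causal factor (e.g. weak patching discipline)
# raises every event probability at once, so losses are interdependent.
def simulate_year():
    weak_patching = random.random() < 0.2          # hypothetical common cause
    multiplier = 3.0 if weak_patching else 1.0
    total = 0.0
    for p, impact in bricks.values():
        if random.random() < min(1.0, p * multiplier):
            total += impact
    return total

years = [simulate_year() for _ in range(100_000)]
big_r_mean = sum(years) / len(years)
big_r_p99 = sorted(years)[int(0.99 * len(years))]

print(f"Little 'r' expected annual loss:        {little_r:,.0f}")
print(f"Big 'R' simulated mean annual loss:     {big_r_mean:,.0f}")
print(f"Big 'R' simulated 99th percentile loss: {big_r_p99:,.0f}")
```

The point isn't the specific numbers.  It's that once the factors share causes, the aggregate tail can't be recovered by summing the bricks -- which is exactly the interdependence the House of Cards metaphor is meant to capture.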

This new method isn't my idea alone.  It comes from many smart folks who have been working on Operational Risk for many years, mainly in Financial Services.  For a more complete description of the new approach, I strongly recommend the following tutorial document by the Society of Actuaries: A New Approach for Managing Operational Risk. 

For readability and to keep an already-long post from being even longer, I'm going to talk in broad generalities and skip over many details.  Also, I'm not going to explain and evaluate each of the existing methods.  Finally, I'm not going to argue point-by-point with all the folks who assert that probabilistic risk analysis is futile, worthless, or even harmful.

Saturday, August 24, 2013

First Presentation of "Ten Dimensions..." at BSides-LA

I had fun on Friday presenting the "Ten Dimensions of Cyber Security Performance" at BSides-LA.  This was the first time I had presented it in a general forum, so I was looking forward to seeing how it would "fly" and what reactions it would get.

On the plus side, several people were pretty excited and I had some great discussions afterward.  Also, I got most of the presentation done in the available time, but I still have more tuning to do.

On the down side, there weren't as many people in my session as I had hoped.  It was one of the last sessions on the last day, so that probably had an impact.  Or maybe the headline or topic wasn't widely interesting.  But the people who were there were interested and engaged, which is what matters most.

All in all, for a first presentation, I felt it was successful.

Here are the slides.  View in full screen mode to enjoy the animations.

Friday, August 9, 2013

Green Swans: Virtuous Circles, Snowballs, Bandwagons, and the Rich Get Richer

The Green Swan of cumulative prosperity.
The future's so bright she's gotta wear shades.
This is the second in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again."  This one is named "Green" as an allusion to the outsized success and wealth that often arise through this process, though it is by no means limited to material or economic gains.

Taleb includes the Internet and the Personal Computer among his prime examples of Black Swan events.  In this post I hope to convince you that these phenomena are quite different from his other examples (e.g. what I've labeled "Grey Swans") and that there is value in understanding them separately.
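
As a preview of the mechanism, here is a minimal sketch in Python (my own toy model, not anything from Taleb): a Pólya-urn style "rich get richer" process in which each new unit of success goes to a player in proportion to the success they already have.  Everyone starts equal, yet the final shares are wildly unequal, and which players come out on top is decided largely by early luck.

```python
import random

random.seed(42)

# Pólya-urn style "rich get richer" sketch: each new unit of success goes to a
# player with probability proportional to the success they already have, so
# small, largely random early leads compound into large final differences.
players = list(range(20))
wealth = [1] * 20                     # everyone starts equal
for _ in range(10_000):
    winner = random.choices(players, weights=wealth, k=1)[0]
    wealth[winner] += 1

wealth.sort(reverse=True)
print("Final wealth, sorted:", wealth)
print(f"Top 2 of 20 players hold {sum(wealth[:2]) / sum(wealth):.0%} of the total")
```

Run it with different seeds: the identity of the big winners changes, but the lopsided outcome doesn't -- that combination of predictable inequality and unpredictable winners is what makes the Green Swan feel like a Black Swan from the inside.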

Thursday, August 1, 2013

Grey Swans: Cascades in Large Networks and Highly Optimized/Critically Balanced Systems

A Grey Swan -- almost Black, but not quite. More narrowly defined.
This is the first of the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again."

I'll define and describe each one, and maybe give some examples. Most important, each of these Shades will be defined by a mostly-unique set of 1) generating process(es); 2) evidence and beliefs; and 3) methods of reasoning and understanding.  As described in the introductory post, it's only in the interaction of these three that Black Swan phenomena arise. Each post will close with a section called "How To Cope..." that, hopefully, will make it clear why this Many Shades approach is better than the all-lumped-together Black Swan category.

This first one is named "Grey" because it's closest to Taleb's original concept before it got hopelessly expanded and confused.

Tutorial: How Fat-Tailed Probability Distributions Defy Common Sense and How to Handle Them

This post is related to the Grey Swans post, but it's a good topic to present on its own.

For random time series, we often ask general questions to learn something about the probability distribution we are dealing with:
  1. What's average?  What's typical?
  2. How much does it vary?  How wide is the "spread"?  Is it "skewed" to one side?
  3. How extreme can the outcomes be?
  4. How good are our estimates, given the sample size?  Do we have enough samples?
If we have a good-sized sample of data, common sense tells us that "average" is somewhere in the middle of the sample values and that the "spread" and "extreme" of the sample are about the same as those of the underlying distribution.  Finally, common sense tells us that after we have good estimates, we don't need to gather any more sample data because it won't change our estimates much.

It turns out that these common-sense answers could all be flat wrong, depending on how "fat" the tail of the distribution is.  Now that's surprising!
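
To see this in action, here is a minimal sketch in Python (my own illustration; the Normal and Pareto parameters are chosen only to make the contrast visible).  It tracks the running sample mean for a thin-tailed Normal distribution and for a fat-tailed Pareto distribution with tail exponent close to 1, checked at 1,000, 10,000, and 100,000 draws.

```python
import random

random.seed(7)

def running_means(draw, n):
    """Return the running sample mean after each of n draws from draw()."""
    total, means = 0.0, []
    for i in range(1, n + 1):
        total += draw()
        means.append(total / i)
    return means

n = 100_000
normal_means = running_means(lambda: random.gauss(10, 3), n)          # thin-tailed
pareto_means = running_means(lambda: random.paretovariate(1.1), n)    # fat-tailed

for label, means in [("Normal(10, 3)    ", normal_means),
                     ("Pareto(alpha=1.1)", pareto_means)]:
    checkpoints = [means[k - 1] for k in (1_000, 10_000, 100_000)]
    print(label, "running mean at 1k/10k/100k draws:",
          "  ".join(f"{m:8.2f}" for m in checkpoints))
```

Run it a few times with different seeds: the Normal running mean settles near 10 almost immediately, while the Pareto running mean can lurch whenever a single huge draw lands, so more data keeps changing the estimate -- the opposite of what common sense expects.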