Wednesday, November 27, 2013

Did chariots & cavalry drive social complexity for 3,000 yrs.? (letter to PNAS)

Battle scene decoration on the chariot of Thutmose III
(click to enlarge)
(It's holiday time, so it's probably a good time for non-risk, non-InfoSec posts.  Time for war chariots!)

tl;dr:  I wrote a letter to PNAS disputing the claims made by a famous author in a recent article.  My beef was with their modeling methods. We'll see if it gets published and if the author(s) respond.


Friday, November 22, 2013

"Prediction" vs "Forecast"

The "Bonds Shift" was based on a forecast.  In contrast,
the decision to intentionally walk him so often
(120 times in 2004) was based on a prediction that
the shift wouldn't work well enough.
We sometimes hear arguments against quantitative risk analysis that include the claim that "you can't predict what intelligent adversaries will do".  In reply, advocates often say "we don't aim to predict, but instead to forecast", but that rarely settles the argument because people don't agree on what those terms mean or whether they even differ.

Most recently, this topic was debated by the hosts of the Risk Science Podcast, Ep. 9 (31:10 to 55:00).

Summarizing the debate: two hosts say there’s no meaningful difference between “prediction” and “forecast” because they are both probabilistic statements about the future -- plus real people don’t care. In contrast, two hosts disagree, saying there is a meaningful difference and real-world people do care.

I side with the people who say there is a meaningful difference, but I’m not sure the essence of the difference came out in the podcast conversation. I do think that Jay’s statement at 31:10 is the best jumping off point.

 The main difference between "prediction" and "forecast", in my opinion, has to do with what actions you take based on the information and what uncertainty is communicated.

Thursday, November 14, 2013

Several pieces of good news

Sorry I haven't posted in a while.  I've been pretty busy with research work -- writing papers for conferences, mostly.  But I've got some good news to report.

Cash will be flowing as nature intended.
First, I'm starting a full-time job at a Financial Institution* with the title Security Data Analyst/Scientist, which I choose to shorten to Security Data Scientist.  This is a big deal on many levels.  One of the best things is that their capabilities are comparatively mature and the leadership is both visionary and pragmatic.  This means I hope to do some fairly compelling analysis, drawing on rich data sources and prior work rather than starting from scratch.

(* My Twitter followers will know.)

I'm continuing my PhD program part-time, focusing on my dissertation.  I hope to complete that in 2014.

Also, I'll continue blogging here on all the same topics.

Second, I'm very happy to say that I've had a talk accepted at the RSA Conference in February 2014, co-presenting with David Severski:
10 Dimensions of Security Performance for Agility & Rapid Learning
2/26/2014, 10:40 AM - 11:00 AM
Abstract: Information security is an innovation arms race. We need agility and rapid learning to stay ahead of adversaries. In this presentation, you'll learn about a Balanced Scorecard method called the Ten Dimensions of Cyber Security Performance. Case studies will show how this approach can dramatically improve organizational learning and agility, and also help get buy-in from managers and executives. 
This is a 20-minute time slot, and there's no way that I can compress my 60-minute or 45-minute versions of "Ten Dimensions" into such a short time.  Therefore, David and I are going to cook up an extended "trailer" that conveys the basic idea of double loop learning in practice (David is doing some neat stuff that we'll try to "fly through").  In parallel, I hope to have some videos, a webinar, or other media that people can go to in order to get a proper introduction and survey.

Also, I've proposed a peer-to-peer session at RSA on a related theme: "Building a Quantitative Evidence-based Security & Risk Management Program".  I should hear later in November whether it's been accepted.  It will be an hour long session and I will only be facilitating, but it should be a good time for Q&A, sharing insights, etc.

Finally, I'll be presenting a SIRA webinar "Big 'R' Risk Management - from concept to pilot implementation".  This is basically the same talk I gave at SIRAcon, but some people couldn't attend that session (we had parallel tracks) and many people couldn't attend SIRAcon at all.  I think it'll be in December, but there isn't a date set yet.

I've got some good blog posts in the works, including Game Theory Meets Risk Analysis, several more Shades of Black Swans, a review of RIPE, some philosophy, and others.   Thanks for reading and thanks for your comments, both here and in other media.

--------

One more bit of good news from a completely different domain: the book Chasing Chariots is coming soon!  It includes most of the papers presented at the First International Chariot Conference held in Cairo in December 2012.  The evolution of technology in the Late Bronze Age became a strong interest (a.k.a. compulsion) of mine a couple years ago, with particular focus on the so-called "first revolution in military affairs" -- the war chariot.  Beyond just curiosity, I'd like to do some serious research in this area, but short of getting a second PhD, the only way it's going to happen is if I can find some collaborators (after I graduate!).

Periodically, I'll post some war chariot stuff here.  Bruce S. has his squids;  I have my war chariots.

Monday, October 21, 2013

preso: Big 'R' Risk Management - from concept to pilot implementation

Here's the presentation (pdf) that I'm giving Monday at SIRAcon in Seattle.  It extends the ideas presented in the post "Risk Management: Out with the Old, In with the New!" and includes some specifics on how to get started implementing the Big 'R' approach. It's even got an illustrative case toward the end featuring patch management and exceptions, shown in this figure (click to enlarge).

Example of Causal Dynamic Analysis, in this case Patch Management & Exceptions
(click to enlarge)

Monday, October 7, 2013

Baby Boomer on Board! -- a data-based exploration


Kindergarten graduations.  Play dates. First names like Zaiden (boy) and Bristol (girl). Driving kids to and from school in SUVs and minivans when they live in walking distance (e.g. 2 blocks).


And those "Baby on Board!" signs.

What "Baby on Board!" really signifies.
How did we get here?

This post is a data-based exploration into the origins of these cultural patterns in the White middle and upper-middle classes in the US.

(warning: long post, but with many charts and pictures)


Wednesday, October 2, 2013

Out-of-the-Blue Swans: Megatsunami, Supervolcanos, The Black Death, and Other Cataclysms

The Out-of-the-Blue Swan is out there waiting to ruin our
day, month, year, decade, or century.
This is the fifth in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This will be a short post because the phenomena and their implications for risk management are fairly simple (at least for individual people and firms). I've seen a few people include these in their list of "Black Swans" when they want to emphasize events with massive destruction and unpredictable timing.

Tuesday, October 1, 2013

Pantyhose attitudes correlate to GOP's "Suicide Caucus" districts

The New Yorker blog has an interesting post on the geography and demography behind the current government shutdown: "Where the GOP's Suicide Caucus Lives".  Read the article because it has a lot of good data on the districts of 80 Representatives who have pushed the Republican leadership into this battle.  (The label "Suicide Caucus" was coined by conservative commentator Charles Krauthammer.)

Here's the map of districts of those Representatives (click to enlarge):

This immediately reminded me of the map I posted regarding attitudes toward pantyhose in this post.  Here's the map from that post:

Just to see if the two might be correlated, I physically combined the maps (click to enlarge):

[Edit 10/8/13 -- added this map, below]

Here's the same data, but this map only shows the relative prevalence of people who believe it is "acceptable".  Red indicates very high prevalence, yellow indicates medium prevalence, and blue, low prevalence. (Click on map to enlarge.)




I think the correlation is striking, though certainly not perfect, especially in certain regions.  For example, the region of East Texas and West Louisiana is a stronghold of the Suicide Caucus but is very pro-pantyhose.  Likewise, we'd expect Suicide Caucus members in Nebraska, Oklahoma, South Dakota, and West Virginia, but there are none (at least as indicated by signatures on the letter).   Also, the most pro-pantyhose region of Colorado is in the Caucus, and likewise with Idaho.

Food for thought and grounds for further examination.
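For anyone who wants to push past eyeballing overlaid maps, here is a minimal sketch of how the comparison could be quantified. Everything in it is hypothetical: the district-level variables are random placeholders standing in for the actual caucus signatories and the survey's regional prevalence scores, and the point-biserial correlation is just one reasonable choice for a binary-vs-continuous association.

```python
# A minimal sketch of quantifying the eyeball comparison above.
# The district data here are made up for illustration; in practice you would
# join the 80 signatories' districts to the survey's regional prevalence scores.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)

# Hypothetical inputs: one row per congressional district.
in_caucus = rng.integers(0, 2, size=435)        # 1 = signed the letter, 0 = did not (placeholder)
acceptance = rng.uniform(0.0, 1.0, size=435)    # regional "pantyhose acceptable" prevalence (placeholder)

# Point-biserial correlation between a binary and a continuous variable.
r, p_value = pointbiserialr(in_caucus, acceptance)
print(f"point-biserial r = {r:.3f}, p = {p_value:.3f}")
```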

Wednesday, September 25, 2013

I'm presenting at SIRAcon Oct 21, Seattle WA


SIRAcon - registration
Monday, October 21, 2013

Bell Harbor Conference Center
Pier 66
Seattle, WA 98121

You won't find any conference with a higher concentration of bright, forward-thinking InfoSec risk folks than SIRAcon.

________________

Title: Big ‘R’ Risk Management (the “Modern Approach”) — From Concept to Pilot Implementation

Abstract:
Big ‘R’ Risk Management is also known as the Modern Approach to Operational Risk.  It’s a very different approach to probabilistic risk analysis.  Instead of trying to quantify the risk of individual threat + vulnerability + consequence combinations, the focus is on quantitative estimation of the factors that drive aggregate risk at a business unit or enterprise level.  While it’s been described in concept,  there isn’t much information on implementation.

As introduction, the presentation will start with an overview of the Modern Approach and the generic steps in the analysis and decision-making.  The rest of the presentation will be a walkthrough of one or two illustrative cases to show how it would be implemented in practice, especially in a pilot or a proof-of-concept.

The main takeaway will be a better understanding of the viability of the Modern Approach and practical guidance on how to get started on it via a pilot implementation.


Sunday, September 22, 2013

Blogging as rapid prototyping

People blog for a lot of reasons and in many styles. Except for occasional posts like this one, I rarely write short posts.  No doubt, some potential readers will find this to be a big turnoff.  I'm fine with that.  I know who I aim to serve and who I don't.  This blog isn't for people who have short attention spans or who only want bite-sized "nuggets".

In addition to writing to serve readers, I also write for my own purposes. I've discovered that blogging works best when it serves as a way to rapidly prototype ideas and methods that might later become academic papers, industry presentations, book chapters, books, software models, and such.  These final products take a lot of time and effort to produce in final form.  Blogging gives me the opportunity to get started on them, focusing on just a few ideas at a time and without the need to have everything worked out.  Plus I get to see how people react, either through page views, social media comments, blog comments, or private email.

Tuesday, September 17, 2013

Movie plot: 2017 Texas Heat Wave (EnergySec Summit presentation)

I'm presenting tomorrow at the EnergySec Summit in Denver, 2:15 to 2:50pm.  If you are attending, come and say "hi". Since it's such a tight time slot, the pace of presentation will be pretty fast.  Therefore you might want to preview my presentation in advance or have it open while I'm presenting:
This is the Ten Dimensions of Cyber Security Performance but I'm using a different presentation approach than in the blog posts or my Bsides-LA presentation. As a dramatic device, I'm using a "movie plot" to help the audience imagine how the Ten Dimensions would make a difference once they are implemented.

As you might already know, I won Bruce Schneier's Sixth Annual Movie Plot Threat contest. This movie plot was constructed using a similar approach and methods. My main goal was to stretch the imagination of the audience by emphasizing a threat and attack scenario that isn't often considered, yet is very plausible -- namely business partners as threat agents. I also wanted a scenario that was not a typical attack with typical consequences, yet was serious at a system level.

[Edit: shout out to Andy Bochman who just wrote this post on the value of a compelling story to boost awareness and understanding. Great minds think alike!]


As the 2017 heat wave extended into its third week, "Monkey’s Uncle" had netted
Gold Man Hacks almost $300 million in bonus payments, with no end in sight.

If any of the microgrid operators had noticed their anomalous wholesale transactions
and had been sufficiently capable of doing a proper investigation…

Thursday, September 12, 2013

I'm leaving Facebook (Frog escapes slowly boiling pot)

That's a frog on the handle.  It was in the pot
but jumped out when things got too hot.
I'm one frog that has noticed that the water in the Facebook pot is getting too hot for comfort.  I'm jumping out.

I'm leaving Facebook this week -- permanently. I'm tired of the creeping encroachments on my privacy. Also I'm no longer willing to be a part of Facebook's quest to commercialize and make public all of our social relations and interactions.


The most recent privacy policy changes are the proximate cause (see this, this, this and this).  Though protest and government scrutiny have prompted Facebook to delay implementation, the trend is clear.

The title of this post refers to the story of the Boiling Frog:
If you drop a frog in a pot of boiling water, it will of course frantically try to clamber out. But if you place it gently in a pot of tepid water and turn the heat on low, it will float there quite placidly. As the water gradually heats up, the frog will sink into a tranquil stupor, exactly like one of us in a hot bath, and before long, with a smile on its face, it will unresistingly allow itself to be boiled to death.
(version of the story from Daniel Quinn's The Story of B)
I'm not against businesses making money through advertising in their "free" services.  It's just the way Facebook is doing it that deeply bothers me.

Privacy isn't just "not disclosing private information".  It's also about people keeping control of their private information, where and how it is used, and by whom.  Facebook's latest changes are forcing users like me to give away vital elements of control, in my opinion.

Finally, I don't trust them to keep to the spirit of privacy.  Facebook's definition of privacy is like Bill Clinton's definition of "sexual relations" -- an unreasonably narrow definition whose rhetorical aim is to dissemble.  At best, I believe Facebook will continue to keep to the letter of their constantly shifting privacy policy and user agreement, all the while constantly finding ways to subtly erode our privacy. At worst -- well, obviously very bad things would happen.  But I'm acting on the assumption of the best case, not the worst.



Bye, bye, Facebook.  And I won't be coming back.

Sunday, September 8, 2013

Mr Langner is wrong. Risk management isn't 'bound to fail'. But it does need improvement and innovation.

In "Bound to Fail: Why Cyber Security Risk Cannot Simply Be 'Managed' Away" (Feb 2013) and a recent white paper, Ralph Langer argues that risk management is a fundamentally flawed approach to cyber security, especially for critical infrastructure.

Langner's views have persuaded some people and received attention in the media.  He gained some fame in the course of the investigation of the Stuxnet worm's capabilities to exploit Siemens PLCs (programmable logic controllers). Specifically, Ralph was the first to assert that the Stuxnet worm was a precision weapon aimed at sabotaging Iran's nuclear program. Langner also gains institutional credibility as a Nonresident Fellow at the Brookings Institution, which published the "Bound to Fail..." paper.  I'm guessing that the Brookings PR department has been helping to get press attention for Langner's blog post critiquing NIST CSF and his proposed alternative: RIPE.  They were covered in seven on-line publications last week alone: here, here, here, here, here, here, and here.   (Note to self: get a publicist.)

In this long post, I'm going to critique Mr. Langner's critique of risk management, pointing to a few places where I agree with him, but I will present counter-arguments to his arguments that risk management is fundamentally flawed.

  • TL;DR version: There's plenty of innovation potential in the modern approach to risk management that Langner hasn't considered or doesn't know about. Therefore, "bound to fail" is false.  Instead, things are just now getting interesting.  Invest more, not less.

In the next post, I'll critique Mr. Langner's proposed alternative for an industrial control system security framework, which he dubs "Robust ICS Planning and Evaluation" (RIPE).

Friday, September 6, 2013

Latest #NISTCSF draft: "Three yards and a cloud of dust" will not be enough to matter

I wish I could write a more favorable review of the latest NIST Cyber Security Framework (CSF) draft.  I'm in favor of frameworks that might help us break out of the current malaise in cyber security.  I'm not anti-government or anti-regulation, either.  But my review isn't favorable because I don't think NIST CSF will promote the type of change necessary to make a meaningful difference.  It's not about the details but mostly about the overall structure and strategy behind it, as I describe below.

CSF advances the ball, but not enough to matter,
especially considering the effort.
The title of this blog post refers to the offensive strategy of Ohio State football coach Woody Hayes.  He had success for nearly 30 years with a very conservative, grind-it-out, ground-based offense that centered on running plays between the tackles. His teams often threw fewer than 10 passes a game.  Since a team needs 10 yards every 4 plays to hold on to the ball, if you could guarantee 3 yards per play you could hold on to the ball for a long time and eventually grind it in for a score.  This also worked in part because it minimized turnovers.  It was predictable and not very imaginative, but succeeded through sheer mass, strength, effort, and persistence.

This may have worked for Woody at Ohio State, but it's a poor model for progress in cyber security.  The "pace of the game" is governed by the clock speed of innovation on the part of threat agents, not by us defenders.  Plus, most organizations aren't sitting at "3rd down and 3 yds to go" -- more like "3rd down and 33 yds to go".  "Three yards and a cloud of dust" is ultimately a failing strategy because it leaves us perpetually behind the minimum threshold of acceptable performance.

Wednesday, August 28, 2013

Disappearing Swans: Descartes' Demon -- the Ultimate in Diabolical Deception

The Disappearing Swan.  Now you see it.   Now you don't.
Descartes' Demon has fog machines,  fake signs,
and much much more to mess with your head.
This is the fourth in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This one is named "Disappearing" because the emphasis is on deception to the ultimate degree.

The Disappearing Swans are mostly a rhetorical fiction -- an imaginary and socially constructed entity that is treated as real for the purposes of persuasion.  They are often mentioned as reasons why we can never understand anything about any variety of Black Swan, especially those with "intelligent adversaries".  I'm including Disappearing Swans in this series mostly for completeness and to make distinctions with other, more common Swans like Red Swans.

Tuesday, August 27, 2013

Red Swans: Extreme Adversaries, Evolutionary Arms Races, and the Red Queen

The Red Swan of evolutionary arms races, where the
basis for competition is the innovation process itself.
As the Red Queen says: "...it takes all the running you can do,
to keep in the same place."
This is the third in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again." This one is named "Red" after the Red Queen Hypothesis in evolutionary biology, which itself draws from the Red Queen in Lewis Carroll's Through the Looking Glass (sequel to Alice in Wonderland).  But in this post I'll talk about competitive and adversarial innovation in general, including host-parasite systems that are most analogous to cyber security today.

In addition to the usual definition and explanations, I've added a postscript at the end: "Why Red Swans Are Different From Ordinary Competition and Adversarial Rivalry".

Monday, August 26, 2013

Risk Management: Out with the Old, In with the New!

In this post I'm going to attempt to explain why I think many existing methods of assessing and managing risk in information security (a.k.a. "the Old") are going the wrong direction and describe what I think is a better direction (a.k.a. "the New").

While the House of Cards metaphor is crude, it gets across the idea of interdependence
between risk factors, in contrast to the "risk bricks" of the old methods.

Here's my main message:
  • Existing methods that treat risk as if it were a pile of autonomous "risk bricks" take the wrong direction for risk management.  ("Little 'r' risk")
  • A better method is to measure and estimate risk as an interdependent system of factors, roughly analogous to a House of Cards.   ("Big 'R' Risk")

I call the first "Little 'r' risk" because it attempts to analyze risk at the most granular micro level.  I call the second "Big 'R' Risk" because the focus is on risk estimation at an organization level (e.g. business unit), and then to estimate the causal factors that have the most influence on that aggregate risk.  With some over-simplification, we can say that Little 'r' risk is bottom-up while Big 'R' Risk is top-down.  (In practice, Big 'R' Risk is more "middle-out".)
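To make the contrast concrete, here is a toy Monte Carlo sketch of my own (not taken from the Society of Actuaries tutorial, and with made-up numbers): the "Little 'r'" run treats twenty loss items as independent, while the "Big 'R'" run lets a single shared causal factor drive frequency and severity for all items at once. The shared factor is what fattens the aggregate tail.

```python
# A toy Monte Carlo contrast between the two framings (illustrative only;
# the numbers and distributions are placeholders, not a real model).
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_items = 100_000, 20

# "Little 'r'": twenty independent loss items, each with its own frequency and severity.
freq = rng.poisson(lam=0.3, size=(n_sims, n_items))
sev = rng.lognormal(mean=10.0, sigma=1.0, size=(n_sims, n_items))
little_r_total = (freq * sev).sum(axis=1)

# "Big 'R'": the same items, but a shared causal factor (e.g. control environment,
# threat activity) shifts frequency and severity for all items at once.
factor = rng.normal(0.0, 1.0, size=(n_sims, 1))
freq_dep = rng.poisson(lam=0.3 * np.exp(0.7 * factor), size=(n_sims, n_items))
sev_dep = rng.lognormal(mean=10.0 + 0.5 * factor, sigma=1.0, size=(n_sims, n_items))
big_r_total = (freq_dep * sev_dep).sum(axis=1)

for name, total in [("independent items", little_r_total), ("shared factor", big_r_total)]:
    print(f"{name}: mean={total.mean():,.0f}  99th pct={np.percentile(total, 99):,.0f}")
```

Even with identical marginal assumptions for each item, the shared-factor version produces a noticeably heavier aggregate tail, which is the whole point of analyzing risk as an interdependent system.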

This new method isn't my idea alone.  It comes from many smart folks who have been working on Operational Risk for many years, mainly in Financial Services.  For a more complete description of the new approach, I strongly recommend the following tutorial document by the Society of Actuaries: A New Approach for Managing Operational Risk. 

For readability and to keep an already-long post from being even longer, I'm going to talk in broad generalities and skip over many details.  Also, I'm not going to explain and evaluate each of the existing methods.  Finally, I'm not going to argue point-by-point with all the folks who assert that probabilistic risk analysis is futile, worthless, or even harmful.

Saturday, August 24, 2013

First Presentation of "Ten Dimensions..." at BSides-LA

I had fun on Friday presenting the "Ten Dimensions of Cyber Security Performance" at BSides-LA.  This is the first time I presented it in a general forum, so I was looking forward to seeing how it would "fly" and what reactions it would get.

On the plus side, several people were pretty excited and I had some great discussions afterward.  Also, I got most of the presentation done in the available time, but I still have more tuning to do.

On the down side, there weren't as many people in my session as I had hoped.  It was one of the last sessions on the last day, so that probably had an impact.  Or maybe the headline or topic wasn't widely interesting.  But the people who were there were interested and engaged, which is what matters most.

All in all, for a first presentation, I felt it was successful.

Here are the slides.  View in full screen mode to enjoy the animations.

Friday, August 9, 2013

Green Swans: Virtuous Circles, Snowballs, Bandwagons, and the Rich Get Richer

The Green Swan of cumulative prosperity.
The future's so bright she's gotta wear shades.
This is the second in the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again."  This one is named "Green" as an allusion to the outsized success and wealth that often arise through this process, though by no means is it limited to material or economic gains.

Taleb includes the Internet and the Personal Computer among his prime examples of Black Swan events.  In this post I hope to convince you that these phenomena are quite different than his other examples (e.g. what I've labeled "Grey Swans") and that there is value in understanding them separately.

Thursday, August 1, 2013

Grey Swans: Cascades in Large Networks and Highly Optimized/Critically Balanced Systems

A Grey Swan -- almost Black, but not quite. More narrowly defined.
This is the first of the series "Many Shades of Black Swans", following on the introductory post "Think You Know Black Swans? Think Again."

I'll define and describe each one, and maybe give some examples. Most important, each of these Shades will be defined by a mostly-unique set of 1) generating process(es); 2) evidence and beliefs; and 3) methods of reasoning and understanding.  As described in the introductory post, it's only in the interaction of these three that Black Swan phenomena arise. Each post will close with a section called "How To Cope..." that, hopefully, will make it clear why this Many Shades approach is better than the all-lumped-together Black Swan category.

This first one is named "Grey" because it's closest to Taleb's original concept before it got hopelessly expanded and confused.

Tutorial: How Fat-Tailed Probability Distributions Defy Common Sense and How to Handle Them

This post is related to the Grey Swans post, but is a good topic to present on its own.

For random time series, we often ask general questions to learn something about the probability distribution we are dealing with:
  1. What's average?  What's typical?
  2. How much does it vary?  How wide is the "spread"?  Is it "skewed" to one side?
  3. How extreme can the outcomes be?
  4. How good are our estimates, given the sample size?  Do we have enough samples?
If we have a good sized sample of data, common sense tells us that "average" is somewhere in the middle of the sample values and that the "spread" and "extreme" of the sample are about the same as those of the underlying distribution.  Finally, common sense tells us that after we have good estimates, we don't need to gather any more sample data because it won't change our estimates much.

It turns out that these common-sense answers could all be flat wrong, depending on how "fat" the tail of the distribution is.  Now that's surprising!
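Here's a minimal numeric illustration of the kind of surprise I mean (a toy of my own, not data from any real security metric): the running sample mean of a fat-tailed Pareto draw keeps lurching long after a thin-tailed sample has settled down.

```python
# Minimal illustration (toy example): the running sample mean of a fat-tailed
# Pareto draw keeps jumping, unlike the thin-tailed normal case.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

normal_draws = rng.normal(loc=10.0, scale=2.0, size=n)
# Tail index 1.1: the mean exists, but the variance is infinite, so the
# sample mean converges painfully slowly and is dominated by rare huge values.
pareto_draws = (1.0 + rng.pareto(a=1.1, size=n)) * 10.0

for name, x in [("normal", normal_draws), ("fat-tailed Pareto", pareto_draws)]:
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    checkpoints = [10_000, 50_000, 100_000, 200_000]
    means = ", ".join(f"{running_mean[k - 1]:.1f}" for k in checkpoints)
    print(f"{name:18s} running mean at {checkpoints}: {means}")
```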

Tuesday, July 30, 2013

The Cost of a Near-miss Data Breach

[This post originally appeared on the New School of Information Security blog on October 6, 2009.  The idea was incorporated into the breach impact model presented in this paper, where you can find more details:
"How Bad Is It? – A Branching Activity Model to Estimate the Impact of Information Security Breaches"]

If one of your security metrics is Data Breach Cost, what is the cost of a near miss incident? This seemingly simple question gets at the heart of the security metrics problem.

Jerry escapes death, but is it cost-free?
Consider the gleeful Jerry Mouse in this cartoon. Tom the Cat has just missed in his attempt to swat Jerry and turn him into mouse meat. Is there any cost to Jerry for this near miss? Is Jerry’s cost any different than if he were running with Tom nowhere in sight?

By “near miss” I mean a security incident or sequence of incidents that could have resulted in a severe data breach (think TJX or Heartland), but somehow didn’t succeed. Let’s call the specific near-miss event “NM” for short. For the sake of argument, let’s assume that the lack of attack success was due to dumb luck or attacker mistakes, not due to brilliant defenses or detection. Let’s say that you only discover NM long after the events took place. For simplicity let’s assume that discovering NM doesn’t result in any extraordinary costs, meaning that out-of-pocket costs are the same just before and immediately after NM. Finally, assume that your expected cost of a successful large-scale data breach is on the order of tens of millions, with the worst case being hundreds of millions of dollars.

How much does NM cost? The realist answer is “zero”. (Most engineers are realists, by disposition and training.) There is a saying in street basketball that expresses the realist philosophy about losses and associated costs: “No blood, no foul”. If you ask your accountants to pore over the spending and budget reports, they will probably agree. Case closed, right?

Not so fast….
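To hint at why "zero" may be incomplete, here is one possible line of arithmetic -- my own illustration with made-up probabilities, not necessarily where the original argument goes: treat NM as evidence about how exposed you really are, and watch what updating on that evidence does to expected loss.

```python
# Illustrative arithmetic only; all probabilities are assumptions for the sketch.
# Treat the near miss as evidence about how exposed you really are, and see how
# the expected annual breach loss moves once you update on it.
p_exposed_prior = 0.10            # prior belief that defenses are badly exposed
p_nm_if_exposed = 0.50            # chance of observing something like NM if exposed
p_nm_if_not = 0.05                # chance of observing NM anyway if not exposed
breach_loss = 50_000_000          # assumed expected cost of a successful large breach
p_breach_if_exposed, p_breach_if_not = 0.20, 0.01

# Bayes' rule: update the probability of being exposed after seeing NM.
p_nm = p_nm_if_exposed * p_exposed_prior + p_nm_if_not * (1 - p_exposed_prior)
p_exposed_post = p_nm_if_exposed * p_exposed_prior / p_nm

expected_loss_before = breach_loss * (p_breach_if_exposed * p_exposed_prior
                                      + p_breach_if_not * (1 - p_exposed_prior))
expected_loss_after = breach_loss * (p_breach_if_exposed * p_exposed_post
                                     + p_breach_if_not * (1 - p_exposed_post))
print(f"expected annual loss before NM: ${expected_loss_before:,.0f}")
print(f"expected annual loss after NM:  ${expected_loss_after:,.0f}")
```

With these placeholder numbers, expected loss roughly quadruples after seeing NM, which suggests the "zero cost" answer is at least missing the information value of the near miss.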

Monday, July 29, 2013

Think You Understand Black Swans? Think Again.

"Black Swan events" are mentioned frequently in tweets, blog posts, public speeches, news articles, and even academic articles.  It's so widespread you'd think that everyone knew what they were talking about. But I don't.

Coming soon: 23 Shades of Black Swans
I think the "Black Swan event" metaphor is a conceptual mess.

Worse, it has done more harm than good by creating confusion rather than clarity and by serving as a tool for people who unfairly denigrate probabilistic reasoning.  It's also widely misused, especially by pundits and so-called thought leaders to give the illusion that they know what they are talking about on this topic when they really don't.

But rather than just throwing rocks, in future posts I will be presenting better/clearer metaphors and explanations -- perhaps as many as 23 Shades of Black Swan.  Here are the ones I've completed so far:
  1. Grey Swans: Cascades in Large Networks and Highly Optimized/Critically Balanced Systems
  2. Green Swans: Virtuous Circles, Snowballs, Bandwagons, and the Rich Get Richer
  3. Red Swans: Extreme Adversaries, Evolutionary Arms Races, and the Red Queen
  4. Disappearing Swans: Descartes' Demon -- the Ultimate in Diabolical Deception
  5. Out-of-the-Blue Swans: Megatsunami, Supervolcanos, The Black Death, and Other Cataclysms
  6. Orange TRUMPeter Swans: When What You Know Ain't So
  7. The Swan of No-Swan: Ambiguous Signals Tied To Cataclysmic Consequences
  8. Swarm-as-Swan: Surprising Emergent Order or Aggregate Action
  9. Splattered Swan: Collateral Damage, Friendly Fire, and Mis-fired Mega-systems 
In this post, I just want to make clear what is so wrong about the "Black Swan event" metaphor.

Saturday, July 27, 2013

QOTW: Strategy is not predicting the future. Instead, it's about making decisions today to be ready for an uncertain tomorrow - Drucker

This quote is from Peter Drucker, p 125 of his 1974 classic book: Management: Tasks, Practices, Responsibilities (Harper and Row).  Though it talks about strategic planning, the same applies to risk management (emphasis added):
"Strategic planning does not deal with future decisions.  It deals with the futurity of present decisions.  Decisions only exist in the present.  The questions that faces the strategic decision-maker is not what his organization should do tomorrow.  It is, 'What do we have to do today to be ready for an uncertain tomorrow?'  The question is not what will happen in the future.  It is, 'What futurity do we have to build into our present thinking and doing, what time spans do we have to consider, and how do we use this information to make a rational decision now?'"

Friday, July 26, 2013

Visualization Friday: 14 dimensions represented in 2D using MDS, Colors, and Shapes

For the last three years I've been building an Agent-based Model (ABM) of innovation ecosystems to explore how agent value systems and histories mutually influence each other.  The focus to this point has been on Producer-Consumer relationships and the Products they produce and consume.

One of my key challenges has been how to visualize changes in agent value systems as new products are introduced.  Products have surface characteristics defined as a 10-element vector of real numbers between 0 and 1.  Consumers make valuation decisions based on their perception of these 10 dimensions compared to their current "ideal type".  But they realize utility after consuming based on three "hidden" dimensions.  Adding on the dimension of consumption volume, this means I need to somehow visualize 14 dimensions in a 2D dynamic display.

The figures below show my solution.  Products are represented by black squares, while Consumer ideal points are represented by blue dots.  (There are about 200 Consumers in this simulation.)  Products that are not yet introduced are represented by hollow dark red squares.  The 10 dimensions of Product surface characteristics are reduced to 2D coordinates through Multi-Dimensional Scaling (MDS).  Therefore the 2D space is a dimensionless projection where 2D distances between points are roughly proportional to distances in the original 10 dimensions.

The three utility dimensions are represented by colored "spikes" coming off of each Product.  The length of each spike is proportional to the utility offered by that Product on that dimension.

Finally, each Product's share of consumption volume is represented by a dark red circle around it (the black filled square).

These two plots show the same simulation at different points in time, about 300 ticks apart, showing the effect of the introduction of several new products.  What we are looking for is patterns and trajectories of Consumer ideal points (blue dots).

Putting these all together:

  • Dots are close or distant in 2D space according to how they are perceived by Consumers based on surface characteristics.
  • Products that are close to each other in 2D space may or may not have similar utility characteristics (spikes).  This reveals the "ruggedness" of the "landscape", and thus the search difficulty faced by Consumers.
  • The circles around each product allow easy identification of popular vs unpopular products.
Initial Consumer ideal points (blue dots) after 1000 ticks, given 5 initial Products (black filled squares in center), plus one new Product (far left).  Red arrow points to a product with low utility on 3 dimensions.  -- click to see larger image.
Consumer ideal points (blue dots) after 1367 ticks, showing influence of new Product (black dot on far left).  Notice large increase in popularity of product pointed to by red arrow.  Though it has relatively low utility on all three dimensions, it is a "bridge" between products on right side and new (high utility) product on left side. -- click to see larger image.
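For anyone curious about the mechanics, here is a rough sketch of just the projection-and-plotting step, assuming scikit-learn and matplotlib are available. The product vectors and utilities are random placeholders, not output from the actual agent-based model, and the spike directions are an arbitrary choice for the illustration.

```python
# Rough sketch of the projection step only (not the full agent-based model).
# Product surface vectors are random placeholders; scikit-learn's MDS does the
# 10-D -> 2-D reduction, and matplotlib draws squares with utility "spikes".
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
n_products = 8
surface = rng.uniform(0, 1, size=(n_products, 10))   # 10 surface characteristics
utility = rng.uniform(0, 1, size=(n_products, 3))    # 3 hidden utility dimensions

coords = MDS(n_components=2, random_state=0).fit_transform(surface)

fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1], marker="s", c="black")    # products as black squares
spike_angles = [np.pi / 2, np.pi * 7 / 6, np.pi * 11 / 6]         # one direction per utility dim
for (x, y), u in zip(coords, utility):
    for angle, length in zip(spike_angles, u):
        ax.plot([x, x + 0.2 * length * np.cos(angle)],
                [y, y + 0.2 * length * np.sin(angle)])
ax.set_title("Products in MDS space with utility spikes (toy data)")
plt.show()
```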




Thursday, July 25, 2013

Tutorial: How to Value Digital Assets (Web Sites, etc.)

[This originally appeared in the New School of Information Security Blog in two posts, Oct 20 and 23, 2009]

Many security management methods don’t rely on valuing digital assets.  They get by with crude classifications (e.g. “critical”, “important”, etc.).  Moreover, I don’t believe that it’s absolutely necessary to calculate digital asset values to do quantitative risk analysis.  But if you need to do financial justification or economic analysis of security investments or alternative architectures then you might need something more precise and defensible.

This tutorial article presents one method aimed at helping line-of-business managers (“business owners” of digital assets) make economically rational decisions. It’s somewhat simplistic, but it does take some time and effort. Yet it should be feasible for most organizations if you really care about getting good answers.

Warning: No simple spreadsheet formulas will do the job. Resist the temptation to put together magic valuation formulas based on traffic, unique visits, etc.

(This is a long post, so read on if you want the full explanation…)

Wednesday, July 24, 2013

The Bayesian vs Frequentist Debate And Beyond: Empirical Bayes as an Emerging Alliance

This is one of the best articles I've ever seen on the Bayesian vs Frequentist Debate in probability and statistics, including a description of recent developments such as the Bootstrap, a computationally intensive inference process that combines Bayesian and frequentist methods.
Efron, B. (2013). A 250-year argument: Belief, behavior, and the bootstrap. Bulletin of the American Mathematical Society, 50(1): 129-146.
Many disagreements about risk analysis are rooted in differences in philosophy about the nature of probability and associated statistical analysis.  Mostly, the differences center on how to handle sparse prior information, and especially the absence of prior information. "The Bayesian/frequentist controversy centers on the use of Bayes rule in the absence of genuine prior experience."

What's great about this article is that it presents the issue and alternative approaches in a simple, direct way, including very illuminating historical context.  It also presents a very lucid description of the advantages and limitations of the two philosophies and methods.

Finally, it discusses recent developments in the arena of 'empirical Bayes' that combines the best of both methods to address inference problems in the context of Big Data.  In other words, because of Big Data and the associated problems people are trying to solve now, pragmatics matter more than philosophical correctness.  Another example of empirical Bayes is Bayesian Structural Equation Modeling that I referenced in this post.
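For readers who haven't met the bootstrap, here is a bare-bones nonparametric version (my own minimal sketch, not the machinery Efron develops in the paper): resample the data with replacement many times and read the uncertainty of a statistic straight off the resampled values.

```python
# A bare-bones nonparametric bootstrap: estimate the uncertainty of a sample
# median without any distributional assumptions. The data here are simulated.
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=50)   # a small, skewed sample

n_boot = 10_000
medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(n_boot)
])

lo, hi = np.percentile(medians, [2.5, 97.5])
print(f"sample median = {np.median(data):.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```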

Tuesday, July 23, 2013

The Rainforest of Ignorance and Uncertainty

One of the most important books I've ever read is Michael Smithson's Ignorance and Uncertainty.  It gives a tour of many varieties of ignorance and uncertainty and the many strategies that have been developed in different disciplines and professional fields.  Through this tour, it becomes very clear that uncertainty is not a single phenomenon, and not even a couple, but instead is like a rainforest ecosystem of species. (My metaphor, not his.)

One vivid illustration of this is the taxonomy of ignorance and uncertainty.  Here's the original taxonomy by Smithson in Ignorance and Uncertainty:
In 2000, I modified this for a presentation I gave at a workshop at Wharton Business School on Complexity Science in Business.  Here's my taxonomy (2000 version):

Smithson and his colleagues have updated their taxonomy, which is presented as Figure 24.1 in Chapter 24 "The Nature of Uncertainty" in: Smithson, M., & Bammer, G. (2012). Uncertainty and Risk: Multidisciplinary Perspectives. Routledge.   (I can't find an on-line version of the diagram, sorry.) If you are looking for one book on the topic, I'd suggest this one.  It's well edited and presents the concepts and practical implications very clearly.

I don't think there is one definitive taxonomy, or that having a single taxonomy is essential for researchers.  I find them useful in terms of scoping my research, relating it to other research (esp. far from my field), and in selecting modeling and analysis methods that are appropriate.

Of course, there are other taxonomies and categorization schemes, including Knight's distinction between risk (i.e. uncertainty that can be quantified in probabilities) and (true) uncertainty (everything else).  Another categorization you'll see is epistemic uncertainty (i.e. uncertainty in our knowledge) versus aleatory uncertainty (i.e. uncertainty that is intrinsic to reality, regardless of our knowledge of it).  The latter is also known as ontological uncertainty.  But these simple category schemes don't really capture the richness and variety.

The main point of all this is that ignorance and uncertainty come in many and varied species.  To fully embrace them (i.e. model them, analyze them, make inferences about them), you can't subsume them into a couple of categories.

[Edit:  Smithson's blog is here.  Though it hasn't been updated in two years, there's still some good stuff there, such as "Writing about 'Agnotology, Ignorance and Uncertainty'".]

Monday, July 22, 2013

Where are the NLP or Text Mining Tools for Automated Frame Analysis?

I'd like to do Frame Analysis (Goffman 1976, Johnston 1995) on a medium-sized corpus of text (articles, speeches, blog posts and comments) and I'm looking for NLP or text mining tools to help with it.  Strangely, I can't find anything. All the examples of frame analysis in published research (e.g. Nisbet 2009) use purely manual methods or computer-augmented manual analysis.

Frame Analysis requires sophisticated semantic analysis, filtering, situational understanding, and inference on missing text. Near as I can tell, this level of sophistication is beyond the grasp of the common NLP and text mining tools.  Is this true?  If not, do any of you know of fully automated tools for Frame Analysis?

I should add that I have two use cases. First, the most demanding, is automatic identification of frames followed by text classification.  Second, more feasible, is automatic classification of texts given frame definitions and sample texts.  The latter fits the classic machine learning model of supervised learning, so I assume that as long as my training set is large enough and representative enough, I can probably find an adequate ML classification algorithm.
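Here is a minimal sketch of that second use case, assuming scikit-learn; the texts and frame labels are placeholders I made up, and any real corpus would need far more labeled examples than this.

```python
# Minimal sketch of the second use case (supervised classification given frame
# definitions and labeled sample texts). Texts and frame labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "regulation is strangling innovation and jobs",
    "we must protect families from corporate abuse",
    "the science is settled and action is overdue",
]
train_frames = ["economic", "morality", "scientific"]   # hypothetical frame labels

# TF-IDF features plus a simple linear classifier; a real project would need a
# much larger labeled training set and proper evaluation.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_frames)

print(clf.predict(["new rules would cost thousands of jobs"]))   # likely 'economic'
```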

[Edit 7/23/2013:  This is the best summary I could find of available tools: Frame Analysis: Software]

__________

Goffman, E. (1976). Frame Analysis: An Essay on the Organization of Experience. Harvard University Press.
Johnston, Hank (1995). A methodology for frame analysis: from discourse to cognitive schemata. In Social Movements and Culture (pp. 217–246). University of Minnesota Press.
Nisbet, M. C. (2009). Communicating Climate Change: Why Frames Matter for Public Engagement. Environment: Science and Policy for Sustainable Development, 51(2): 12–23.

Call For Speakers: Cybersecurity Innovation Forum Jan 28-30 `14, Baltimore MD. Due 9/3/13

There's a new event that could be very important in promoting innovation in cyber security.
Cybersecurity Innovation Forum, Jan. 28-30, 2014, Baltimore MD
Call for Speakers, submissions due Sept. 3, 2013

There are four tracks, two slanted toward technical solutions and one slanted toward social/organizational solutions, and one mixed (see bold text):
  • Trusted Computing – Trust through device and system integrity 
  • Security Automation – Automate with trust to speed informed decision making 
  • Information Sharing – Openly and confidently share the information we need to share to make informed decisions and enact automated responses 
  • Research – Explore end-state research themes for designed-in security, trustworthy spaces, moving target, and cyber economic incentives.
I'm probably going to submit one or two proposals to the Research track. One might report on "How Bad Is It?" (breach impact estimation) and Ten Dimensions of Cyber Security Performance.  The second might be on the topic of innovative research models to improve industry-academic-government-citizen research collaboration, focusing on metrics, economics, social and organization aspects.

I'd also like to see a proposal from my brothers and sisters from SIRA on the state of the art in risk analysis and opportunities for research collaborations.

It would be great if this conference had good attendance from innovators in academia and industry.  It sure would help their cause if they had a strong cross-sector program committee.  That was one thing the National Cyber Leap Year folks got right.

Sunday, July 21, 2013

Path Dependence in Tools, or Why I Use Mathematica When Everyone Else Uses Python & R

Nearly every job posting I see requires or desires experience in Python or R or both.  They have clearly won the programming language race for data science applications.  Sadly, I'm just getting started with them, because three years ago I made an impulsive decision that leaves me competent and skilled in Mathematica and not in the others.  The easy path is to continue using it on each new project.

Saturday, July 20, 2013

Quote of the Week: Machiavelli on the Enemies of Reform, Including the 'Incredulity of Mankind'

I found this choice quote in: Paquet, G. (1998). Evolutionary cognitive economics. Information Economics and Policy, 10(3): 343–357.  Emphasis added.
"For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries, who have the laws in their favour; and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it." -- Machiavelli, 1537
Full citation: Machiavelli, N., 1952 (orig. 1537) The Prince, New York: Mentor Book, p 49-50.

Let’s Shake Up the Social Sciences

Given that I'm a student in the first-ever Department of Computational Social Science, I strongly agree with Nicholas Christakis in his New York Times article "Let’s Shake Up the Social Sciences".  I especially like the points he makes regarding teaching social science students to do experiments early in their education process.

Friday, July 19, 2013

Visualization Friday: Probability Gradients

I'm fascinated with varieties of uncertainty -- ways of representing it, reasoning about it, and visualizing it.  I was very tickled when I came across this blog post by Alex Krusz on the Velir blog.  He presents a neat improvement over the "box and whiskers" plot for representing uncertainty or variation in data points, which he calls "probability gradients".
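Here is a quick homage to the idea -- my own reading of it, not Krusz's code: instead of a box-and-whiskers glyph, draw a vertical band whose opacity follows the probability density at each value. The group means and spreads below are made up.

```python
# Toy "probability gradient" plot: opacity of each vertical band follows the
# (assumed normal) probability density. Group parameters are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

groups = {"A": (10.0, 1.5), "B": (12.0, 3.0), "C": (9.0, 0.8)}   # (mean, sd) per group

fig, ax = plt.subplots()
y = np.linspace(0, 20, 400)
for i, (label, (mu, sd)) in enumerate(groups.items()):
    density = norm.pdf(y, loc=mu, scale=sd)
    alpha = density / density.max()            # opacity proportional to density
    colors = np.zeros((y.size, 4))
    colors[:, 2] = 1.0                         # blue
    colors[:, 3] = alpha
    ax.scatter(np.full(y.size, i), y, c=colors, marker="_", s=400)
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups.keys()))
ax.set_ylabel("value")
ax.set_title("Probability gradients (toy data)")
plt.show()
```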




Guest on the Risk Science Podcast

On Episode 3 of the Risk Science Podcast, I had a nice conversation with my friends Jay Jacobs and Ally Miller.  The topics included the balance to strike in simplifying complexity, the need to get more industry people involved in the WEIS conference (as participants and presenters), writing winning movie plots about cyber war, and the learning curve for R.

In case you don't recognize it, the Risk Science Podcast (@Risksci on twitter) is new and improved.  Previous iterations were the Risk Hose Podcast, and before that, the SIRA Podcast.

Wednesday, July 17, 2013

On the performance value of "cyber hygiene"

One idea that keeps coming up in the NIST Cyber Security Framework process is that we should be collectively promoting good "cyber hygiene" -- common things that everyone should be doing by habit to protect themselves on-line.  Analogies are made to personal health and hygiene and also personal safety (auto seat belts).  Vinton Cerf claims to have coined the term.  It is being widely promoted in cyber security awareness programs, including by outgoing DHS Secretary Janet Napolitano at this public event.  There are non-profit organizations focused on it, e.g. Stay Safe Online and Stop, Think, Connect. There's even a certificate in cyber hygiene offered.  These are often oriented at consumers and individuals, but the same ideas are being promoted for organizations, including those in critical infrastructure industries.
A real "cyber hygiene" promotion poster.  Let's all be smart chipmunks!
While most people seem to believe that it is possible to define "good cyber hygiene" and also worthwhile to promote it, not everyone agrees.  One commentator believes it puts too much burden on individuals and distracts us from the institutional and systematic forces that create or perpetuate the risks in the first place.

Of course, I have to try to answer these questions: where does "cyber hygiene" fit in to the proposed Ten Dimensions of Cyber Security Performance?  Can we define "good hygiene practices" in each dimension that serve as the common baseline for all organizations, as a minimum acceptable performance level, as a common entry level at the lowest level of maturity, or similar?

In my opinion, it is possible to define a common set of "cyber hygiene" practices for most individuals and most organizations.  They are good.  Do them.  But don't think you are achieving adequate or even minimum acceptable cyber security performance in an organization by simply implementing good "cyber hygiene".  At best, "cyber hygiene" is a set of practices that helps your organization be "anti-stupid".

Tuesday, July 16, 2013

How simple can we make it?

In the Q&A session of the first day of the 3rd NIST Cybersecurity Framework (CSF) workshop, someone asked if there was a way to simplify the proposed five functional categories in the Core.  Basically, he was saying that he needed to persuade non-specialists, especially executives, and that the five functional categories plus subcategories was too complicated. (full question and answer is on the video at 1:18:00).  When I heard that, I nearly sprayed coffee all over my keyboard.  "You want it even SIMPLER??" I yelled out (to my screen).

I immediately thought of this: cyber security in one dimension using the Grissom Dimension, which is named after Astronaut Gus Grissom.  Grissom gave a speech in 1959 to the workers at the Convair plant where the Atlas rocket booster was being built.  The entire speech: "Do good work." (as remembered by a worker)  Yes, we could reduce all of cyber security to the Grissom Dimension, and then it would be simple, right?

I'm a bit sensitive to this because I know many people will say my Ten Dimensions are too complicated.  I wonder myself if it is too complicated and I'm certainly interested in ways to simplify it.  Parsimony is good.  Occam's razor keeps our models clean.

Nice example of "Execution & Operations"

Ally Miller wrote a nice post called "Quant Ops" describing the relationship between DevOps and risk, which fits nicely into Dimension 5. Effective/Efficient Execution & Operations.  Best of all, she works through an example of how it works in practice.  Give it a read.

Monday, July 15, 2013

Why economics and management PhD dissertations are collections of papers

I'm back in California after two wonderful weeks at Summer School in Trento, Italy.  I learned many things, both great and small.  One of them was resolving the mystery of how PhD dissertations in economics and management are organized.  In engineering, computer science, and sociology, dissertations are generally organized as a report on a single project -- a cohesive whole.  In contrast, many or most dissertations in economics and management are collections of "essays", usually numbering three.  For a long time, I couldn't figure out why.  Now I know.

The driving force is the hiring criteria in the academic job market in economics and management.  I was told by a professor that, to even get an interview at a top university, an applicant had to have one or more publications in a high ranking journal, and that having many conference publications or a top-quality dissertation were not enough.  Therefore, from the start of the dissertation process, candidates are guided toward journal publication, even at the expense of fully researching the thesis as is done in engineering, computer science, and sociology.

Another reason, I suspect, is a shortage of post-doc positions in economics and management, particularly the latter.  I don't have any data to back this up, so maybe I'm wrong, but it seems that PhDs in management are pushed into the job market for tenure-track positions immediately after completing their dissertations.  If this is so, then it would explain why they would be structuring their dissertation as a collection of three "publishable units".

Thursday, July 11, 2013

Communicating about cyber security using visual metaphors

For a workshop with non-computer people, I needed a simple visual metaphor to communicate how messy and complicated information security can be (and, by extension, cyber security).  This is what I came up with.  Seems to get across the main point on a visceral level. Enjoy.




Monday, July 8, 2013

Recommendations for NIST Cyber Security Framework (CSF) Workshop 3

Here is my input to the NIST CSF process prior to Workshop 3 (July 10-12 in San Diego).
________________________________________________________________________
Dear workshop attendees:

I hope that you find the following recommendations to be helpful.  Wish I could be there!

Russ
________________________________________________________________________

Recommendations:
  1. In my judgement, the five "Cyber Security Functions" described in the July 1 draft are inadequate to support agile and continuously innovative cyber security.  As detailed in this analysis, the five functional categories have serious deficiencies:
    • "Know" is too broad and too vague
    • "Respond" and "Recover" are too narrow and could be combined
    • "Detect" does not adequately cover of Threat Intelligence
    • Missing:
      • Design & Development
      • Resilience
      • Execution & Operations
      • External Engagement
      • Agility & Learning
      • Total Cost of Risk
      • Responsibility & Accountability
  2. Rather than using functional categories which are nothing more than "buckets of content", it would be better to organize the framework around performance dimensions.  This will help make the framework more coherent and better justified.
  3. I recommend organizing the framework according to the Ten Dimensions of Cyber Security Performance (slides), which are explained individually in the linked posts:
    1. Optimize Exposure
    2. Effective Threat Intelligence
    3. Effective Design & Development
    4. Quality of Protection & Controls
    5. Effective/Efficient Execution & Operations 
    6. Effective Response, Recovery, & Resilience
    7. Effective External Engagement
    8. Effective Learning & Agility
    9. Optimize Total Cost of Risk
    10. Responsibility & Accountability
  4. I recommend that the framework explicitly support Double Loop Learning, which is described in these two posts:
  5. I recommend that pilot projects be started right away to design and test inference methods for estimating cyber security performance, as sketched in this post.

The key to measuring cyber security: inferences, not calculations

(It's getting very late here, so my apologies for any slips or gaps in communication.)

In many of the previous posts on the Ten Dimensions of Cyber Security Performance, I've hinted or suggested that these could be measured as a performance index.  But I'm sure many readers have been frustrated because I haven't spelled out any details or given examples.  Still other readers will be skeptical that this can be done at all.

Sorry about that.  There's only so much I can put in each post without them becoming book chapters. In this post, I'll describe an approach and general method that I believe will work for all or nearly all the performance dimensions.  At the time of writing, this is in the idea stage, and thus it needs to be tested and proven in practice.  Still, I suggest it's worthy of consideration.

[Update 7/19/2013:  After some searching in Google Scholar, I discovered that the method I'm suggesting below is called Bayesian Structural Equation Modeling.  I'm very glad to see that it is an established method that has both substantial theory and software tool support.  I hope to start exploring it in the near future.  I'll post my results as I go.]
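As a deliberately simplified illustration of the inference idea (plain numpy rather than a full Bayesian SEM package, with made-up loadings and noise levels): several noisy observable indicators are assumed to load on one latent performance factor, and the latent score is inferred from them rather than calculated from any single metric.

```python
# Deliberately simplified sketch (plain numpy, not a full Bayesian SEM):
# noisy observable indicators load on one latent "performance" factor, and we
# infer the latent score instead of calculating it directly from any one metric.
import numpy as np

rng = np.random.default_rng(11)
n_orgs = 5
loadings = np.array([0.9, 0.7, 0.5])      # how strongly each indicator reflects performance (assumed)
noise_sd = np.array([0.3, 0.5, 0.8])      # measurement noise per indicator (assumed)

latent = rng.normal(0.0, 1.0, size=n_orgs)   # "true" performance, unknown in practice
indicators = loadings * latent[:, None] + rng.normal(0.0, noise_sd, size=(n_orgs, 3))

# Posterior mean of the latent factor given the indicators (standard normal prior,
# known loadings and noise variances) -- the linear-Gaussian conjugate result.
precision = 1.0 + np.sum((loadings / noise_sd) ** 2)
posterior_mean = (indicators @ (loadings / noise_sd**2)) / precision

for true, est in zip(latent, posterior_mean):
    print(f"true latent = {true:+.2f}   inferred = {est:+.2f}")
```

A real implementation would also estimate the loadings and noise levels from data instead of assuming them, which is exactly what Bayesian SEM tooling is for.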

Here are my ideas on how to measure performance in each dimension.

NIST's "Cyber Security Functions" compared to the Ten Dimensions

On July 1, NIST posted a draft outline of the CSF.  It proposed five "cyber security functions" to serve as organizing categories for the framework.  Quoting from the draft:
  • "Know – Gaining the institutional understanding to identify what systems need to be protected, assess priority in light of organizational mission, and manage processes to achieve cost effective risk management goals."
  • "Prevent – Categories of management, technical, and operational activities that enable the organization to decide on the appropriate outcome-based actions to ensure adequate protection against threats to business systems that support critical infrastructure components."
  • "Detect –Activities that identify (through ongoing monitoring or other means of observation) the presence of undesirable cyber risk events, and the processes to assess the potential impact of those events."
  • "Respond – Specific risk management decisions and activities enacted based upon previously implemented planning (from the Prevent function) relative to estimated impact."
  • "Recover - Categories of management, technical, and operational activities that restore services that have previously been impaired through an undesirable cybersecurity risk event."
There are several important differences between these five categories and my proposed Ten Dimensions of Cyber Security Performance.  First, NIST is proposing categories of activities and functions to serve as buckets of content.  There's no formal relationship between the categories, at least not stated explicitly.  Second, the NIST categories only partially and imperfectly cover the space of the Ten Dimensions, as shown in this matrix (click to enlarge):


If you believe in the scope and organization of the Ten Dimensions, then the deficiencies of the NIST functional categories become apparent in the comparison:

  1. "Know" category is scoped too broadly. It is overloaded and contains too many performance dimensions.  I list five question marks (?) in the matrix because I can't tell if these would be included in "Know" or not.
  2. "Respond" and "Recover" categories map to a single performance dimension, implying that they are probably scoped too narrowly.
  3. A glaring omission is lack of coverage for Resilience, which is vital for critical infrastructure.
  4. Also there's no coverage of dimension 5. Effective/Efficient Execution & Operations, and probably no coverage of five other dimensions: 3. Effective Design & Development; 7. Effective External Engagement; 8. Effective Agility & Learning; 9. Optimize Total Cost of Risk; and 10. Responsibility & Accountability.
Thus, the NIST functional categories put too much attention in one or two areas and not enough in many others.  Most seriously, there is no coverage of the second loop of the Double Loop Learning model, which implies that the NIST functional categories are inadequate to support agile and continuously innovative cyber security.

Agile Cyber Security and Double Loop Learning

In this post, I want to summarize dimensions 7 through 10, focusing on their interactions and relationships and how they deliver Double Loop Learning. (See this post for the full list of dimensions.)

Together, dimensions 7 through 10 provide the "dynamic capabilities" of an organization to achieve agility and rapid innovation in the face of constant changes in the landscape.  I mentioned this specifically in the context of dimension 8. Effective Agility & Learning, but the notion of dynamic capabilities extends to the subsystem comprising dimensions 7 through 10 as well.


Dimension 10: Responsibility & Accountability

This is the tenth and last post defining each of the Ten Dimensions of Cyber Security Performance.  It's also the capstone of all the performance dimensions, tying them together from the perspective of leadership and management.

The dimension of Responsibility & Accountability includes all processes that link the decision-makers in an organization (at all levels) with the stakeholders of the organization who are affected by cyber security outcomes, including:

  • The Board of Directors
  • Shareholders
  • Customers
  • Employees (as individuals)
  • Suppliers, distributors
  • Outsource partners
  • Regulators
  • Legal authorities
  • (others)
This dimension includes most of the processes of governance and compliance, at least the interfaces between organization units and the external interfaces.  But I chose not to call it "Governance & Compliance" because those are both formally codified processes and I felt it was important to include some of the less formal and tacit aspects.  This is especially important if we want to encourage widespread acceptance of responsibility and accountability beyond the core executives.  In addition, I felt the title "Responsibility & Accountability" focuses attention on performance, not just activity or formal structures.