
Sunday, September 8, 2013

Mr. Langner is wrong. Risk management isn't 'bound to fail'. But it does need improvement and innovation.

In "Bound to Fail: Why Cyber Security Risk Cannot Simply Be 'Managed' Away" (Feb 2013) and a recent white paper, Ralph Langer argues that risk management is a fundamentally flawed approach to cyber security, especially for critical infrastructure.

Langner's views have persuaded some people and received attention in the media.  He gained some fame during the investigation of the Stuxnet worm's capabilities to exploit Siemens PLCs (programmable logic controllers). Specifically, Langner was the first to assert that the Stuxnet worm was a precision weapon aimed at sabotaging Iran's nuclear program. Langner also gains institutional credibility as a Nonresident Fellow at the Brookings Institution, which published the "Bound to Fail..." paper.  I'm guessing that the Brookings PR department has been helping to get press attention for Langner's blog post critiquing the NIST CSF and his proposed alternative, RIPE.  They were reported in seven on-line publications last week alone: here, here, here, here, here, here, and here.  (Note to self: get a publicist.)

In this long post, I'm going to critique Mr. Langner's critique of risk management, pointing to a few places where I agree with him, but presenting counter-arguments to his central claim that risk management is fundamentally flawed.

  • TL;DR version: There's plenty of innovation potential in the modern approach to risk management that Langner hasn't considered or doesn't know about. Therefore, "bound to fail" is false.  Instead, things are just now getting interesting.  Invest more, not less.

In the next post, I'll critique Mr. Langner's proposed alternative for an industrial control system security framework, which he dubs "Robust ICS Planning and Evaluation" (RIPE).

Wednesday, July 24, 2013

The Bayesian vs Frequentist Debate And Beyond: Empirical Bayes as an Emerging Alliance

This is one of the best articles I've ever seen on the Bayesian vs Frequentist Debate in probability and statistics, including a description of recent developments such as the Bootstrap, a computationally intensive inference process that combines Bayesian and frequentist methods.
Efron, B. (2013). A 250-year argument: Belief, behavior, and the bootstrap. Bulletin of the American Mathematical Society, 50(1): 129-146.
Many disagreements about risk analysis are rooted in differences in philosophy about the nature of probability and associated statistical analysis.  Mostly, the differences center on how to handle sparse prior information, and especially the absence of prior information.  As Efron puts it: "The Bayesian/frequentist controversy centers on the use of Bayes rule in the absence of genuine prior experience."
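To make the controversy concrete, here's a minimal sketch (my illustration, not from Efron's article) running both schools on the same made-up data: 7 successes in 10 trials, with no genuine prior experience to draw on. The Beta(1, 1) prior is exactly the kind of modeling choice the controversy is about.

```python
# A minimal sketch (made-up data, not from Efron's article): the two schools
# analyzing 7 successes in 10 trials with no genuine prior experience.
from scipy import stats

successes, trials = 7, 10

# Frequentist ("behavior"): maximum likelihood plus an exact 95% confidence
# interval. No prior enters; the guarantee is about long-run coverage.
p_hat = successes / trials
ci = stats.binomtest(successes, trials).proportion_ci(confidence_level=0.95)

# Bayesian ("belief"): Bayes rule with a uniform Beta(1, 1) prior.
# With no genuine prior experience, that prior is a modeling choice, not data.
posterior = stats.beta(1 + successes, 1 + trials - successes)
lo, hi = posterior.interval(0.95)

print(f"frequentist: p_hat = {p_hat:.2f}, 95% CI = ({ci.low:.2f}, {ci.high:.2f})")
print(f"Bayesian:    posterior mean = {posterior.mean():.2f}, "
      f"95% credible interval = ({lo:.2f}, {hi:.2f})")
```

The numbers come out similar here; the fight is over what they mean, and over what licenses the prior.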

What's great about this article is that it presents the issue and alternative approaches in a simple, direct way, including very illuminating historical context.  It also presents a very lucid description of the advantages and limitations of the two philosophies and methods.
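Since the bootstrap is the article's centerpiece, here's a minimal sketch of Efron's nonparametric bootstrap (the sample and B = 2000 are made up for illustration): resample the data with replacement, recompute the statistic each time, and read the standard error and interval off the replicates, with no parametric model and no prior.

```python
# A minimal sketch of the nonparametric bootstrap; data are made up.
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([2.1, 3.4, 1.8, 5.0, 2.9, 4.2, 3.1, 2.5])

# Resample with replacement B times, recomputing the statistic each time;
# the spread of the replicates estimates the standard error.
B = 2000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(B)
])

print(f"sample mean             = {sample.mean():.2f}")
print(f"bootstrap std. error    = {boot_means.std(ddof=1):.2f}")
print(f"95% percentile interval = ({np.percentile(boot_means, 2.5):.2f}, "
      f"{np.percentile(boot_means, 97.5):.2f})")
```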

Finally, it discusses recent developments in the arena of 'empirical Bayes', which combines the best of both methods to address inference problems in the context of Big Data.  In other words, because of Big Data and the associated problems people are trying to solve now, pragmatics matter more than philosophical correctness.  Another example of empirical Bayes is the Bayesian Structural Equation Modeling which I referenced in this post.
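Here's a minimal empirical-Bayes sketch (entirely made-up data): given many small binomial experiments, say incident counts across fifty sites, the Beta prior isn't chosen by belief. Its parameters are estimated from the ensemble itself (a frequentist step), and then Bayes rule shrinks each site's raw rate toward the ensemble mean (a Bayesian step).

```python
# A minimal empirical-Bayes sketch; all data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 50
trials = rng.integers(10, 40, size=n_sites)   # hypothetical exposure per site
true_p = rng.beta(2.0, 8.0, size=n_sites)     # unknown per-site incident rates
hits = rng.binomial(trials, true_p)           # observed incidents

# "Empirical" step (frequentist): estimate the Beta(a, b) prior from the
# ensemble of observed proportions by the method of moments.
p = hits / trials
m, v = p.mean(), p.var(ddof=1)
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# "Bayes" step: the posterior mean shrinks each raw proportion toward the
# ensemble mean, borrowing strength across sites.
shrunk = (a + hits) / (a + b + trials)

print(f"estimated prior: Beta({a:.2f}, {b:.2f})")
print(f"raw    rates, first five: {np.round(p[:5], 2)}")
print(f"shrunk rates, first five: {np.round(shrunk[:5], 2)}")
```

This is the pragmatic alliance in miniature: the ensemble plays the role that missing genuine prior experience would have played.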

Tuesday, July 23, 2013

The Rainforest of Ignorance and Uncertainty

One of the most important books I've ever read is Michael Smithson's Ignorance and Uncertainty.  It gives a tour of the many varieties of ignorance and uncertainty and the many strategies for coping with them that have been developed in different disciplines and professional fields.  Through this tour, it becomes very clear that uncertainty is not a single phenomenon, and not even a couple, but instead is like a rainforest ecosystem of species. (My metaphor, not his.)

One vivid illustration of this is the taxonomy of ignorance and uncertainty.  Here's the original taxonomy by Smithson in Ignorance and Uncertainty:

[figure: Smithson's original taxonomy of ignorance and uncertainty]

In 2000, I modified this for a presentation I gave at a workshop at the Wharton Business School on Complexity Science in Business.  Here's my taxonomy (2000 version):

[figure: my modified taxonomy, 2000 version]

Smithson and his colleagues have updated their taxonomy, which is presented as Figure 24.1 in Chapter 24 "The Nature of Uncertainty" in: Smithson, M., & Bammer, G. (2012). Uncertainty and Risk: Multidisciplinary Perspectives. Routledge.   (I can't find an on-line version of the diagram, sorry.) If you are looking for one book on the topic, I'd suggest this one.  It's well edited and presents the concepts and practical implications very clearly.

I don't think there is one definitive taxonomy, or that having a single taxonomy is essential for researchers.  I find them useful in terms of scoping my research, relating it to other research (esp. far from my field), and in selecting modeling and analysis methods that are appropriate.

Of course, there are other taxonomies and categorization schemes, including Knight's distinction between risk (i.e. uncertainty that can be quantified in probabilities) and (true) uncertainty (everything else).  Another categorization you'll see is epistemic uncertainty (i.e. uncertainty in our knowledge) versus aleatory uncertainty (i.e. uncertainty that is intrinsic to reality, regardless of our knowledge of it).  The latter is also known as ontological uncertainty.  But these simple category schemes don't really capture the richness and variety.
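To make the epistemic/aleatory split concrete, here's a minimal two-level Monte Carlo sketch (my own illustration, with made-up numbers): the outer layer samples what we don't know about an incident rate (epistemic), and the inner layer samples chance outcomes given each candidate rate (aleatory). Pool the two layers into one histogram and the two species become indistinguishable.

```python
# A minimal sketch (made-up numbers): epistemic uncertainty lives in our
# knowledge of the daily incident rate; aleatory uncertainty remains in the
# outcomes even when a rate is fixed.
import numpy as np

rng = np.random.default_rng(42)
n_beliefs, n_days = 10_000, 365

# Epistemic layer: we only know the rate lies somewhere in 0.01-0.05 per day.
rates = rng.uniform(0.01, 0.05, size=n_beliefs)

# Aleatory layer: given each candidate rate, annual counts still vary by chance.
counts = rng.binomial(n_days, rates)

# Keeping the layers separate shows how much spread each one contributes;
# pooling them into one histogram would erase the distinction.
epistemic_spread = (n_days * rates).std()   # spread of expected annual counts
total_spread = counts.std()                 # spread from both layers combined
print(f"spread from epistemic layer alone: {epistemic_spread:.1f}")
print(f"total spread (both layers):        {total_spread:.1f}")
```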

The main point of all this is that ignorance and uncertainty come in many and varied species.  To fully embrace them (i.e. model them, analyze them, make inferences about them), you can't subsume them into a couple of categories.

[Edit:  Smithson's blog is here.  Though it hasn't been updated in two years, there's still some good stuff there, such as "Writing about 'Agnotology, Ignorance and Uncertainty'".]

Tuesday, July 16, 2013

How simple can we make it?

In the Q&A session of the first day of the 3rd NIST Cybersecurity Framework (CSF) workshop, someone asked if there was a way to simplify the proposed five functional categories in the Core.  Basically, he was saying that he needed to persuade non-specialists, especially executives, and that the five functional categories plus subcategories was too complicated. (full question and answer is on the video at 1:18:00).  When I heard that, I nearly sprayed coffee all over my keyboard.  "You want it even SIMPLER??" I yelled out (to my screen).

I immediately thought of this: cyber security in one dimension, using the Grissom Dimension, named after astronaut Gus Grissom.  Grissom gave a speech in 1959 to the workers at the Convair plant where the Atlas rocket booster was being built.  The entire speech, as remembered by one worker: "Do good work."  Yes, we could reduce all of cyber security to the Grissom Dimension, and then it would be simple, right?

I'm a bit sensitive to this because I know many people will say my Ten Dimensions are too complicated.  I wonder myself if it is too complicated and I'm certainly interested in ways to simplify it.  Parsimony is good.  Occam's razor keeps our models clean.

Wednesday, June 26, 2013

Good cyber security is not just a pile of "best practices"

Recently, some folks involved in the NIST Cyber Security Framework process have suggested that the challenge is analogous to "safety" and thus a similar compilation of "best practices" is what we need.  The thinking goes like this: If we just compile the "best practices" and then give everyone incentives to implement them, all will be good (or at least much better).  Taking the health/safety analogy further, they say that we need to promote "cyber hygiene".

But cyber security is not like safety.  It would be a grave mistake to treat them as the same, or even similar.

Tuesday, June 25, 2013

"Cyber security" is a superset of "information security", not a synonym

Over at the Security Sceptic blog, Dave Piscitello has a post titled, "Stop Saying Cybersecurity When You Mean Infosec (and vice-versa)" where he makes a good case for not using "cyber security" and "information security" interchangeably.
"There is perhaps no term more overhyped, overused, overloaded and misunderstood in infosec and politics today than cybersecurity. Infosec and cybersecurity are often used interchangeably..."
Many InfoSec pros bash the use of the qualifying term "cyber" and consider it a sign of incompetence on the part of the speaker or writer.  They also see it as a sign that the field is being overrun by Beltway policy types, military types, and lawyers who really know nothing about it.

Rather than try to banish it, I agree with Dave that it should be used to mean a superset of information security, and not used as a synonym.  If enough people use it that way, it might catch on.

Dave suggests this distinction:
"Label as infosec activities that seek to fix actual security defects (i.e., cure, manage or improve health). This would include categories like secure code development, best practices and technology to identify or mitigage suboptimal (vulnerable) configuration, SIEM, identity and data/privacy protection. Label as cybersecurity activities that are offensive, reliatory or surveillance (military intelligence)."
This is OK, but I suggest a broader definition:

  • "Cyber security" -- the confluence of information security, industrial control security, privacy, identity, and digital rights, along with civil liberties and national/homeland security in the digital domain.

What do you think?   If someone can come up with a better umbrella term, I'm all for it.

(Edit 6/26/13: added "identity" to the definition.  It's a key integrating thread. Also added "industrial control security".)