Tuesday, February 25, 2014

Quick links to "Ten Dimensions" resources for #RSAC folks

This post is aimed at folks attending my RSA Conference talk on Wednesday, but could be useful for anyone who wants to catch up on the topics.

My talk is at 10:40am - 11:00am in Moscone West, Room: 2020.  Immediately after the talk, I'll be moving to the "Continuing the Conversation" space in the 2nd floor lobby of Moscone West.  I'll be wearing a black EFF hat, in case you want to pick me out of a crowd.

This is a 20-minute talk, so it will only be an introduction to the topics.  My main goal is to stimulate your interest in learning more and to dig into these resources:
Not directly related to the above, but here are the slides for the talk I gave Monday at BSides-SF:
If we don't connect at the conference for some reason, feel free to email me at russell ♁ thomas ❂ meritology ♁ com.  (Earth = dot; Sun = at)

And if you've come this far and you aren't following me on twitter -- @MrMeritology -- what's wrong with you?  Follow, already! ☺

How to aggregate ground-truth metrics into a performance index

My remix of a painting by William Blake,
with the Meritology logo added. Get it?
He's shedding light on an impossible shape.
(Click to enlarge)
The general problem is this:
How can we measure aggregate performance on an interval or ratio scale index when we have a hodge-podge of ground-truth metrics with varying precision, relevance, and reliability, and that are incommensurate with each other?
Here's a specific example from the Ten Dimensions:
How can we measure overall Quality of Protection & Controls if our ground-truth metrics include false positive percentages, false negative percentages, numbers of exceptions, various "high-medium-low" ratings, audit results, coverage percentages, and a bunch more?
I've been wrestling with this problem for a long time, both in information security and elsewhere.  So have a lot of other people.  A while back I had an insight that the solution may be to treat it as an inference problem, not a calculation problem (described in this post).  But I didn't work out the method at that time.  Now I have.

In this blog post, I'm introducing a new method.  At least I think it's new because, after much searching, I haven't been able to find any previously published papers. (If you know of any, please contact me or comment to this post.)

The new method is innovative, but I don't think it's much more complicated or mathematically sophisticated than the usual methods (weighted average, etc.).  It does, however, take a change in how you think about metrics, evidence, and aggregate performance.  Even though all the examples below are related to information security, the method is completely general.  It can apply to IT, manufacturing, marketing, R&D, governments, non-profits... any organizational setting where you need to estimate aggregate performance from a collection of disparate ground-truth metrics.
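To make the contrast concrete, here is a minimal sketch of the usual weighted-average approach the new method improves on.  All metric names, values, normalization bounds, and weights are hypothetical, invented purely for illustration:

```python
# Sketch of the conventional weighted-average index (NOT the new
# inference-based method).  Every metric, bound, and weight below
# is a hypothetical placeholder.

def normalize(value, worst, best):
    """Map a raw metric onto a 0-1 scale, where 1 = best performance.

    Handles metrics where lower is better (worst > best) as well as
    metrics where higher is better (best > worst).
    """
    return (value - worst) / (best - worst)

# Hypothetical ground-truth metrics, each with its own scale and direction.
metrics = {
    "false_positive_pct": {"value": 12.0, "worst": 100.0, "best": 0.0,   "weight": 0.3},
    "coverage_pct":       {"value": 85.0, "worst": 0.0,   "best": 100.0, "weight": 0.4},
    "open_exceptions":    {"value": 7,    "worst": 50,    "best": 0,     "weight": 0.3},
}

index = sum(m["weight"] * normalize(m["value"], m["worst"], m["best"])
            for m in metrics.values())

print(f"Aggregate performance index: {index:.3f}")
```

Notice that the "worst"/"best" bounds and the weights are arbitrary choices, and the calculation treats every metric as equally reliable and relevant once normalized.  That arbitrariness is exactly why I think aggregation is better framed as an inference problem than a calculation problem.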

This post is a tutorial and is as non-technical as I can make it.  As such, it is on the long side, but I hope you find it useful.  A later post will take up the technicalities and theoretical issues.  (See here for Creative Commons licensing terms.)

Monday, February 24, 2014

#BSidesSF Prezo: Getting a Grip on Unexpected Consequences

Here are the slides I'm presenting today at B-Sides San Francisco (4pm).  I suggest that you download the PPTX, since the slides are best viewed in PowerPoint, where you can read the stories in the speaker notes.

Friday, February 21, 2014

Does a model and its data ever speak for themselves? No -- A reply to Turchin

This post is the first of a series replying to Dr. Peter Turchin regarding his PNAS article (full text PDF -- free, thanks to Turchin & team), my letter to PNAS, and his PNAS letter reply.  I wrote a blog post here because I didn't think that Dr. Turchin's reply addressed my questions, due to a misunderstanding, and I invited Dr. Turchin to engage in a colloquy via blog posts.  I'm happy to say that Dr. Turchin wrote three blog posts (here, here, and here) in reply to my post, and this is my first reply.

While this post talks about interpreting simulation results, the general topic of data interpretation applies to all empirical research, and even data analysis in industry.  

Monday, February 17, 2014

Two new #InfoSec books that could transform your way of thinking

Happiness is having great colleagues and collaborators.  I'm very happy to recommend to you two new books by three of my favorite colleagues -- Jay Jacobs (@jayjacobs), Bob Rudis (@hrbrmstr), and Adam Shostack (@adamshostack).  These books not only do a great job covering the topics, they could also transform your way of thinking.

Friday, February 14, 2014

What analysis do we really need to guide vulnerability management?

This is the first of a series of posts on the topic of doing quantitative risk analysis in the face of intelligent and adaptive adversaries.  Later posts will dig into research topics like combining risk analysis with game theory, but this first post is mostly a reaction to what other people have said recently.

Rafał Łoś recently posted an article, and then followed with a guest post from Heath Nieddu, with this general theme (paraphrasing and condensing):
It's senseless and distracting to attempt to use quantitative risk analysis to make decisions about vulnerability remediation, and even for information security as a whole.  Uncertainties about the future are too great; adversaries too agile and intelligent; and the whole quant risk endeavor is too complicated.  Keep it simple and stick with what you know for sure, especially the basics.
In this post I'm going to address some of the issues and questions this skeptical view raises, but I won't attempt a point-by-point counter-argument.  For the record, there are many points I disagree with, plus many ideas that I think are confused or just misstated.  But I think the discussion will be best served by keeping focused on the main issues.

I'm also appearing on Rafał's podcast, Down the Rabbit Hole, along with some other SIRA members.  I'll let you know when it is posted for listening.