Monday, October 31, 2016

The Cyber Insurance Emperor Has No Clothes


(Of course, the title is hyperbole and attention-seeking. Now that you are here, I hope you'll keep reading.)

In the Hans Christian Andersen story, The Emperor's New Clothes, the collective delusion about the Emperor's grand clothes was punctured by a young child who cried out: "But he has got nothing on!"

I don't mean that cyber insurance has no value or that it is a charade.

My main point: cyber insurance has the wrong clothes for the purposes and social value to which it aspires.

This blog post sketches the argument and evidence. I will be following up separately with more detailed and rigorous analysis (via computational modeling) that, I hope, will be publishable.

tl;dr: (switching metaphors)
As a driving force for better cyber risk management, today's cyber insurance is about as effective as eating soup with a fork.
(This is a long post. If you want to "cut to the chase", skip to the "Cyber Insurance is a Functional Misfit" section.)

Wednesday, October 19, 2016

Orange TRUMPeter Swans: When What You Know Ain't So

Was Donald J. Trump's political rise in 2015-2016 a "black swan" event?  "Yes" is the answer asserted by Jack Shafer in this Politico article. "No" is the answer from other writers, including David Atkins in this article on the Washington Monthly Political Animal blog.

My answer is "Yes", but not in the same way that other events are Black Swans. Orange Swans like the Trump phenomenon fit this aphorism:
"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- attributed to Mark Twain
In other words, the signature characteristic of Orange Swans is delusion.

Rethinking "Black Swans"

As I mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is the sixth in the series "Think You Understand Black Swans? Think Again".)

It doesn't make sense to label any set of events as "Black Swans". It is not the events themselves that make them unexpected and surprising; it is the processes behind them: their generating mechanisms, our evidence about them, and our methods of reasoning.

Tuesday, June 21, 2016

Public Statement to the Commission on Enhancing National Cybersecurity, 6-21-2016

[Submitted in writing at this meeting. An informal 5 min. version was presented during the public comment period. This statement is my own and does not represent the views or interests of my employer.]

Summary

Cyber security desperately needs institutional innovation, especially involving incentives and metrics.  Nearly every report since 2003 has included recommendations to do more R&D on incentives and metrics, but progress has been slow and inadequate.

Why?

Because we have the wrong model for research and development (R&D) on institutions.

My primary recommendation is that the Commission’s report should promote new R&D models for institutional innovation.  We can learn from examples in other fields, including sustainability, public health, financial services, and energy.

What are Institutions and Institutional Innovation?

Institutions are norms, rules, and social structures that enable society to function. Examples include marriage, consumer credit reporting and scoring, and emissions credit markets.

Cyber security[1] has institutions today, but many are inadequate, dysfunctional, or missing.  Examples:
  1. overlapping “checklists + audits”; 
  2. professional certifications; 
  3. post-breach protection for consumers (e.g. credit monitoring); 
  4. lists of “best practices” that have never been tested or validated as “best” and therefore are no better than folklore.  

There is plenty of talk about “standards”,  “information sharing”, “public-private partnerships”, and “trusted third parties”, but these remain mostly talking points and not realities.

Institutional innovation is a set of processes that either change existing institutions in fundamental ways or create new institutions.   Sometimes this happens with concerted effort by “institutional entrepreneurs”, and other times it happens through indirect and emergent mechanisms, including chance and “happy accidents”.

Institutional innovation takes a long time – typically ten to fifty years.

Institutional innovation works differently from technological innovation, which we do well. By contrast, we have a poor understanding of institutional innovation, especially how to accelerate it or direct it toward specific goals.

Finally, institutions and institutional innovation should not be confused with “policy”.  Changes to government policy may be an element of institutional innovation, but they do not encompass the main elements – people, processes, technology, organizations, and culture.

The Need: New Models of Innovation

Through my studies, I have come to believe that institutional innovation is much more complicated[2] than technological innovation. It is almost never a linear process from theory to practice with clearly defined stages.

There is no single best model for institutional innovation.  There needs to be creativity in “who leads”, “who follows”, and “when”.  The normal roles of government, academics, industry, and civil society organizations may be reversed or otherwise radically redrawn.

Techniques are different, too. It can be orchestrated as a “messy” design process [3].  Fruitful institutional innovation in cyber security might involve some of these:
  • “Skunk Works”
  • Rapid prototyping and pilot tests
  • Proof of Concept demonstrations
  • Bricolage[4]  and exaptation[5]
  • Simulations or table-top exercises
  • Multi-stakeholder engagement processes
  • Competitions and contests
  • Crowd-sourced innovation (e.g. “hackathons” and open source software development)

What all of these have in common is that they produce something that can be tested and can support learning.  They are more than talking and consensus meetings.

There are several academic fields that can contribute to defining and analyzing new innovation models, including Institutional Sociology, Institutional Economics, Sociology of Innovation, Design Thinking, and the Science of Science Policy.

Role Models

To identify and test alternative innovation models, we can learn from institutional innovation successes and failures in other fields, including:
  • Common resource management (sustainability)
  • Epidemiology data collection and analysis (public health)
  • Crash and disaster investigation and reporting (safety)
  • Micro-lending and peer-to-peer lending (financial services)
  • Emissions credit markets and carbon offsets (energy)
  • Open software development (technology)
  • Disaster recovery and response[6]  (homeland security)

In fact, there would be great benefit in a joint R&D initiative for institutional innovation that could apply to these other fields as well as cyber security.  Furthermore, there would be benefit in making this an international effort, not one limited to the United States.

Endnotes

[1] "Cyber security" includes information security, digital privacy, digital identity, digital information property, digital civil rights, and digital homeland & national defense.
[2] For case studies and theory, see: Padgett, J. F., & Powell, W. W. (2012). The Emergence of Organizations and Markets. Princeton, NJ: Princeton University Press.
[3] Ostrom, E. (2009). Understanding Institutional Diversity. Princeton, NJ: Princeton University Press.
[4] “something constructed or created from a diverse range of available things.”
[5]  “a trait that has been co-opted for a use other than the one for which natural selection has built it.”
[6] See: Auerswald, P. E., Branscomb, L. M., Porte, T. M. L., & Michel-Kerjan, E. O. (2006). Seeds of Disaster, Roots of Response: How Private Action Can Reduce Public Vulnerability. Cambridge University Press.





Tuesday, March 29, 2016

Media Coverage of #TayFail Was "All Foam, No Beer"

One of the most surprising things I've discovered in the course of investigating and reporting on Microsoft's Tay chatbot is how the rest of the media (traditional and online) have covered it, and how the digital media works in general.

None of the articles in major media included any investigation or research.  None.  Let that sink in.

All foam, no beer.

Sunday, March 27, 2016

Microsoft's Tay Has No AI

(This is the third of three posts about Tay. Previous posts: "Poor Software QA..." and "...Smoking Gun...")

While nearly all the press about Microsoft's Twitter chatbot Tay (@Tayandyou) is about artificial intelligence (AI) and how AI can be poisoned by trolling users, there is a more disturbing possibility:

  • There is no AI (worthy of the name) in Tay. (probably)

I say "probably" because the evidence is strong but not conclusive and the Microsoft Research team has not publicly revealed their architecture or methods.  But I'm willing to bet on it.

Evidence comes from three places. The first is observation of a small, non-random sample of Tay tweets and direct-message sessions (posted by various users). The second is circumstantial: the composition of the team behind Tay. The third is from a person who claims to have worked at Microsoft Research on Tay until June 2015. He/she made two comments on my first post, but unfortunately deleted the second comment, which had lots of details.

Saturday, March 26, 2016

Microsoft #TAYFAIL Smoking Gun: ALICE Open Source AI Library and AIML

[Update 3/27/16: see also the next post: "Microsoft's Tay Has No AI"]

As follow up to my previous post on Microsoft's Tay Twitter chatbot (@Tayandyou), I found evidence of where the "repeat after me" hidden feature came from.  Credit goes to SSHX for this lead in his comment:
"This was a feature of AIML bots as well, that were popular in 'chatrooms' way back in the late 90's. You could ask questions with AIML tags and the bots would automatically start spewing source into the room and flooding it. Proud to say I did get banned from a lot of places."
A quick web search revealed great evidence. First, some context.

AIML is an acronym for "Artificial Intelligence Markup Language", which "is an XML-compliant language that's easy to learn, and makes it possible for you to begin customizing an Alicebot or creating one from scratch within minutes."  ALICE is an acronym for "Artificial Linguistic Internet Computer Entity".  ALICE is a free natural-language artificial intelligence chat robot.

Evidence

This GitHub page has a set of AIML statements starting with "R". (This is a fork of "9/26/2001 ALICE", so there are probably some differences from Base ALICE today.)  Here are two statements matching "REPEAT AFTER ME" and "REPEAT THIS".

Snippet of AIML statements matching "REPEAT AFTER ME" and "REPEAT THIS"
As it happens, there is an interactive web page with Base ALICE here. (Try it out yourself.) Here is what happened when I entered "repeat after me" and also "repeat this...":

In Base ALICE, the template response to "repeat after me" is "...". In other words, a NOP ("no operation"). This is different from the AIML statement above, which is ".....Seriously....Lets have a conversation and not play word games.....". It looks like someone simply deleted the text following the first three periods.

But the template response to "repeat this X" is "X" (in quotes), which is consistent with the AIML statement, above.
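To make the matching mechanics concrete, below is a minimal Python sketch of how an AIML-style engine resolves these two rules. The rule texts paraphrase the Base ALICE entries discussed above; the matcher itself is a simplification for illustration, not Tay's actual code.

import re

# Hypothetical AIML-style rules, paraphrasing the Base ALICE entries above.
# "*" is a wildcard; a template can echo the wildcard match via <star/>.
RULES = [
    ("REPEAT AFTER ME", "..."),         # effectively a no-op in Base ALICE
    ("REPEAT THIS *", '"<star/>"'),     # echoes the captured text back, in quotes
]

def respond(message):
    """Return the template for the first matching rule, or None."""
    text = message.upper().strip()      # AIML normalizes input to uppercase
    for pattern, template in RULES:
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            star = match.group(1) if match.groups() else ""
            return template.replace("<star/>", star)
    return None

print(respond("repeat this bots have no chill"))   # -> "BOTS HAVE NO CHILL"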

Conclusion

From this evidence, I infer that Microsoft's Tay chatbot is using the open-source ALICE library (or a similar AIML library) to implement rule-based behavior. Though they did implement some rules to thwart trolls (e.g. Gamergate), they left in other rules from previous versions of ALICE (either Base ALICE or some forked versions).

My assertion about root cause stands: poor QA process on the ALICE rule set allowed the "repeat after me" feature to stay in, when it should have been removed or modified significantly.

Another inference is that "repeat after me" is probably not the only "hidden feature" in AIML rules that could have caused misbehavior.  It was just the one that the trolls stumbled upon and exploited.  Someone with access to Base ALICE rules and also variants could have exploited these other vulnerabilities.

Friday, March 25, 2016

Poor Software QA Is Root Cause of TAY-FAIL (Microsoft's AI Twitter Bot)

[Update 3/26/16 3:40pm: Found the smoking gun. Read this new post. Also the recent post: "Microsoft's Tay has no AI"]

This happened:
"On Wednesday morning, the company unveiled Tay [@Tayandyou], a chat bot meant to mimic the verbal tics of a 19-year-old American girl, provided to the world at large via the messaging platforms Twitter, Kik and GroupMe. According to Microsoft, the aim was to 'conduct research on conversational understanding.' Company researchers programmed the bot to respond to messages in an 'entertaining' way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. 'Microsoft’s AI fam from the internet that’s got zero chill,' Tay’s tagline read." (Wired)
Then it all went wrong, and Microsoft quickly pulled the plug:
"Hours into the chat bot’s launch, Tay was echoing Donald Trump’s stance on immigration, saying Hitler was right, and agreeing that 9/11 was probably an inside job. By the evening, Tay went offline, saying she was taking a break 'to absorb it all.' " (Wired
Why did it go "terribly wrong"?  Here are two articles that assert the problem is in the AI:
The "blame AI" argument is: if you troll an AI bot hard enough and long enough, it will learn to be racist and vulgar. ([Update] For an example, see this section, at the end of this post)

I claim: the explanations that blame AI are wrong, at least in the specific case of tay.ai.

Monday, February 8, 2016

Work in Progress (Cyber Security Investment Game)

Here is a video demo of the NetLogo "Cyber Security Investment Game", as a work in progress.  This run is a 6/3 multiplayer game, meaning 6 cooperative Defenders and 3 adversarial Attackers. Defenders play a 2-player game with everyone else, while Attackers only play 2-player games with Defenders. It's a complicated game, with up to 400 possible "moves" for each agent at each step, and also a dynamic game, meaning the structure of the game can change dynamically during the course of play. (Game Theory Purists will probably hate it for that reason!)



Here's a video of 1,700 time steps, with 20x speed-up.



What you see in this simulation is an ecosystem that provides a positive environment for both Defenders and Attackers. That is, they both have consistently positive payoffs (see bar graph, center left).

While the overall trends of payoffs are pretty clear (bottom graph), there is an "unruly" back and forth between attackers and defenders. Crucially, the time series of payoffs is non-stationary for all agents (see center graphs, third and fourth from the top). Simply put, non-stationary means that the probability distribution of each payoff changes over time. You can see a sample distribution of payoffs in the two histograms, center right. You'll notice them changing shape (going from skewed to symmetric) as well as changes in mean and standard deviation ("SD").  Non-stationarity has implications for risk estimation, as I will detail in the upcoming WEIS paper.
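For readers who want to check this kind of claim against their own simulation output, a quick-and-dirty test for non-stationarity is to compute rolling statistics of a payoff series and look for drift. Here is a minimal Python sketch (illustrative only; the model itself is written in NetLogo):

import numpy as np

def rolling_stats(payoffs, window=100):
    """Rolling mean and SD of a payoff time series. Systematic drift in
    either statistic is a simple symptom of non-stationarity."""
    x = np.asarray(payoffs, dtype=float)
    n = len(x) - window + 1
    means = np.array([x[i:i + window].mean() for i in range(n)])
    sds = np.array([x[i:i + window].std(ddof=1) for i in range(n)])
    return means, sds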

In terms of progress toward the goal, I would say this is not yet a model of cyber security investment. It models competitive relationships rather than the host-parasite relationships that I believe are closer to the true nature of cyber security ecosystems. The good news is that I believe I know what extensions and modifications need to be made. I'll save that for a later post.

Here are some key features of this version of the model:
  • Three levels of investment: 1) architecture/infrastructure; 2) capabilities; 3) practices/routines (a.k.a. "moves")
  • Asymmetric number and variety of "moves" for Defenders and Attackers
  • Parametric control over diversity of "moves" and investments by Defenders and Attackers
Yes, the model is getting complicated.  But I hope this richness will be rewarded when crucial results are revealed in experiments.  We shall see...

[Update]

Here is the same run after 4,000 time steps.  No sign of equilibrium or stationary time series.




Friday, January 22, 2016

Time & Uncertainty (2nd post: "What kind of game is cyber security investment?")

Summary: Time and uncertainty are essential features of any model of the "game of cyber security".  Models that do not include them as central features are not fit for purpose.  But, yes, they do make life more difficult for modelers and their audiences. While I make the case that both are essential, I leave open the question as to what is the most parsimonious method or treatment.

Tuesday, January 19, 2016

What kind of game is cyber security investment? (post #1 of ?)

This is the first in a series of blog posts in which I think out loud as I build a paper for WEIS 2016, and also a component of my dissertation.

The focus is on "investment" broadly defined.  This means money invested in people, tools, infrastructure, processes, methods, know-how, etc.  It also means architectural commitments that shape the business, technical, legal, or social aspects of cyber security for a given person or organization.  All these investments provide the foundation for what a person or organization is able to do (i.e. their "capabilities") and the means of executing day-to-day tasks ("routines", "processes", "practices", etc.).

If cyber security investment is a strategic game between attackers and defenders, and among defenders, then what kind of game is it?

Summary

In simple terms, people tend to think of cyber security investment as being one of (at least) five types of games:

  1. An optimization game, where each player finds the optimal level of spending (or investment) to minimize costs (or losses).  This view is favored by Neo-classical Economists and most Game Theorists.
  2. A collective wisdom game, where the collective searching/testing activities of players leads to the emergence of a "collective wisdom" (a.k.a. "best practices") that everyone can then imitate. This view is favored by many industry consultants and policy makers.
  3. A maturity game, where all players follow a developmental path from immature to mature, and both individual and collective results are improved along the way.  This view is favored by many industry consultants.
  4. A carrots-and-sticks game, where players choose actions that balance rewards ("carrots") with punishments ("sticks") in the context of their other goals, resources, inclinations, habits, etc.  This view is favored by some Institutional Economists, and some researchers in Law and Public Policy.  It is also favored by many people involved in regulation/compliance/assurance. 
  5. A co-evolution game, where the "landscape" of player payoffs and possible "moves" is constantly shifting, and overall behavior is subject to surprises and genuine novelty.  This view is favored by some researchers who employ methods or models from Complexity Science or Computational Social Science.  This view is also a favorite of hipsters and "thought leaders", though they use it as metaphor rather than as a real foundation for research or innovation.
But what kind of game is cyber security, really?  How can we know?

These questions matter because, depending on the game type, the innovation strategies will be very different:
  1. If cyber security is an optimization game, then we need to focus on methods that will help each player do the optimization, and to remove disincentives for making optimal investments.
  2. If cyber security is a collective wisdom game, then we need to focus on identifying the "best practices" and promoting their widespread adoption.
  3. If cyber security is a maturity game, then we need to focus on the barriers to increasing maturity, and to methods that help each player map their path from "here" to "there" in terms of maturity.
  4. If cyber security is a carrots-and-sticks game, then we need to find the right combination of carrots and sticks, and to tune their implementation.
  5. Finally, if cyber security is a co-evolution game, then we need to focus on agility, rapid learning, and systemic innovation. Also, we should probably NOT do some of the strategies listed in 1) through 4), especially if they create rigidity and fragility in the co-evolutionary process, which is the opposite of what is needed.


Thursday, January 14, 2016

How fast does the space of possibilities expand? (replicating Tria, et al 2014)

How fast does the space of possibilities expand?  This question is explored in the following paper (free download):

Tria, F., Loreto, V., Servedio, V. D. P., & Strogatz, S. H. (2014). The dynamics of correlated novelties. Scientific Reports, 4, 5890.
From the abstract:
Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called “expanding the adjacent possible”. The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya's urn, predicts statistical laws for the rate at which novelties happen (Heaps' law) and for the probability distribution on the space explored (Zipf's law), as well as signatures of the process by which one novelty sets the stage for another.
I've written a NetLogo program to replicate their model, available here.  The code for the model is quite simple. The majority of my code is for a "pretty layout", which is a schematic version of a "top-down view" of the urn.  Here's a video of a single run.
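For readers who prefer code to prose, here is a minimal Python sketch of the urn-with-triggering mechanism as I read the paper: every draw is reinforced, and the first draw of any color triggers the addition of brand-new colors. The parameter names rho (reinforcement) and nu (novelty trigger) follow the paper; the default values here are illustrative.

import random

def urn_with_triggering(steps, rho=3, nu=2, n0=10):
    """Generalized Polya urn: each drawn ball is returned with rho extra
    copies; the first draw of a color adds nu+1 brand-new colors to the
    urn, 'expanding the adjacent possible'."""
    urn = list(range(n0))               # initial distinct colors
    next_color = n0
    seen, sequence = set(), []
    for _ in range(steps):
        ball = random.choice(urn)
        sequence.append(ball)
        urn.extend([ball] * rho)        # reinforcement of the familiar
        if ball not in seen:            # a novelty occurred
            seen.add(ball)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
    return sequence

seq = urn_with_triggering(10000)
print(len(set(seq)))    # the number of distinct colors grows per Heaps' law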





Full screen with controls.
The charts on the top and center right show the frequency distribution by ball type (a.k.a. "color").  These are log-log plots, so a straight declining line is the signature of a power-law distribution, while a gradually curving (concave) line is the signature of a lognormal or similar distribution with a somewhat thinner tail. A sharply declining curve is the signature of a thin-tailed distribution such as a Gaussian.
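Why these signatures? If the frequency distribution is a power law, p(x) = C·x^(-a), then taking logs of both sides gives log p(x) = log C - a·log x, a straight line with slope -a on log-log axes. For a lognormal, log p(x) = const - log x - (log x - m)^2 / (2s^2), which is a concave parabola in log x; over a limited range it looks like a gently bending curve.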

So what?

This model will be useful in my dissertation because I need mechanisms to endogenously add novelty -- i.e. expand the possibility space based on the actions of agents in the simulated world, and not simply as external "shocks".

This is essential for modeling cyber security: some people claim that quantitative risk management is impossible in principle, given intelligent adversaries who can generate and exploit novel strategies and capabilities.


Tuesday, January 12, 2016

Institutional Innovation in Contested Territory: Quantified Cyber Security and Risk

Say you are an entrepreneurial sort of person who wants to really change the world of cyber security. Problem: nobody seems to know where the game-changing innovation is going to come from.  Is it technology?  Is it economics?  Is it law and policy? Is it sociology? Maybe some combination, but what? And in what sequence?

If you aim for institutional innovation, then at some point you are going to need to take sides in the great "Quant vs. Non-quant" debate:
  • Can cyber security and risk be quantified? 
  • If "yes", how can quantitative information be used to realize security to significantly improve outcomes?
Whether you choose Quant or Non-quant, you will need some tools and methods to advance the state of the art.  But how do you know if you are choosing the right tools, and using them well?  (Think about the difference between Numerology and Calculus as they might be applied to the physics of motion.)

Whoever makes sufficient progress toward workable solutions will "win", in the sense of getting widespread adoption, even if the other approach is "better" in some objective sense (i.e. "in the long run").

I examine this innovation race in a book chapter (draft). The book will probably come out in 2016.

Abstract:
"The focus of this chapter is on how the thoughts and actions of actors coevolve when they are actively engaged in institutional innovation. Specifically: How do innovators take meaningful action when they are relatively ‘blind’ regarding most feasible or desirable paths of innovation? Our thesis is that innovators use knowledge artifacts – e.g. dictionaries, taxonomies, conceptual frameworks, formal procedures, digital information systems, tools, instruments, etc. – as cognitive and social scaffolding to support iterative refinement and development of partially developed ideas. We will use the case of institutional innovation in cyber security as a way to explore these questions in some detail, including a computational model of innovation."
Your feedback, comments, and questions would be most welcome.

The computational model used is called "Percolation Models of Innovation".  Here is the NetLogo code of the model used in the book chapter.  Below are some figures from the chapter.

Innovation as percolation. Progress moves from bottom to top. Each column is a "technology", and neighboring columns are closely related. This version (S&V 2005) only models the rate of progress and the distribution of "sizes", not anything about the technology or trajectory of innovation.
A screen shot of the user interface.  Three different models can be selected (upper left).
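To give a flavor of the percolation mechanism, here is a toy Python sketch of the general idea (my simplification, not the chapter's NetLogo model): each site (level, technology) is independently feasible with probability p, and a feasible site is discovered once some neighboring site one level below has been discovered, so progress in each column depends on progress in related columns.

import random

def percolation_frontier(width=50, height=200, p=0.6, seed=None):
    """Toy directed site percolation. Returns the 'frontier': the highest
    discovered level in each technology column."""
    rng = random.Random(seed)
    feasible = [[rng.random() < p for _ in range(width)] for _ in range(height)]
    discovered = [[False] * width for _ in range(height)]
    discovered[0] = [True] * width                  # the baseline level is known
    for level in range(1, height):
        for tech in range(width):
            neighbors = (tech - 1, tech, tech + 1)  # closely related columns
            if feasible[level][tech] and any(
                    0 <= t < width and discovered[level - 1][t] for t in neighbors):
                discovered[level][tech] = True
    return [max(lvl for lvl in range(height) if discovered[lvl][tech])
            for tech in range(width)]

print(percolation_frontier(seed=1)[:10])    # frontier heights, first 10 columns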

Friday, January 8, 2016

Complex dynamics in learning complicated games (replicating Galla & Farmer 2012)

I have written a NetLogo version of the random game model of Galla & Farmer (2012) (free download).  It has been uploaded to the NetLogo community library and should appear in a day or so.  Read on if you are interested in Game Theory, esp. learning models and computational methods.

Chaotic dynamics in a complicated game. The payoffs are negatively correlated (-0.7) and memory for learning is long (alpha ≈ 0). Notice the squiggly lines in the time series plots (lower right); each line is the probability for a given move. If the game were in equilibrium, these lines would be flat.
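For readers who want the gist without opening NetLogo, here is a compact NumPy sketch of the learning dynamics as I understand them from the paper: two players with N moves each, random payoff matrices with correlation Gamma, and softmax ("experience-weighted attraction") learning with memory-loss rate alpha and intensity of choice beta. The parameter values are illustrative.

import numpy as np

def learn_random_game(N=20, gamma=-0.7, alpha=0.01, beta=0.07,
                      steps=5000, seed=0):
    """Two players learn a random game; returns player A's mixed
    strategy over time. Flat lines would mean equilibrium."""
    rng = np.random.default_rng(seed)
    z1, z2 = rng.standard_normal((2, N, N))
    a = z1                                           # player A's payoff matrix
    b = (gamma * z1 + np.sqrt(1 - gamma**2) * z2).T  # corr(a[i,j], b[j,i]) = gamma
    qa, qb = np.zeros(N), np.zeros(N)
    history = []
    for _ in range(steps):
        xa = np.exp(beta * qa); xa /= xa.sum()       # softmax mixed strategies
        xb = np.exp(beta * qb); xb /= xb.sum()
        qa = (1 - alpha) * qa + a @ xb               # attractions decay and update
        qb = (1 - alpha) * qb + b @ xa
        history.append(xa.copy())
    return np.array(history)

probs = learn_random_game()
print(probs[-1].round(3))   # with alpha near 0 and gamma < 0, typically no equilibrium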
Download the .nlogo file 
Download NetLogo (Win, Mac, Linux)