Tuesday, January 19, 2016

What kind of game is cyber security investment? (post #1 of ?)

This is the first in a series of blog posts where I think out loud as I build a paper for WEIS 2016, and also a component of my dissertation.

The focus is on "investment" broadly defined.  This means money invested in people, tools, infrastructure, processes, methods, know-how, etc.  It also means architectural commitments that shape the business, technical, legal, or social aspects of cyber security for a given person or organization.  All these investments provide the foundation for what a person or organization is able to do (i.e. their "capabilities") and the means of executing day-to-day tasks ("routines", "processes", "practices", etc.).

If cyber security investment is a strategic game between attackers and defenders, and among defenders, then what kind of game is it?

Summary

In simple terms, people tend to think of cyber security investment as being one of (at least) five types of games:

  1. An optimization game, where each player finds the optimal level of spending (or investment) to minimize costs (or losses).  This view is favored by Neo-classical Economists and most Game Theorists.
  2. A collective wisdom game, where the collective searching/testing activities of players lead to the emergence of a "collective wisdom" (a.k.a. "best practices") that everyone can then imitate. This view is favored by many industry consultants and policy makers.
  3. A maturity game, where all players follow a developmental path from immature to mature, and both individual and collective results are improved along the way.  This view is favored by many industry consultants.
  4. A carrots-and-sticks game, where players choose actions that balance rewards ("carrots") with punishments ("sticks") in the context of their other goals, resources, inclinations, habits, etc.  This view is favored by some Institutional Economists, and some researchers in Law and Public Policy.  It is also favored by many people involved in regulation/compliance/assurance.
  5. A co-evolution game, where the "landscape" of player payoffs and possible "moves" is constantly shifting and overall behavior is subject to surprises and genuine novelty.  This view is favored by some researchers who employ methods or models from Complexity Science or Computational Social Science.  This view is also a favorite of hipsters and "thought leaders", though they use it as a metaphor rather than as a real foundation for research or innovation.

But what kind of game is cyber security, really?  How can we know?

These questions matter because, depending on the game type, the innovation strategies will be very different:
  1. If cyber security is an optimization game, then we need to focus on methods that will help each player do the optimization, and to remove disincentives for making optimal investments.
  2. If cyber security is a collective wisdom game, then we need to focus on identifying the "best practices" and to promote their wide-spread adoption.
  3. If cyber security is a maturity game, then we need to focus on the barriers to increasing maturity, and to methods that help each player map their path from "here" to "there" in terms of maturity.
  4. If cyber security is a carrots-and-sticks game, then we need to find the right combination of carrots and sticks, and to tune their implementation.
  5. Finally, if cyber security is a co-evolution game, then we need to focus on agility, rapid learning, and systemic innovation. Also, we should probably NOT pursue some of the strategies listed in 1) through 4), especially if they create rigidity and fragility in the co-evolutionary process, which is the opposite of what is needed.



Q: What Kind of Game Is It? (And why ask such a question?)


By "game", I mean in the sense of Game Theory.  Games involve sets of strategies (or "moves") available to each player, and payoffs to each player that depend on what moves are chosen by all the players.  There are even single player games where "Nature" is the opponent.  The goal in all games is to pick moves that yield the highest payoff (or at least the best payoff given some criteria), given what you know about the game and the other players.

Consider three simple examples: 1) Tic-Tac-Toe, 2) Rock-Paper-Scissors, and 3) Go.  They are largely distinguished by differences in payoffs for each move combination, and the number of possible moves at each stage of play.  In all three, the rules, the available moves, and the payoffs are known to both players.  But the structure of moves and move sequences makes all the difference.  Tic-Tac-Toe and Rock-Paper-Scissors are both simple games that permit very thorough analysis and complete results.  In fact, once you know these results, it is hardly interesting to play these games since the outcomes seem pre-determined.  In contrast, Go is a very complex game with a very, very large state space.  So far, there is no "complete analysis" of the optimal move sequence for Go.
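
The Rock-Paper-Scissors claim is easy to verify by brute force.  Here is a short sketch (mine, for illustration) that checks every pure move profile and confirms that none is stable: some player can always gain by switching, which is exactly why play cycles.

    MOVES = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def payoff_a(a, b):
        """Player A's payoff: +1 win, -1 loss, 0 tie.  Zero-sum, so B gets the negation."""
        if a == b:
            return 0
        return 1 if BEATS[a] == b else -1

    stable = [
        (a, b) for a in MOVES for b in MOVES
        # stable = neither A nor B can gain by unilaterally switching
        if all(payoff_a(a, b) >= payoff_a(a2, b) for a2 in MOVES)
        and all(-payoff_a(a, b) >= -payoff_a(a, b2) for b2 in MOVES)
    ]
    print(stable)   # [] -- no pure move profile survives; the game cycles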

The moral of this story: if you are doing research on these games, it is vital to know precisely what type of game you are dealing with, both so that your methods of analysis are appropriate and so that you know what sort of results you can hope to find.

Back to cyber security...

Every cyber security paper, report, or presentation is based on some mental model of what cyber security is, including the type of game that best fits.  

Three interesting points: First, there is disagreement over the nature and structure of the cyber security game, and these disagreements are foundational, not superficial.  Second, nearly everyone takes their definition/categorization as a given, either through assumptions or through formal axioms.  Third, it seems that no one tests their assumptions.

If disagreements are fundamental and consequential, but no one tests alternative mental models, then we have no way to reconcile the alternatives.  These schools of thought are like ships passing in the night, with little or no influence on each other and no resolution.  Or perhaps they might cycle in and out of fashion through social processes.

How Different Game Types Imply Different Methods

In the Optimization Game, it is the payoff structure of the game and the optimizing procedures of players that matter most.  If I'm writing a paper on optimal cyber security spending (e.g. Gordon & Loeb 2002), then I usually assume that an optimum exists, or at least that my assumptions and axioms permit the existence of an optimum.  When each player can (and should) optimize, then the system will reach an equilibrium where each player is making their optimal choice.  If no player can improve their payoff by unilaterally deviating from this optimal play, then it is a Nash equilibrium.  This is a very powerful system concept because the existence of an equilibrium -- especially a unique equilibrium, especially a unique Nash equilibrium -- means that we can usually overlook how the system got to that equilibrium (i.e. the "transient behaviors").
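
To make the optimization view concrete, here is a sketch of the single-player-vs-Nature version of the game, using the class-I breach probability function from Gordon & Loeb (2002), S(z, v) = v / (alpha*z + 1)^beta.  The functional form is theirs; the parameter values below are invented for illustration.

    def expected_cost(z, v=0.65, loss=1000000, alpha=0.0001, beta=1.5):
        # Gordon-Loeb class-I form: remaining vulnerability falls as spend z rises.
        breach_prob = v / (alpha * z + 1) ** beta
        return breach_prob * loss + z      # expected breach loss plus the spend itself

    # Brute-force the optimal spend over a grid -- no calculus required.
    z_star = min(range(0, 200000, 500), key=expected_cost)
    print(z_star, round(expected_cost(z_star)))

With these made-up numbers the search lands near $52K, comfortably under the well-known Gordon-Loeb bound of v*loss/e (about $239K here).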

But some games have multiple equilibria, or none at all.  This can be the case when the number of possible moves is too large for the players to fully analyze and plan.  (See this post.)  When this is the case, we need to abandon the idea of finding optimal play for ourselves and other players.  In fact, assuming we can find an optimum may lead to the opposite -- to a region where our payoffs are far from ideal or desirable.
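
A toy example of the multiplicity problem: the two-defender coordination game below (payoff numbers invented) has two pure Nash equilibria, one where both invest and one where both skimp.  "Find the optimum" does not tell the players which one they will end up in.

    MOVES = ["invest", "skimp"]
    PAYOFF = {   # (row_move, col_move) -> (row_payoff, col_payoff)
        ("invest", "invest"): (4, 4),
        ("invest", "skimp"):  (0, 3),
        ("skimp",  "invest"): (3, 0),
        ("skimp",  "skimp"):  (3, 3),
    }

    equilibria = [
        (r, c) for r in MOVES for c in MOVES
        if all(PAYOFF[(r, c)][0] >= PAYOFF[(r2, c)][0] for r2 in MOVES)
        and all(PAYOFF[(r, c)][1] >= PAYOFF[(r, c2)][1] for c2 in MOVES)
    ]
    print(equilibria)   # [('invest', 'invest'), ('skimp', 'skimp')]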

In the Collective Wisdom Game, I believe that "good play" in cyber security will inevitably emerge from the collective thoughts, experiences, and efforts of players.  Maybe I also believe that once the rules of "good play" are established, they won't change (much) over time.  These beliefs would then lead me to follow a different strategy from the previous case.  I am no longer concerned whether any given player, or even most players, are choosing optimal strategies.  In fact, I'd probably like to see a diversity of strategies, all engaged in some sort of "generate and test" process so that the best strategies emerge to popular acclaim.  The only mechanism that really matters is the collectivization of wisdom (i.e. social learning processes), including how the wisdom/knowledge is codified and abstracted so that it can be widely applied and adopted.
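
Here is a toy version of that generate-and-test mechanism (everything -- practice names, quality numbers, noise level, imitation rate -- is an assumption of mine for illustration): agents observe each other's noisy outcomes and imitate whichever practice looked best this round.

    import random

    random.seed(1)   # reproducible illustration

    TRUE_QUALITY = {"patch-fast": 0.8, "air-gap": 0.6, "checklist": 0.4}

    def observed_score(practice, noise=0.3):
        # Agents see only noisy, realized outcomes, never true quality.
        return TRUE_QUALITY[practice] + random.gauss(0, noise)

    agents = [random.choice(list(TRUE_QUALITY)) for _ in range(50)]
    for _ in range(30):
        talked_about = max(agents, key=observed_score)   # this round's success story
        agents = [talked_about if random.random() < 0.3 else a for a in agents]

    print(max(set(agents), key=agents.count))   # the emergent "best practice"

Note that nothing guarantees convergence on the truly best practice; crank up the noise and the same mechanism will happily canonize a mediocre one, which is exactly the worry in the next paragraph.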

But what if the "collective wisdom" is not wisdom at all, but instead is simply popular delusion?  There might be social "learning" mechanisms at work that serve other masters besides wisdom.  Some fine examples are given in the book Memoirs of Extraordinary Popular Delusions and the Madness of Crowds (free download) by Charles Mackay, published in 1852.  Just because "everybody knows..." doesn't make it true or wise.

Another contrasting model is the Maturity Game.  Here, the focus of strategy and innovation is on steadily increasing maturity, to whatever level of maturity is called for in any given situation.  Unlike the Collective Wisdom Game, it is very important for players to grow and develop in a sequence.  It is simply not possible, in this view, to go from "Immature" to "Mature" (= implementing all "best practices") in one step.  The cumulative process of growth and maturation is vital, and even more important than the specific details of each level of maturity.
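
A sketch of that structural claim (level names are CMM-flavored; the benefit numbers are invented): the only legal move is to advance one level at a time, so the path matters as much as the destination.

    LEVELS = ["initial", "repeatable", "defined", "managed", "optimizing"]
    BENEFIT = {"initial": 0, "repeatable": 2, "defined": 5, "managed": 9, "optimizing": 14}

    def advance(level):
        # Move up exactly one level; jumping straight to "optimizing" is not a legal move.
        i = LEVELS.index(level)
        return LEVELS[min(i + 1, len(LEVELS) - 1)]

    state = "initial"
    for year in range(1, 5):
        state = advance(state)
        print(year, state, BENEFIT[state])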

But what if there are "maturity traps"?  What if the appearance of maturity does not correspond to better or improved performance?  What if the path to maturity is not linear but branches or is cyclical?

The Carrots-and-Sticks Game shifts the attention from inside the players to outside.  It is a form of environmental determinism.  With a tip-of-the-hat to Thomas Hobbes (Leviathan, Part 1), if I view cyber security as a game of carrots and sticks, then I reduce all behavior and outcomes to the combined influence of rewards and punishments, especially those that are institutionalized and therefore persistent.  Yes, there is player optimization within the incentives imposed, but those internal calculations are often treated as secondary or as something to be overcome.  In the Carrots-and-Sticks Game, the incentives of interest are usually extrinsic rather than intrinsic.
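
A minimal sketch of that environmental determinism (all numbers invented): the player's intrinsic payoffs are held fixed, and behavior flips purely because the externally imposed fine crosses a threshold.

    def net_payoff(action, fine=0, subsidy=0):
        value = {"invest": -10, "skimp": 0}[action]   # intrinsic: investing costs 10
        if action == "skimp":
            value -= fine        # the stick: penalty for non-compliance
        else:
            value += subsidy     # the carrot: reward for compliance
        return value

    for fine in (0, 5, 15):
        choice = max(["invest", "skimp"], key=lambda a: net_payoff(a, fine=fine))
        print(fine, "->", choice)
    # fine=0 -> skimp, fine=5 -> skimp, fine=15 -> invest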

But what if there are unintended consequences of the Carrots and Sticks?  Examples include "malicious compliance" (example: work-to-rule) and "gaming the system" (examples: cheating scandals in educational testing here and here; and corporate performance reporting here).  Do we add more carrots and sticks to manage those?

Finally, the Co-evolutionary Game shifts attention to the dynamics of change and innovation (both internal and external to players).  No state of affairs is permanent and nothing is stable by necessity.  Whatever incentives may exist at any time may be upended, literally "changing the rules of the game" as we play it.  While the other game types seem to offer some desirable end state -- a "Heaven" -- there is no such prospect in the Co-evolutionary Game.  We can never rest or say that we have "arrived" at the ideal destination.  The attention of researchers and analysts will be on processes of learning, discovery, agility, and "landscape changing".  (Disclosure: the title of my dissertation is "Shaping Possibility Space", just in case you had doubts about which model I prefer.)
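
Here is a deliberately tiny Red Queen loop to illustrate the shifting landscape (tactic names, counters, and payoffs are all invented): each side's best move depends on the other's current move, so the defender's "optimum" never sits still.

    import random

    random.seed(2)   # reproducible illustration

    TACTICS = ["phish", "exploit", "insider"]
    COUNTERS = {"phish": "training", "exploit": "patching", "insider": "monitoring"}

    def defender_payoff(defense, tactic):
        return 1 if COUNTERS[tactic] == defense else -1

    attacker, defender = "phish", "patching"
    for step in range(6):
        defender = COUNTERS[attacker]      # counter the *last observed* tactic...
        attacker = random.choice([t for t in TACTICS if COUNTERS[t] != defender])
        # ...and the attacker promptly moves the landscape out from under it.
        print(step, attacker, defender, defender_payoff(defender, attacker))

The myopic defender here is perpetually one step behind (the payoff column reads -1 on every line), which is a flavor of play the static game types cannot express.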

But what if this view of change and innovation is excessive? What if there are rigidities in the system that leave the game rather static for most players for long periods of time?  If so, then the co-evolutionary view would be a needless complication and would place excessive burdens on players who have better things to do than to innovate in cyber security.

What's Next

In the next post in this series, I'll lay out a framework for a generic cyber security game that can be implemented computationally.  If it works, this should allow us to run computational experiments to characterize the nature of the cyber security game and to see which of the game types above best fits, if any.


