(Of course, the title is hyperbole and attention-seeking. Now that you are here, I hope you'll keep reading.)
I don't mean that cyber insurance has no value or that it is a charade.
My main point: cyber insurance has the wrong clothes for the purposes and social value to which it aspires.
This blog post sketches the argument and evidence. I will be following up separately with more detailed and rigorous analysis (via computational modeling) that, I hope, will be publishable.
tl;dr: (switching metaphors)
As a driving force for better cyber risk management, today's cyber insurance is about as effective as eating soup with a fork. (This is a long post. For readers who want to "cut to the chase", you can skip to the "Cyber Insurance is a Functional Misfit" section.)
There has been a ton of industry and academic writing on cyber insurance. On the academic side, nearly all of it is economic analysis (mostly theoretical, some empirical). My lens is different: Organization Science. The focus is functional -- how organizations actually work and make decisions.
This essay centers on cyber insurance in the USA, but except for some regulatory details, I believe that my argument holds for other regions.
Cyber Insurance as "Emperor"
The conventional wisdom in cyber security economics and policy circles is that the world would be a better and safer place if every firm (or most firms) bought cyber insurance. This idea is so widely accepted that what people mostly debate is "how do we promote cyber insurance as a market and industry?" and "what can we do to complement cyber insurance (e.g. standards, regulation, safe harbors, etc.)?" Certainly some people think that other institutions would work better (e.g. legal liability and tort law, regulatory mandates and penalties, technological solutions). But almost no one has come forward to proclaim that cyber insurance, in its present form, is not fit for purpose. Thus, metaphorically, cyber insurance has become a conceptual "Emperor" to whom nearly everyone pays homage.
Like nearly all liability insurance, the putative economic function of cyber insurance is risk transfer from the insured parties to the insurers. But almost no one argues that the main problem with cyber security today is that insured parties (firms, consumers) are bearing excessive risk and therefore underinvest. Risk transfer, in other words, is not the desired social benefit that has earned cyber insurance the honorific of "Emperor".
While most business insurance companies and lines operate on the value proposition of risk transfer, there is a subset that has centered on risk reduction and best practices in addition to risk transfer. The exemplar is Hartford Steam Boiler, founded in 1866:
" 'the first company in America devoted primarily to industrial safety.' 'Hartford Standards' quickly became the specifications for boiler design, manufacture and maintenance."In the same vein, the desired social benefits of cyber insurance are four-fold (paraphrasing):
- "...provide financial incentives for firms to adopt best practices"
- "...promote the adoption of standards that are compatible with best practices"
- "Cyber insurance will be more agile than regulations, thereby more likely to promote good things and avoid bad unintended consequences."
- "Insurance companies will build ever-better models of risk so that collectively we will be better able to quantify the costs and benefits of specific security practices."
For a fuller treatment, here is an hour-long video panel discussion covering cyber insurance and how it is supposed to improve cyber risk management:
(The purpose of the panel event was to promote their new book Cyber Insecurity: Navigating the Perils of the Next Information Age, which looks quite interesting.)
More panels, more conventional wisdom:
From DHS: "Cybersecurity Insurance Industry Readout Reports":
Cyber Insurance's "Fine Clothes"
The metaphorical "clothing" of cyber insurance consists of the capabilities associated with insurance companies and their functional ecosystem (i.e. value networks, legal and regulatory environment, professional societies, trade groups, academic programs and certifications, etc.). The claim that cyber insurance will deliver the desired social benefits is equivalent to saying that cyber insurance has the right capabilities, given the characteristics and job-to-be-done in the marketplace.
[EDIT: Moved "Analytic Framework" to the end]
Cyber Insurance is a Functional Misfit
This section aims to itemize how cyber insurance is a misfit, given the desired social benefits. First, what do firms need from cyber security and cyber insurance?
The Cyber Security "Job to Be Done"
The "job to be done" (see here and here) for firms and consumers is to make better decisions and investments regarding cyber security so that the expected benefits of "good things" outweighs the uncertain costs and worry over "bad things".
Compared to other major decisions involving risk (e.g. buying a house, building a factory, licensing a patent), cyber security decisions are both distributed (in time and space) and complex (feedback loops, nonlinearities, etc.). In medium to large organizations, there are hundreds or thousands of key decisions and decision makers that matter -- security staffing and training, IT architecture, vendor and outsourcing strategy and practices, business process design, hiring/staffing/performance incentives, and so on. And these go beyond what most people focus on: security products/appliances (e.g. firewalls, proxies, anti-malware, etc.) and security policies (e.g. password length/complexity, administrator rights, etc.).
The "job to be done" is to influence (a.k.a. "nudge") the decision and implementation process at all (or most) of these points. Nearly all firms have no idea how all these decisions influence each other and how they influence overall cyber security risk. (The Ten Dimension Framework that I have proposed is one approach to solving this problem: managing for cyber security performance.)
Cyber Insurance Misfits in Ten Ways
What cyber insurance does is governed by the existing capabilities and functions of insurance firms and their supporting ecosystem. In terms of capabilities and routines, cyber insurance is not significantly different than other forms of business insurance. The contracts look similar, the sales process is similar, the pricing/underwriting is similar, the marketing is similar, and the regulatory oversight is similar.
[EDIT]
For readers interested in the theoretical/analytical framework, see the Appendix section at the end.
I will organize my argument for misfits under ten desiderata:
1. Cyber Insurance is Bought by and Sold to the Wrong People
Cyber insurance is bought by, and sold to, the same people who buy other types of business insurance. These folks are in Finance, Legal, or maybe the Corporate Risk department (if it exists). In any complex business-to-business sales process, the buyers are generally a network of people, taking different roles and having different interests, e.g. economic buyers, user buyers, technical buyers, and coaches (i.e. internal allies for the vendor's sales team). Many organizations even outsource the process of insurance buying because they lack the expertise internally. A high percentage of corporate insurance is sold through brokers, who try to serve the needs of insurance providers and consumers, though in reality their interests are closely tied to providers.
Whose budget does it come out of? Not out of the CIO's or CISO's budget, nor out of any Line of Business executive's. It comes out of some corporate budget under the CFO. Maybe it is allocated to business units, but probably bundled in with all the other corporate allocations.
Whose capital budget gets affected? Nobody's. Whose headcount increases or decreases? Nobody's.
When all the contract details are being negotiated, and all the forms and checklists are being generated and processed, who (in the buying organization) is mapping all that back to decisions and investments that influence security? Nobody.
Turn it around. Look at the most significant decisions and investments that affect security. Who makes those decisions? Who influences them? Who monitors them and gives feedback? Now that you have that set of people in mind -- how many of those people have any involvement in or awareness of the cyber insurance buying process? Close to zero.
In terms of affordances (see Appendix), the insurance industry does not have a sales/demand creation capability that is suited to the diffuse, poorly organized buying network in most organizations. What do both organizations do? They engage with the affordances that they are compatible with (same old sellers and buyers), just to make transactions work.
As a consequence, sellers don't get the corrective feedback they need, and neither do buyers. None of the people who are most important for cyber security have anything to do with the cyber insurance purchase process, and vice versa. The cyber insurance that gets bought -- with all the contract complexities, limits, and premium cost -- has a remote connection to what is actually going on in the organization, especially regarding key decisions and investments. Result: the products, services, and prices don't face effective selective pressure (in the evolutionary sense), and therefore they don't improve and instead stay stuck in the swamp.
2. The Information Basis for Underwriting Decisions is Weak
Underwriting is the process of setting coverages, limits, conditions, and premium prices. On what basis do insurers make these decisions for cyber insurance? In a large majority of cases, the information comes from questionnaires, checklists, audit results, or other qualitative evaluations, and only once per purchase cycle. (Some insurance companies are starting to incorporate ratings and scores from firms like Bitsight, RiskRecon, and Cyence, but it is not yet clear how this information will influence underwriting or premiums.)
I argue there are two sub-misfits. The first misfit is that none of the information that insurance companies get about a firm is contextualized. For example, what is the difference in risk between two hospitals: Hospital A that has two factor authentication for all users but also has many people with system administrator privileges; vs Hospital B that has very few system administrators but almost no two factor authentication? The answer (if there is one) depends on all the other information and security features of their organization. But the insurance company doesn't see any of the systemic nature of this. All they see are check boxes (yes/no), or categorical answers, or sometimes numbers.
The second misfit is that insurance underwriters are making coarse-grained decisions based on a hodge-podge of qualitative information that may or may not have anything to do with risk posture.
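To make the Hospital A vs. Hospital B point concrete, here is a toy sketch. The scoring rule and risk multipliers are entirely my own invention (no insurer works this way, as far as I know); the point is only that an additive checklist score is blind to interactions between controls:

```python
# Toy sketch, not any insurer's actual model: an additive checklist score
# cannot see interactions between controls, so two quite different risk
# postures look identical to the underwriter.

def checklist_score(answers):
    """Count 'yes' answers -- the coarse, context-free score that a
    questionnaire-based underwriting process produces."""
    return sum(answers.values())

def contextual_risk(answers):
    """Hypothetical context-aware risk with an interaction term: weak MFA
    plus many admin accounts is worse than either weakness alone."""
    risk = 1.0
    if not answers["two_factor_everywhere"]:
        risk *= 3.0
    if not answers["few_admin_accounts"]:
        risk *= 2.0
    if not answers["two_factor_everywhere"] and not answers["few_admin_accounts"]:
        risk *= 4.0  # interaction: stolen admin credentials with no second factor
    return risk

hospital_a = {"two_factor_everywhere": True,  "few_admin_accounts": False}
hospital_b = {"two_factor_everywhere": False, "few_admin_accounts": True}

for name, answers in [("Hospital A", hospital_a), ("Hospital B", hospital_b)]:
    print(name, "checklist score:", checklist_score(answers),
          "contextual risk:", contextual_risk(answers))
# Both hospitals score 1 on the checklist; their contextual risks differ.
```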
Several years ago, I had a conversation with someone at a conference who was knowledgeable about underwriting and risk management practices at diversified insurance firms. He said, basically, that cyber insurance was being treated as a "long term development" market, and that firms offering policies today were hoping to have good models of risk ten or twenty years from now. In the meantime, cyber risk was being bundled in with all the other "developmental risks", and as long as it, alone, wasn't generating too much in the way of losses, it would be OK for the insurers. In other words, cyber insurance was being priced like it was the table stakes at a (long term) poker game. (See #4, below.)
3. The Structure of Cyber Insurance Contracts is a Poor Fit
The structure of cyber insurance contracts reflects the interests and world view of insurance firms and their ecosystem. Like nearly every insurance contract these days, they are long and complex, with limits, exclusions, deductibles, co-pays, riders, footnotes, and exit clauses "up the wazoo".
While the putative economic value of cyber insurance is risk transfer, in reality insurance companies (and their reinsurers) do not want too much risk transferred, nor risk of the wrong kind (especially cascading, correlated risk).
But most firms and nearly all consumers do not really benefit economically from risk transfer. What they want (implicitly) is risk pricing so they can factor that into today's decisions and investments. All of the complexity of cyber insurance contracts makes it harder, not easier, for firms to do risk pricing, taking into account ALL of their costs under ALL contingent circumstances.
Imagine you wanted to estimate how much you would weigh at the end of the week, given how much food and exercise you have during the week. You buy a digital fitness & diet tracking device. GREAT! But then you find out that it will only report calories consumed while sitting down and motionless, in solid form, and not more than once per hour. And it will only report exercise when the sun is up and the temperature is between 55 and 65 degrees. Everything else is on you. THANKS FOR NOTHING!
4. The Claims Process is Too Expensive, Too Uncertain, and Not Frequent Enough
The capability for processing claims (administrative, legal, financial, etc.) is a core competency of insurance firms. But it is tuned for infrequent use by any given insured party. The claims themselves are processed with a combination of "factory" methods (for those that fit standard patterns) and "hand-crafted" methods (for everything else). Claims are expensive for insurance firms to process, even if they end up denying a claim. Claims are also expensive for insured parties to file, because filing almost always requires specialist resources at expensive hourly rates.
Rather than infrequent claims, what society needs is frequent reporting of breaches and even near-misses. Only then will we have data that is rich enough to model risk at all scales.
Then there are the uncertainties -- timing of resolution/payment and grant-vs-deny decision, not to mention any lawsuits or arbitration that might follow a denial of a large claim. There may even be "time bombs" hidden in the contracts that you discover too late.
Here's a somewhat hypothetical example.
Robert Morgus (in the video above) reported that 100% of the policies he surveyed have explicit exclusions for nation-state actors and terrorists ("acts of war" by non-state actors). But who has the burden of proof in any claim that the threat actor was definitely not a nation state or terrorist? This detail will be buried somewhere in the insurance contract and terms. It may be phrased in legalese so that non-specialists may not recognize it or fully understand it. Maybe it is ambiguous, or covered in some other blanket clauses.
Let's say your firm has a costly breach and files a big claim. But your insurance company denies it on the grounds that you have not sufficiently proved that the threat actor was not a nation state or terrorist. WTF? Like most firms, yours does not have sufficient capabilities in threat intelligence, digital forensics, and law enforcement/government agency relations to adequately do threat actor attribution. You are screwed.
Because insurance contracts are so complex (see #3, above), there may be dozens or hundreds of similar hidden "time bombs" that could lead to denial of a claim. While most people would expect some rate of claim denial for insurance, they mostly focus on what they, the insured, might do or not do that would lead to a denial. But there are other forces at work in cyber insurance that have less to do with a specific firm and claim, and everything to do with the norms, regulations, and strategies of diversified insurance businesses. Insurance firms have significant incentives to deny a specific claim if they think it will set an undesirable precedent for the rest of their portfolio and for future contracts.
[EDIT]
To clarify, I am not arguing that cyber insurance claims payment is any worse (or better) than in any other line of business insurance, or that it should be better in order to perform its risk transfer function. I am arguing that -- for the incentive system role (and risk information service role) -- the costs and uncertainties of claims processing are a detriment and a misfit, especially convoluted exclusions and other contractual clauses.
The misfit of the claims process is a specific instance of a more general misfit...
5. The Information Flow between Insured and Insurance Firms is Woefully Inadequate
Generally, what is the capability of insurance firms to take in and process information (i.e. make decisions) from all of their customers (the insured)? By the standards of the 21st century internet age, it's almost zero. Many insurance firms don't even have a direct relationship with the insured party. Instead, they go through brokers. It's even worse for reinsurers, who (economically speaking) may be carrying the largest portion of economic risk in cyber. Typically, the information comes in once per purchase cycle, with perhaps a "re-up" annually.
[EDIT]
Every limit and gap in coverage is also a gap in information flow between insured parties and insurers. Referring back to the hypothetical example in #4, if all contracts deny coverage in cases of nation-state actors and terrorists threat agents, then how will insurers ever get enough information to distinguish those types of attacks from all the others?
Insurance firms don't learn about near misses, early warning signs, or even evidence that the information they may have received is incomplete, ambiguous, or erroneous. Even those insurance firms that are subscribing to real-time rating or monitoring services will have a hard time making use of this information in a way that changes the value proposition for insured parties. Why?
One reason is regulation. In the US, each of the 50 states has laws and regulatory authority over insurance, including definitions of what is and isn't insurance. Folks involved in financial innovation (e.g. derivatives -- both OTC and traded -- and also real-money prediction markets) have run into this regulatory thicket. Insurance firms are on the inside, meaning that their ability to offer anything that looks different from "insurance" is quite limited.
6. The Cycle Time Between "Stimulus" and "Response" is Too Slow
Assume there is some new significant information about risk (being higher or lower) due to some event or signal. How long will it take for that information to percolate through the whole insurance ecosystem so that it is reflected in prices, coverage, or other relevant "incentives"? The answer is, at best, months, but more likely a year or more. Compare that to the response time of security markets, or even cash commodity markets, which respond in seconds to days.
Now, we should assume that this adjustment process is noisy and subject to overshoot or undershoot. How long will the system take to reach a new equilibrium? Five years? Ten years? And how many more "information shocks" will happen during that time? (Many)
For any adaptive system to maintain stability, it must have a response and settling time much shorter than the frequency of "shocks". Though it may be more responsive than the legal/regulatory system, the cyber insurance ecosystem is still too slow and unresponsive to be effective as a market information processing system.
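Here is a minimal simulation sketch of that point, with made-up dynamics and parameters: a premium that adjusts toward the "true" risk level with a first-order lag, while new information shocks keep arriving. It is not a calibrated model of any insurance market, just an illustration of why a long adjustment time plus frequent shocks means the price signal never settles:

```python
# Toy dynamics, not a calibrated model: if the adjustment time constant is
# longer than the interval between shocks, the priced risk never catches up
# with the true risk.

import random

def simulate(months, shock_interval_months, adjustment_months):
    random.seed(1)
    true_risk, priced_risk = 1.0, 1.0
    alpha = 1.0 / adjustment_months            # fraction of the gap closed per month
    tracking_error = []
    for m in range(months):
        if m % shock_interval_months == 0:     # a new event/signal changes true risk
            true_risk *= random.choice([0.5, 2.0])
        priced_risk += alpha * (true_risk - priced_risk)
        tracking_error.append(abs(priced_risk - true_risk) / true_risk)
    return sum(tracking_error) / len(tracking_error)

# Shocks every 6 months; an ecosystem that adjusts over ~18 months vs. ~1 month.
print("slow adjuster, average tracking error:", round(simulate(120, 6, 18), 2))
print("fast adjuster, average tracking error:", round(simulate(120, 6, 1), 2))
```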
7. Variation in Premiums and Coverage is a Noisy, Unreliable Signal
The folk wisdom about cyber insurance is that higher (lower) premiums and coverage will be strong signals and incentives relating to worse (better) cyber security practices and investments.
The reality is that there is only a loose connection between premium prices/coverage and real risk posture (if we can posit that such a reality exists).
Many parts of the insurance industry are cyclical -- notably Property & Casualty. The reasons are disputed (see here, here, here, and here, for example), but the consequences for incentives are clear. If premiums rise and fall 50% (or more) over the course of 6 years (a typical cycle period) for reasons that have nothing to do with any individual insured firm, what signal does that firm get each year? Especially if the difference due to its own security practices might be only 5% - 10%? And what is the signal when exclusions, deductibles, and caps appear and disappear due to pressures on the insurance firms (and their reinsurers)?
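A quick back-of-the-envelope illustration, using the 50% cycle swing from above and a 7% security discount (within the 5% - 10% range); everything else is invented:

```python
# Toy numbers: a ~6-year underwriting cycle with +/-50% amplitude swamps the
# premium difference that a firm's own security improvements produce.

import math

def premium(year, base, security_discount):
    cycle = 1.0 + 0.5 * math.sin(2 * math.pi * year / 6.0)   # the market cycle
    return base * cycle * (1.0 - security_discount)

base = 100_000  # hypothetical annual premium in dollars
for year in range(6):
    lax      = premium(year, base, 0.00)   # firm does nothing new
    diligent = premium(year, base, 0.07)   # firm adopts better practices
    swing    = premium(year, base, 0) - premium(year - 1, base, 0)
    print(f"year {year}: lax ${lax:,.0f}  diligent ${diligent:,.0f}  "
          f"cycle swing vs. prior year: {swing:+,.0f}")
```

The year-to-year movement driven by the cycle dwarfs the gap between the lax and diligent firm, which is the whole problem with reading premiums as a security signal.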
[EDIT]
Specialty Insurance lines may be less cyclical than P&C. Even so, they are prone to "hard" and "soft" market conditions where premiums are higher or lower than they would be otherwise because insurers are either rushing into the market, or fleeing the market (or holding back due to tight capital conditions). My point is that these premium fluctuations could be much wider than premium differences due to incrementally better or worse security practices/policies for a given firm.
[EDIT]
Even if premium pricing were not noisy and unreliable, as an incentive system it is easily gamed by the people who manage the relationship with insurers and brokers. Let's say that your executive in charge is named Cheap Bastard (C.B. for short) and your firm has a strong incentive plan to minimize insurance premiums. The easiest and surest way for C.B. to achieve the incentive goal is to under-insure -- deductibles that are too high, limits that are too low, and so on. After all, who knows what the right/best amount of insurance is for your firm? C.B. is the expert, right? Maybe your Board is too smart for that and closes that loophole by pre-specifying the basic features of coverage. No problem for C.B., because he can agree to dozens or more exclusions in the details of the contract. Only C.B. will know. The insurer will know, too, and will agree to much lower premiums while still fitting the basic requirements.
Aside from the contract, there are several other points in the process where C.B. can act as gatekeeper to limit or cut off spending that would trigger a premium increase. Let's say a big breach happens. C.B. chooses not to hire the big-name digital forensics firm, and instead hires his brother-in-law's small firm. No expensive external PR firm or external law firm, either. Instead, shift blame and costs onto customers, suppliers, contractors, and especially the specific employees who take the blame. Fire them in a very public way to terrorize the rest. Report authoritatively to the Board that "no systemic problems were found and no additional breaches were detected." Case closed. Stonewall the media. Tough out the stock market's reaction. Don't file an insurance claim if possible, so your firm won't be put into a "high risk pool" or suffer some other action that leads to significantly higher premiums. At the end of the year, C.B. declares victory and collects a big bonus check.
[EDIT]
Yes, that is a fairly extreme example of gaming the system. Every incentive system is susceptible to being gamed, and none is perfect. The purpose of this example is to highlight how the particular characteristics here provide affordances (see Appendix, below) for certain types of manipulation strategies.
[EDIT]
There are a host of less egregious strategies available to executives and key staff that would effectively muffle the "signal" that premium and coverage differentials might provide. For example, cyber insurance premiums could be bundled into a larger collection of risk and compliance-related costs, and this bundle becomes the basis for organization and individual metrics. Because cyber insurance premiums would be a small percentage of the total, any actual or potential variation would be muted. Analogy: like trying to sing while pressing a pillow into your face.
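The arithmetic of that muffling effect is simple. A quick sketch with hypothetical numbers (a $60K cyber premium buried in a $2M risk-and-compliance cost pool):

```python
# Tiny arithmetic sketch, all figures hypothetical: bundle a $60K cyber
# premium into a $2M "risk and compliance" pool and even a 30% premium
# swing barely moves the metric anyone is actually measured on.

cyber_premium = 60_000
bundle_total  = 2_000_000
premium_swing = 0.30   # a +/-30% change in the cyber premium

visible_change = cyber_premium * premium_swing / bundle_total
print(f"Change in the bundled metric: {visible_change:.2%}")   # ~0.90%
```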
8. Risk Retention Needs to be Promoted, Not Risk Transfer
Most firms don't really need risk transfer. Then why are they paying for it (voluntarily or involuntarily)? Doesn't that take everyone's eye off of the real success factor: smart and effective risk retention?
Broadly speaking, society benefits when the parties best positioned to manage a risk retain that risk. Yes, society benefits when excess risk is transferred and diversified. But when it comes to the operations and practices that give rise to risk in the first place, society does not want that risk sloughed off onto other parties (insurers, customers, suppliers, employees, etc.).
Consider this scenario. Your company is offered two insurance policies with identical coverage but premiums of $100K vs. $50K, and the difference depends on you implementing a list of 45 best practices that you don't do now.
But then along comes an Angel benefactor who offers your firm $50K to implement the 45 practices, and $0 if you don't.
With the Angel, there is no hassle with contracts, deductibles, exclusions, caps, etc., and you don't have to pay $50K per year for coverage you don't really want or need.
Wouldn't the Angel's incentive payment be a much simpler, clearer external incentive than cyber insurance? And your firm would be retaining the risk, which is what you should have been doing all along.
I'm not advocating this sort of direct payment incentive scheme, but I offer it to shine a spotlight on the cumbersome, costly baggage that cyber insurance brings while riding the horse of "incentives for best practices".
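For the record, here is the back-of-the-envelope arithmetic, using only the hypothetical figures above:

```python
# Back-of-the-envelope arithmetic with the hypothetical figures from the
# scenario above. Both routes give the firm the same $50K reward for
# adopting the 45 practices; the insurance route also charges for coverage.

premium_without_practices = 100_000
premium_with_practices    = 50_000
angel_payment             = 50_000

insurance_incentive = premium_without_practices - premium_with_practices
angel_incentive     = angel_payment

print("Marginal reward for adopting the practices (insurance):", insurance_incentive)
print("Marginal reward for adopting the practices (Angel):    ", angel_incentive)
print("Annual premium still owed under the insurance route:   ", premium_with_practices)
```

Same incentive, minus the contract baggage, and the risk stays where it belongs.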
9. The Risk Models that Insurance Firms are Developing are Not What The Rest of Us Need
What firms need to make better decisions is a model of their own risk, given their unique circumstances and alternatives. But that is not what insurance firms are modeling.
Like all risk transfer insurance, insurers are modeling the risk associated with a portfolio of contracts. What they care most about is the risk of the whole portfolio over time, not what any individual firm experiences. (This is core to the business model of nearly all insurance businesses.) While they do care about the probability distribution of losses (claims) for firms in a given class or portfolio, what they care about much more are correlated losses that cause "excess claims" and "ruin" for the portfolio. This is where their best modeling minds and resources are working.
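Here is a minimal Monte Carlo sketch of that distinction, with invented parameters: two books of business in which every firm has the same 2% marginal chance of a breach in a year, differing only in whether breaches are correlated through a common "bad year" event. The per-firm experience is identical; the insurer's tail risk ("excess claims" for the portfolio) is not, and that tail is what their best modeling resources chase:

```python
# Toy portfolio model (all parameters invented): the insured's expected loss
# is the same in both scenarios; only correlation differs, and correlation is
# what drives the insurer's probability of an "excess claims" year.

import random

def simulate_year(n_firms, q_shock, p_shock, p_base):
    # With probability q_shock, a common event raises every firm's breach odds.
    p = p_shock if random.random() < q_shock else p_base
    return sum(1 for _ in range(n_firms) if random.random() < p)

def run(label, q_shock, p_shock, years=20_000, n_firms=200, marginal=0.02):
    random.seed(7)
    # Choose the baseline so each firm's marginal breach probability is 2%
    # in both scenarios.
    p_base = (marginal - q_shock * p_shock) / (1 - q_shock)
    totals = [simulate_year(n_firms, q_shock, p_shock, p_base) for _ in range(years)]
    per_firm = sum(totals) / (years * n_firms)          # what the insured "feels"
    p_excess = sum(t > 15 for t in totals) / years      # what the insurer fears
    print(f"{label}: per-firm breach rate {per_firm:.3f}, "
          f"P(excess-claims year) {p_excess:.3f}")

run("independent book", q_shock=0.0, p_shock=0.0)
run("correlated book ", q_shock=0.1, p_shock=0.10)
```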
Now, there has been a trend in the last 10 years toward "predictive modeling" (e.g. see here) in casualty insurance, including some experimentation in real-time monitoring of automobile drivers and others. There is a possibility that cyber insurance could evolve in this direction, but given all the countervailing forces, it's not a sure bet.
The main message here is that the core competencies (a.k.a. capabilities) of the insurance industry, plus the regulatory structure they operate in, create clear incentives for insurers to build and refine risk models that serve the insurer, not the insured.
10. Cyber Insurance Doesn't Fit Networked, Interdependent Risk
For some firms and agencies, their biggest cyber risk is related to their being part of a service network -- i.e. critical infrastructure such as power, water, communications, etc. They also face nation-state threat agents and terrorist threat agents, both explicitly excluded in all cyber insurance policies. What they need is some way of pricing and pooling risk that embraces the interdependent nature of their risks rather than shunning it.
So far, I have discussed cyber insurance as though the only type of insured party were an end-user of information technology. But what about all the product and service vendors of information technology? Many firms have side businesses or complementary services that are, in effect, information technology services. Supply chain cyber risk is well-recognized and widely studied. But it raises a whole new class of interdependence between firms in the network -- events, controlling parties, causation, and so on -- all of which are not well-matched by today's cyber insurance industry.
In Closing
To reiterate: my argument is not that cyber insurance is bad or broken as a traditional risk transfer institution. It may be fine for that function. But that is not the main reason that policy people are promoting it.
I'm also not painting insurance firms or insurance brokers as "bad guys" or somehow incompetent. They are working within a well established set of capabilities and institutions, doing what they are inclined to do.
That's all for now. Look for my follow up posts where I will dive into these issues in more detail, including some computational modeling.
Appendix: Analytic Framework for Functional Ecosystems
An analytic framework can help us structure the evidence and arguments, drawing from Organization Science and Ecological Psychology. This framework is an ontology for what I am calling "Functional Ecosystems" to contrast it with reproductive, energetic, or material ecosystems.
Actors (firms, people, etc.) are agents with purposes and values, usually because they are operating within personally and socially meaningful roles.
A function is the reason or purpose for doing something. "Function" establishes goals or metrics or indicators of good vs. bad performance, and also places the activity in some greater context of purpose or strategy.
A capability is a general ability to perform some function to some degree of excellence. Capabilities include knowledge, experience, and supporting resources (time, money, people, etc.).
A routine is any process, procedure, or algorithm that can be carried out in a step-by-step fashion to some conclusion. The term "routine" is not meant to imply "typical" or "unexceptional". Think of it as a "computer subroutine", where the "computer" happens to be some combination of people, process, and technology. Any given routine will be associated with one or more capabilities, and each capability will have a portfolio of supporting/relevant routines to carry out specific actions.
A characteristic is a feature of something (an object, a service, a person, an organization) that can be detected, sensed, and/or distinguished, and also has some relevance to the actor or agent.
(Crucial!)
An affordance is an interrelation between an actor and its environment (incl. other actors) that serves as a resource for action, or facilitates action, given the actor's capabilities, intentions, purposes, and attention. Affordance seems obvious when interacting with physical objects (it's not!), but it gets really interesting when interacting with signs, signals, and information (100% the case in cyber insurance).
Interrelations are more than interfaces or protocols for communication or action. They are meaning-creating interactions that make purposeful action possible, given the complexities and details of any given situation.
(Putting these all together)
Actors utilize capabilities to carry out their functions. Capabilities are enacted by selecting, configuring, and deploying specific routines, whose characteristics are an appropriate or serviceable match to the affordances that they perceive and engage with.
A Simple Example
You are probably familiar with the phrase: "Use the right tool for the job." Some of you may even know this saying: "Don't bring a knife to a gun fight." They both allude to the performance benefits of having tools that are a good fit to the job at hand. The following simple (simplistic) example will, hopefully, make the analytic framework clear.
Imagine you are 4 years old. You are very hungry and want to eat. You have four types of familiar food in front of you:
- Hot soup (broth-like with no solid chunks)
- Chicken meat (whole breast)
- A glass of cold milk
- Green peas
You have three utensils available, and (for some reason) can't use your hands directly:
- Fork
- Spoon
- Straw
The function you want to perform is "eat food". Your capabilities include 1) manual manipulation of hand-sized objects; and 2) eating. Your routines include: 1) grasp, 2) rotate, 3) aim, 4) press, 5) lift, and so on. The characteristics of the food include 1) phase (solid vs. liquid), 2) size, 3) density, 4) temperature, 5) odors, etc. The characteristics of the utensils include 1) weight, 2) sharpness, 3) receptacle capacity, 4) topology, etc.
The affordances arise through the interrelations between you (as actor and tool user), your chosen utensils, and the characteristics of the food that you try to eat, with a given technique (a.k.a. routines). Adults take this for granted because they are so habitual and unconscious, but young children (if not instructed) will poke around to try to find some sort of affordance that will fulfill their function.
Imagine you, as a four year old, try to eat the chicken meat with the straw. Maybe if you jab at it hard enough, you will gouge out some chicken meat inside the straw. If the straw is both stiff enough and sharp enough (characteristics), you might succeed in spearing the meat like a knife, and might then attempt to bring it to your mouth. Nearly all plastic straws will be too weak for this.
We can continue on with this exploration process, and eventually, you (4 year old) will learn which utensils work best for which types of food.
The fourth food type -- green peas -- is a very interesting case, because experience might reveal that all three utensils have sufficient characteristics to be used to eat peas (assuming the straw diameter is large enough). But while eating peas with a spoon is almost the same as eating soup (i.e. same capability and nearly the same routines), eating peas with a straw is very different from drinking soup or milk with the same straw. Notice also that almost no adults would ever consider eating peas with a straw, because they are so acculturated in what utensils are used for what foods, and in the cultural norms that go against "rude" behavior of inappropriate utensil use (and sounds). Therefore, affordances are not simply what we see or believe to be possible or appropriate. But they also aren't purely "in the world". They require active engagement in order to come into being.
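For readers who like things concrete, here is a toy encoding of the utensil example (my own illustration, with invented characteristic values). The point is that the affordance emerges from the interrelation between routines and characteristics, not from the utensil or the food alone:

```python
# Toy encoding of the framework's vocabulary: characteristics of foods and
# utensils, plus a rule for whether the combination affords the "eat" function.
# All values are invented for illustration.

foods = {
    "soup":    {"phase": "liquid", "piece_size": 0.0},
    "chicken": {"phase": "solid",  "piece_size": 8.0},   # cm, whole breast
    "milk":    {"phase": "liquid", "piece_size": 0.0},
    "peas":    {"phase": "solid",  "piece_size": 0.8},
}

utensils = {
    "fork":  {"holds_liquid": False, "pierces": True,  "bore": 0.0},
    "spoon": {"holds_liquid": True,  "pierces": False, "bore": 0.0},
    "straw": {"holds_liquid": False, "pierces": False, "bore": 1.0},  # cm diameter
}

def affords_eating(utensil, food):
    """Does this utensil-food interrelation support the 'eat food' function?"""
    u, f = utensils[utensil], foods[food]
    if f["phase"] == "liquid":
        return u["holds_liquid"] or u["bore"] > 0                 # scoop or suck
    return (u["pierces"]                                          # spear it
            or (u["holds_liquid"] and f["piece_size"] < 2.0)      # scoop small solids
            or (0 < f["piece_size"] <= u["bore"]))                # fits up the straw

for food in foods:
    usable = [u for u in utensils if affords_eating(u, food)]
    print(f"{food:8s} -> {usable}")
# soup and milk: spoon or straw; chicken: fork only; peas: all three utensils.
```

Notice that eating soup with a fork fails this test, which is exactly the misfit metaphor in the tl;dr above.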