On Twitter recently, Phillip Beyer (@pjbeyer) asked: "how do you measure risk appetite in program early stages?". I gave my answers in a series of tweets, but this question comes up a lot so I think it's worthy of a blog post.
[Edit: Feel free to substitute the term "risk tolerance" for "risk appetite". They have slightly different origins, but their interpretation in this context is the same.]
First, some people have an aversion to the concept of "risk appetite", and others deny that it even applies to information security (or, more broadly, to cyber security). The argument goes that no rational manager or organization desires to take on information security risk if they could avoid it, and therefore there is no such thing as an appetite for risk. A different argument against it is based on the belief that risk in information security is not quantifiable, and therefore attempts to quantify risk appetite are similarly impossible or meaningless.
I believe both of these positions are mistaken. The first objection rests on a misunderstanding of what "risk appetite" really means and how it applies to information security, which I'll explain and, hopefully, clarify. Later in this post, I'll also address the second objection and show how risk appetite can be reliably quantified.