Organizations track risk for good reason. Risk is a way of measuring the effects of uncertainty, and the resulting opportunities and potential pitfalls. When risk is measured, it can be managed, and organizations that actively manage risk are better positioned for success, today and in the future.
So, why does it so often seem like risk assessments are just another “check the box” activity? If risk assessment is so valuable, why do so many people within organizations (including senior leaders) make decisions without considering what their own assessment processes have to say?
Part of the answer is presentation. The results of risk assessments tend to be reported in compliance terms, highlighting deficient areas against a backdrop of “good enough.” This is intentional: an acceptable risk is, by definition, “good enough” rather than ideal. But when risk assessment is done from a compliance perspective, it tends to understate areas of opportunity in favor of emphasizing areas needing improvement. Strategic decisions, in contrast, tend to be based on perceived growth value. The resulting misalignment can make risk assessment seem more like an audit: a backward-looking exercise designed to highlight shortcomings.
Differences in language also play a role in marginalizing risk assessment. Many organizations evaluate vulnerability in an information technology context, and IT tends to use different language and standards of measurement than those favored by business leaders. Even as IT has shifted from a supporting role to a central function that enables the business, its rank-and-file employees tend to be “techies” who see every vulnerability as something to remediate; cost-benefit analysis may not factor into decisions at their level.
Risk practitioners sometimes diminish the value of their own work products through the choices they make in creating them. Executives view operations in monetary terms (profit and loss, or at least solvency) and can readily act on metrics reported in money. In contrast, most risk assessments are qualitative or semi-quantitative. These approaches are inherently more subjective than an assessment based on quantitative data, and their results can be difficult to translate across different levels of an organization: what is “high risk” to a department manager may be less concerning to the CEO. People recognize this, and they limit their reliance on the results accordingly. Assessments end up being undertaken for their own sake, their results written into a risk register (often by the only people who know where to find the risk register) while the organization moves forward on instinct, without the assessment process to guide decisions.
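One long-established way to report risk in the monetary terms executives favor is annualized loss expectancy (ALE): the cost of a single incident multiplied by how often it is expected per year. The sketch below is a minimal illustration; the scenario and dollar figures are hypothetical assumptions, not real data.

```python
# Quantitative risk expressed in money: annualized loss expectancy (ALE).
# ALE = SLE (cost of one incident) x ARO (expected incidents per year).

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Return the expected annual loss in dollars."""
    return sle * aro

# Hypothetical scenario: an incident costing $250,000 per occurrence,
# expected roughly once every five years (ARO = 0.2).
ale = annualized_loss_expectancy(sle=250_000, aro=0.2)
print(f"Expected annual loss: ${ale:,.0f}")  # Expected annual loss: $50,000
```

A figure like “$50,000 of expected annual loss” translates directly into the profit-and-loss language of the boardroom, where a ranking of “3 out of 5” does not.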
It does not have to be that way. Instead of approaching risk assessment as something to be done for its own sake, risk practitioners can design and implement their processes with useful results in mind. Take the time to assemble quantitative data whenever possible. Where you must use subjective rankings, clearly define their thresholds so any audience can understand their implications. Avoid applying formulas to semi-quantitative rankings, because arithmetic on ordinal rankings produces results that diverge greatly from the realities those rankings represent. Reconcile technical language with the standard business lexicon, and use the language of business to present results.
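To see why arithmetic on semi-quantitative rankings misleads, consider a hypothetical 1-to-5 impact scale whose bands actually span orders of magnitude in dollar terms. The scale and figures below are illustrative assumptions invented for this sketch:

```python
# Hypothetical 1-5 impact scale where each band covers a dollar range.
# Rank arithmetic treats rank 5 as exactly 5x rank 1, but the underlying
# dollar ranges differ by thousands of times. All figures are illustrative.

impact_bands = {  # rank -> (low $, high $) of the band
    1: (1_000, 10_000),
    2: (10_000, 100_000),
    3: (100_000, 1_000_000),
    4: (1_000_000, 5_000_000),
    5: (5_000_000, 10_000_000),
}

likelihood = 3  # rank on a 1-5 scale
impact = 5      # rank on a 1-5 scale

# The common "risk = likelihood x impact" shortcut yields a score of 15,
# implying neat proportionality between ranks.
score = likelihood * impact

# The dollar bands tell a different story: rank 5 starts 5,000x higher
# than rank 1, not 5x.
ratio_by_dollars = impact_bands[5][0] / impact_bands[1][0]
print(f"Rank math implies 5x; the dollar bands imply {ratio_by_dollars:,.0f}x.")
```

The ranks are labels for ranges, not quantities, so multiplying them manufactures precision the underlying data never had.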
When you see the people who make strategic decisions for your organization consulting the results of your risk assessments to inform their choices, you will know that you have gotten it right.
Editor’s note: For additional insights on this topic, download ISACA’s Conducting an IT Security Risk Assessment white paper.