
False alarms: Analyzing your leading risk management indicators

To mitigate risk, it’s necessary to validate the risk management indicators specific to your organization. Here’s how to do it, and why skipping this step could undermine your GRC program.

When profiling risk management indicators at your organization, are you sure about what triggers your risks, or are you guessing? If your assumptions are incorrect, what do you think this does to the integrity of your risk assessment? Furthermore, how do you think this compromises your entire governance, risk and compliance (GRC) system? If these questions are unsettling for you, it behooves you to spend some time validating these risk management indicators.

John Weathington

Risk triggers are a critical component of risk management efforts, which in turn are a vital subset of an entire GRC program. Similar to the way midlevel management connects lower management to executive management, risk connects governance to compliance. Risks uncover compliance objectives while simultaneously putting management efficacy (i.e., governance) into perspective. This middle ground is based on uncertainty -- the probability and impact of an unknown future event -- which we label risk. Once characterized, risk must be controlled (hence the discipline of risk management). Consequently, to control risk properly, the causes of these risks must be thoroughly understood.

The mistake I’ve seen most organizations make with risk triggers is assuming without validation. For instance, if an organization is trying to control risk due to improper discounting by employees, staff members may brainstorm internally about what would potentially cause a salesperson to offer an inappropriate discount. Insufficient education may be one cause to surface, which would then spawn a series of prevention controls (e.g., more training) and contingent controls (e.g., adding a step in the process to catch and adjust the discount prior to issuing the purchase order). The problem is that nobody validated that insufficient education was the cause of improper discounting. Of course, if insufficient education is not a valid trigger for improper discounting, the organization is wasting resources on the overhead from maintaining these controls.

Validation of risk management indicators is not as easy as it may seem. Validation means proving causation, not just correlation. If cars are crashing every time people walk around with umbrellas, the umbrellas are not causing the car crashes -- the rain is. The incidence of umbrellas is correlated with increased car accidents. However, the cause of car accidents is the rain. This is an important distinction. If we wanted to control this situation, we wouldn’t focus on controlling the umbrellas -- we would focus on controlling the rain. To understand correlation, you can look at data in the past and use techniques like regression. But to understand causation, you must design an experiment.
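The umbrella example is easy to see in data. The sketch below uses entirely synthetic numbers (all figures invented for illustration) to show the trap: umbrella sightings and car accidents correlate strongly overall, but once you hold the true cause -- rain -- constant, the correlation disappears.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily data: rain is the hidden common cause (confounder).
rain = rng.binomial(1, 0.3, size=365)          # 1 = rainy day
umbrellas = rain * 80 + rng.poisson(5, 365)    # umbrella sightings per day
accidents = rain * 12 + rng.poisson(3, 365)    # car accidents per day

# Umbrellas and accidents correlate strongly across all days...
r_overall = np.corrcoef(umbrellas, accidents)[0, 1]

# ...but conditioning on the true cause (looking at dry days only)
# makes the apparent relationship vanish.
dry = rain == 0
r_dry_only = np.corrcoef(umbrellas[dry], accidents[dry])[0, 1]

print(f"correlation, all days:      {r_overall:.2f}")
print(f"correlation, dry days only: {r_dry_only:.2f}")
```

Regression on historical data would happily report the first number; only by controlling for the confounder do you see that umbrellas have no causal role.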

A risk(y) experiment

In a designed experiment, independent variables are altered to see how the dependent variables are affected. You look at potential causes of risk and methodically vary each one (and combinations thereof) to see how they affect the risk outcome. Returning to our previous example, we would purposely refrain from providing a salesperson with education and measure the occurrences of improper discounting. Then we would educate the salesperson and measure again, noting the statistical difference between the two measures. Of course, this is an oversimplified version of the process for illustrative purposes; an actual experiment would involve multiple variables, multiple salespeople and a very structured process.
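How might "noting the statistical difference" look in practice? One common approach is a two-proportion z-test on the improper-discount rates of the untrained and trained groups. The counts below are assumptions for illustration, not real data:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical experiment results (all counts assumed):
#   group A: deals handled by salespeople with no discounting education
#   group B: deals handled by salespeople who completed the training
deals_a, improper_a = 400, 48   # 12% improper discounts without training
deals_b, improper_b = 400, 20   # 5% improper discounts with training

p_a = improper_a / deals_a
p_b = improper_b / deals_b

# Pooled two-proportion z-test: is the difference statistically significant?
p_pool = (improper_a + improper_b) / (deals_a + deals_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / deals_a + 1 / deals_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"improper-discount rate: {p_a:.1%} untrained vs {p_b:.1%} trained")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value here would support insufficient education as a genuine trigger; a large one would suggest the brainstormed cause was a false alarm and the training controls are overhead.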

Since you’re deliberately creating risky events (which will presumably have adverse impacts), it’s important to do this in a controlled environment. A critical component of controlling the environment is the contingent control landscape. Contingent controls are measures put in place in advance to absorb or correct a risk’s impact if it materializes; any insurance policy is an example. In our example above, the extra step of catching and correcting the discount before the purchase order is issued nullifies the impact of the salesperson’s improper discounting. Another technique for controlling your experiment’s environment is dilution -- ensuring any incidence of risk in the control group has an insignificant practical impact on the organization because of the group’s limited size. In an organization of 100 salespeople, improper discounting by four or five salespeople may not significantly affect the organization’s overall goals.
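Dilution is ultimately a sizing exercise: cap the control group so that even worst-case losses stay below a materiality threshold. A minimal sketch, with all revenue and leakage figures assumed for illustration:

```python
# Hypothetical dilution sizing (every figure below is an assumption):
total_salespeople = 100
quarterly_revenue_per_rep = 250_000    # average revenue per salesperson
worst_case_leakage = 0.04              # improper discounts cost up to 4% of a rep's revenue
materiality_threshold = 0.002          # 0.2% of total revenue is tolerable loss

total_revenue = total_salespeople * quarterly_revenue_per_rep
loss_per_rep = quarterly_revenue_per_rep * worst_case_leakage

# Largest untrained control group whose worst-case loss stays immaterial.
max_control_group = int(materiality_threshold * total_revenue // loss_per_rep)
print(f"max control-group size: {max_control_group} salespeople")
```

With these assumed numbers the cap lands at five salespeople, in line with the "four or five" intuition above; your own thresholds will differ.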

Although this may seem like an extreme amount of effort, validating risk management indicators pays significant rewards in the ongoing integrity and maintenance of your risk management program. Once you understand what really causes your risks, you can focus your energy on installing more powerful and effective preventive controls that address the heart of why the risk occurs. An insurance policy is nice, but it’s better if the fire never happens.

Analyzing indicators is necessary for compliance professionals responsible for a risk management program, and vital for its integrity. Designing and conducting causation experiments in a controlled environment, using contingent controls and dilution, is the best way to learn what causes risks to occur. You can start by choosing two or three critical risks and launching a project to analyze their causes.

John Weathington is president and CEO of Excellent Management Systems Inc., a San Francisco-based management consultancy. Write to him at

