
Metrics vital to insider threat prevention and mitigation

Insider threat prevention has become inherent to cybersecurity strategy, but companies must use the right metrics to determine whether their efforts are working.

An organization's employees remain one of the biggest threats to its information security -- as well as the first line of its cybersecurity defense. But as companies adopt rapidly evolving tools, such as those that monitor employee behavior to identify nefarious activity, questions remain about which metrics will show whether these new strategies are working.

During a recent virtual trade show (VTS) sponsored by TechTarget and ISACA, information governance expert and frequent TechTarget contributor Jeffrey Ritter discussed the strategies modern, digitized companies are using to offset these risks in his webcast presentation, titled "Cybersecurity: Mitigating the Risk of Insider Threats." Here, Ritter answers VTS audience questions pertaining to insider threat prevention best practices that he did not get to answer on the day of the show, including how metrics can help companies determine if these strategies are working.

How can you show insider threat prevention efforts are working effectively?

Jeffrey Ritter: The key is building the right metrics. The question is a good one, because finding metrics that serve to describe effective prevention can be hard. In some ways, you are asking the metrics to prove a negative.

I recommend companies begin by identifying historic costs that flow directly from adverse insider behavior, and then track those same costs going forward as a comparison. For example, if a company spent $1.5 million in the last year on outside legal counsel investigating insiders, that becomes one useful metric.
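The historic-versus-forward-looking comparison Ritter describes reduces to a simple calculation. The sketch below illustrates it; the function name and all dollar figures other than the $1.5 million example from the interview are hypothetical.

```python
# Minimal sketch of the cost-comparison metric described above.
# Figures other than the $1.5M example are illustrative assumptions.

def cost_reduction(historic_cost: float, current_cost: float) -> float:
    """Return the fractional reduction in insider-related spend."""
    return (historic_cost - current_cost) / historic_cost

# e.g., $1.5M on outside counsel last year vs. a hypothetical $900K
# after new insider threat controls are in place
reduction = cost_reduction(1_500_000, 900_000)
print(f"Insider-related spend reduced by {reduction:.0%}")  # prints "40%"
```

The same calculation applies to any cost category the company already tracks, such as investigation hours or incident-response spend.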


The biggest sources of real costs, however, are internal expenses that are not usually tracked: supervisor time, training time, employee group meetings, information security resources devoted to evaluating access logs and abuse of role-based privileges, and so on. All of these are really good metrics, if a company is tracking them already. The task is harder when a company has not had a historical awareness of the risks, and costs, of insider threats.

If one has behavior intelligence, should employees be made aware of it as a deterrent, or should it be hidden to avoid a privacy backlash?

Ritter: Local privacy laws, especially in Europe and comparable jurisdictions that closely tie privacy, surveillance and employment together, can influence the answer. Generally, transparency always wins. If you are going to track behavior, notifying employees can be an effective deterrent. You just want to be sure the monitoring looks at the right behavior -- there is nothing worse to employee morale than a pattern of false positives that are not actually identifying malicious, adverse behavior.

Which is greater: insider or external threats -- based on pure incident volume, impact and risk?

Ritter: The answer depends on what you are counting. Insider behavior is often cited as the threat with the greatest impact. But if we are counting attempts, many companies witness external attempts to breach the security of their systems that number in the thousands each day.


Do you feel that overt monitoring of insiders may lead to employees avoiding this active monitoring?

Ritter: It's hard to say. Monitoring that seems oppressive and unjustified surely depletes employee morale. But employees understand that, as a condition of employment, they agree to work within a set of rules that create safe working conditions. Those rules, of course, can include cybersecurity values.

The best strategy is to make sure new monitoring is justified, whether by direct incidents inside a company or general industry statistics -- or both -- and explain them to the workforce. No one questions why hard hats are required on building construction sites. The same should be true for cybersecurity.

What should the IT organization do to ensure the IT security team can meet its objectives when it comes to insider threat prevention?

Ritter: Metrics remain vital here as well. The security objective must be one that can be expressed in ways that the related behavior or costs can be measured. As I mentioned, economics can be calculated that show how good internal security improves the business objectives of the company. The IT organization needs to support IT security by making sure the security team is getting the metrics they need to prove their value.

Would the creation of a baseline for user behavior be based on a statistical model of activity or some kind of user profile against which activity might be compared?

Ritter: The answer is yes to both, which may be a bit confusing. Any baseline of behavior will, itself, be a description of acceptable and unacceptable behavior. That description, in turn, will include some metrics or supporting statistics about what number of false positives will be acceptable.

For example, suppose warehouse floor workers are required to access data systems from only a single authorized workstation at the end of the warehouse. However, each worker also has a handheld device, generally used for scanning bar codes, but also capable of accessing basic shift schedules, staffing assignments, etc.

How many improper uses of the handhelds will be tolerated? When will the frequency of those uses become a marker of other improper behavior, such as inattention to actually moving pallets into long-term storage inside the warehouse? The answer is statistical.

Do you know of any companies that are tracking their employees' device usage for security purposes? If so, how many?

Ritter: There are quite a few companies that have developed monitoring of employee device usage, including some systems that monitor keystroke behavior in order to freeze or suspend activity even before it is completed. Other tools use artificial intelligence to monitor behavior and, over time, develop rules that then tag acceptable, unacceptable and suspicious behavior.
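The three-way tagging Ritter mentions can be pictured as a set of learned or configured rules applied to each event. The toy sketch below is purely illustrative; the event fields, actions and rules are assumptions, not any real product's API.

```python
# Toy sketch of rule-based behavior tagging of the kind described above.
# Event fields ("action", "authorized") and the rules are illustrative
# assumptions; real tools would learn such rules from observed behavior.

ACCEPTABLE_ACTIONS = {"scan_barcode", "read_schedule", "view_assignment"}

def tag_event(event: dict) -> str:
    """Classify a single usage event as acceptable, unacceptable or suspicious."""
    if event["action"] in ACCEPTABLE_ACTIONS:
        return "acceptable"
    if event["action"] == "bulk_export" and not event["authorized"]:
        return "unacceptable"
    # Unknown patterns are escalated for human review rather than blocked.
    return "suspicious"

print(tag_event({"action": "scan_barcode", "authorized": True}))    # acceptable
print(tag_event({"action": "bulk_export", "authorized": False}))    # unacceptable
print(tag_event({"action": "login_after_hours", "authorized": True}))  # suspicious
```

The "suspicious" middle category matters in practice: routing uncertain events to review, rather than blocking them outright, is one way to limit the morale-damaging false positives Ritter warns about.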
