In our latest From the Archives piece, Manoj Kulwal from RiskSpotlight identifies some of the most common human-related factors that can influence the accuracy of risk assessments. This article was originally published in issue 66 of The Risk Universe magazine in June 2017.
The effectiveness of any risk management initiative is determined by the people involved in key activities such as risk identification, risk assessment and selecting appropriate risk responses. Research on human behaviour in situations involving uncertainty suggests that, when under pressure, we often do not behave rationally or consistently. This should be taken into consideration when designing and implementing any risk management activity to ensure that its effectiveness is not adversely impacted. In this article, I will share some of the key human factors that should be considered when designing and implementing risk management activities.
Factor 1: Humans are better at assessing frequency than probability
Research shows that the human mind has evolved to be able to assess the frequency of events. Probability, on the other hand, is a relatively modern concept in the context of the millions of years over which the human mind has evolved, so humans are not naturally good at assessing it. While some organisations use frequency for assessing risks, many use likelihood. As likelihood involves the assessment of probabilities, users involved in this type of assessment may be affected by this factor.
If your organisation uses likelihood for assessing risks, consider training your risk owners to use the decision tree tool, which is very effective for assessing probabilities. You could also consider using the calibration techniques shared by Douglas Hubbard in his book, How to Measure Anything, to enhance your risk owners’ confidence in assessing probabilities.
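As an illustration of the link between the two scales, an observed frequency can be translated into the probability of at least one occurrence over a chosen horizon. The sketch below assumes events arrive independently at a constant rate (a Poisson process); the event rate is invented for illustration:

```python
import math

def probability_of_occurrence(events_per_year: float, horizon_years: float = 1.0) -> float:
    """Convert an average event frequency into the probability of at
    least one occurrence over the horizon, assuming events arrive
    independently at a constant rate (a Poisson process)."""
    return 1.0 - math.exp(-events_per_year * horizon_years)

# A hypothetical incident observed roughly once every four years (rate 0.25/year):
p = probability_of_occurrence(0.25)
print(f"Annual likelihood: {p:.1%}")  # about 22%, not the intuitive 25%
```

The gap between the intuitive answer (25%) and the derived one (about 22%) illustrates why translating between frequency and likelihood scales benefits from an explicit rule rather than gut feel.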
Factor 2: Bogged down by complex situations
Complex situations involve many factors connected in dynamic cause-effect chains. Examples of risks involving complex situations include cyber risk, financial crime and conduct risk. The human mind cannot intuitively process all of the individual factors in such situations, which may prevent the team from fully understanding and assessing the risks involved and from accurately estimating key parameters such as likelihood or financial impact.
To address this, consider using cause-effect chains or causal loops from the ‘system dynamics’ discipline. Such tools allow visualisation of factors in a complex situation and facilitate an enhanced understanding of risks.
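A simple way to make such chains explicit is to record each factor's direct effects and enumerate the paths from a root cause through to the final impacts. A minimal sketch; the cyber-risk chain below is invented for illustration:

```python
# Directed cause-effect chain: each factor maps to its direct effects.
# The chain itself is a hypothetical cyber-risk example.
chain = {
    "phishing email": ["credential theft"],
    "credential theft": ["unauthorised access"],
    "unauthorised access": ["data breach", "fraudulent payment"],
    "data breach": ["regulatory fine", "reputational damage"],
}

def paths(cause, chain, trail=None):
    """Enumerate every cause-effect path starting from `cause`."""
    trail = (trail or []) + [cause]
    effects = chain.get(cause, [])
    if not effects:  # terminal impact reached
        return [trail]
    result = []
    for effect in effects:
        result.extend(paths(effect, chain, trail))
    return result

for p in paths("phishing email", chain):
    print(" -> ".join(p))
```

Even this tiny example surfaces three distinct impact paths from a single root cause, which is exactly the kind of structure that is hard to hold in one's head during a risk workshop.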
Factor 3: Halo effect
The halo effect is experienced when a higher level of confidence is attributed to views from certain individuals or departments. This may be either due to the seniority level of individuals within the organisation, or their level of expertise on certain topics (such as cyber risks). For example, if a highly respected head of a business division mentions in a risk workshop that certain risks cannot possibly occur within the organisation, then the rest of the participants may accept this view and not discuss these risks any further. This may result in creating blind spots in the understanding of relevant risks.
There is another version of this effect called the ‘negative halo effect’. This is experienced when certain individuals or departments have a reputation for not managing certain risks effectively, and knowledge of this influences the risk assessment outcomes. For example, if a department has experienced multiple incidents over the past two years, it may have earned a reputation for poor risk management. Due to this, all of its risk assessment outcomes may be viewed with suspicion, even when in reality the department may have fixed the underlying issues.
Individuals involved in the assessment of risks need to be fully aware of the potential implications of the halo effect. Appointing one or more individuals to play the role of ‘devil’s advocate’, responsible for challenging the general consensus, can help overcome this factor.
Factor 4: Recency bias
The human mind has evolved to overestimate the likelihood and severity of risks if a significant risk-related incident has occurred recently. This was a key survival mechanism when humans were vulnerable to dangers such as wild animals, enemy tribes and natural disasters. While our modern lives no longer face the same kinds of dangers, the default response of overestimating the likelihood and severity of risks still exists. So, if risk assessments are conducted soon after a major incident (for example a terrorist attack or natural disaster), the assessed likelihood and severity may increase significantly.
Recency bias can also be experienced when the likelihood and severity of risks are underestimated due to a lack of recent risk-related incidents. For example, if a risk-related incident has not occurred for the last 8-10 years, the individuals involved in assessment may assume that the likelihood of such risks occurring is very low.
Factor 5: Confirmation bias
Confirmation bias is experienced when individuals involved in risk assessment believe that the likelihood or severity of a risk is low (or high) and, based on this belief, look only for information that supports it, without seeking information that may challenge it.
To overcome this bias, the individuals involved in risk assessment should be aware of, and clearly highlight, any beliefs driving the information being considered to assess the risks. Independent challenge of the risk assessment outcomes can also help address this bias.
Factor 6: False sense of confidence in controls
This is where individuals responsible for the assessment of controls have a false sense of confidence, resulting in them believing ineffective controls to be effective. For example, take a control where three separate approvers need to sign off the purchase of a new IT system. Each approver may assume that the other two will review the purchase in detail, and therefore carry out only a cursory check themselves. As a result, all three approvers have a false sense of confidence in the control, when in reality the control is ineffective.
This factor can be addressed by identifying controls which may be affected by a false sense of confidence and making the control owners aware of this effect. Independent testing of controls can also be used to minimise the impact of this factor.
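The effect can be made concrete with a toy calculation comparing the detection strength of a multi-approver control when each approver reviews diligently against the case where each assumes the others are checking. All of the probabilities below are invented for illustration:

```python
def control_strength(p_catch_per_approver: float, approvers: int = 3) -> float:
    """Probability that at least one approver catches an error,
    assuming each approver reviews independently."""
    return 1.0 - (1.0 - p_catch_per_approver) ** approvers

diligent = control_strength(0.90)  # each approver reviews in detail
cursory = control_strength(0.20)   # each assumes the others have checked
print(f"Diligent review: {diligent:.1%}")  # 99.9%
print(f"Cursory review:  {cursory:.1%}")   # 48.8%
```

Under these assumed numbers, a control that looks triple-checked on paper catches fewer than half of the errors in practice, which is why independent testing of the control, rather than its design, is the more reliable evidence of effectiveness.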
Factor 7: Anchoring bias
This bias is experienced when individuals are dealing with a situation involving uncertainty, such as predicting the level of a regulatory fine. In such situations, if someone mentions that a bank was recently fined £10m for a similar event, this number will by default become an anchor for the remaining discussion. Everyone involved may debate the level of regulatory fine, but the numbers will stay around the £10m level, which may lead to assessment outcomes that are quite far from reality. Research has demonstrated that anchoring occurs even when the first number mentioned has nothing to do with the topic being discussed.
As this bias occurs at a subconscious level, to overcome it, a facilitator should be appointed who can point out when this bias may be affecting the individuals involved in risk assessment.
Factor 8: Framing effects
Frames define the scope within which the risks may be assessed and managed. For example, a bank may successfully reduce their quarterly credit card fraud-related losses from £5m to £4m and highlight this as a 20% reduction. Such information can be used to create a frame that suggests the bank has become better at managing its credit card fraud. However, the average quarterly credit card fraud-related losses for its peers might be £2.5m. So when using this alternative frame, the bank may be assessed as still being ineffective at managing its credit card fraud risk.
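The two frames in this example can be reproduced with a short calculation, using the figures from the example above purely for illustration:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100

own_before, own_after = 5.0, 4.0  # the bank's quarterly fraud losses, £m
peer_average = 2.5                # peers' average quarterly losses, £m

# Frame 1: year-on-year improvement
print(f"Reduction: {pct_reduction(own_before, own_after):.0f}%")  # 20%

# Frame 2: comparison against peers
print(f"Losses vs peer average: {own_after / peer_average:.1f}x")  # 1.6x
```

The same underlying figures support both a "20% better" story and a "60% worse than peers" story, depending solely on the frame chosen.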
Frames are commonly used by various stakeholders within the organisation to communicate risk-related information and trends. As information is communicated up in the organisation hierarchy from department, to business division, to group, the frames may change and the same information may be interpreted differently in different parts of the organisation. To address this factor, it is important to identify the frame being used to assess risks and communicate information about the level of risk exposure. Once the frame is identified, the individuals involved can consider whether the frame is appropriate or whether it needs to be changed.
Factor 9: Influence of rewards and punishments
The potential for rewards can influence the assessment of risks. For example, the head of the cyber security team may believe that by assessing the cyber risks as having a high level of financial and reputational impact, he or she will be able to convince the board to allocate a 40% increase in the budget for their department. In such cases, the potential reward of an increased budget may influence the assessment of the risks.
The potential for punishments may also have an influence. For example, the procurement team may believe that assessing vendor-related risks as having a high level of financial impact will lead to an increased level of due diligence and monitoring of vendors. Independent challenge by the group risk team and internal audit is an effective way of addressing this factor. Senior executives making decisions on potential rewards and punishments should also be aware of this influence.