Risk perception – a light bulb moment for better decisions about risk

In past blogs, I have discussed how organisational risk management is far more than the sum of its technical parts, going beyond organisational strategies and risk frameworks and requiring consideration of more ethereal elements such as organisational culture and psychological safety. This broader canvas is necessary to enable perennially time-poor decision makers to interpret their organisational context and more effectively engage with the ‘wall of words’ in front of them – the myriad lists of strategic settings, corporate goals, business and project plans.

Individuals making decisions about organisational risk not only have to navigate these issues, but also have to make their choices based on either ‘intuition’ or a ‘rational’ application of probability and consequence tables, heat maps and risk appetite.

In his book Thinking, Fast and Slow (2011), Kahneman characterised these two types of risk choices/responses as ‘fast’ decisions - intuitive, automatic, requiring little or no effort or control - and ‘slow’ decisions - analytic, rules-based, logical, requiring training, conscious effort and control.

Plenty has been written, both before and since Kahneman’s book, about these issues. However, a relatively recent article by Professor Elke Weber[1] provided a ‘light bulb’ moment for me which, firstly, illuminates why so much of the rationalist and analytical dogma around risk management misses the point about how decisions are really made and, secondly, offers a pathway to support better decisions about risk.

The light bulb

Weber considered a range of economic and psychological research in relation to the ‘fast’ and ‘slow’ responses to risk, and found that individual/intuitive perspectives about risk, including feelings about risk, drive responses to risk far more than the analytical/rational approach preferred by traditional economists and commonly adopted by technical experts and organisations. As Weber stated:[2]

In contrast to the economic or engineering mathematical assessments of risk as the likelihood of and severity of adverse events, psychology depicts risk perception as an intuitive assessment of such events and their consequences… Evidence from cognitive, social and clinical psychology indicates that risk perceptions are influenced by past experiences and affective responses [such as] feelings or emotions, which influence risk perception as much or even more than analytic processes.

Weber’s thesis goes something like this:  

First, the human need for predictability and control is central to a psychological feeling of confidence in any system or framework. Confidence in a system, technology or financial market occurs when people believe they understand or are comfortable with how things work, which leads to a sense of being in control and a perception of low risk.

Second, when risk levels are perceived as low, particularly during periods of relative stability, existing risks are often not given the attention and weight they probably deserve. Reasons for this include default mental models that attribute the stable status quo to the controls in place (rather than luck), personal worldviews and confirmation bias.

Third, this perception of low level risk tends to remain until there is a very strong signal that makes a person believe there has been a significant ‘regime’ change in their risk environment. The two psychological risk dimensions likely to trigger this signal are:

  • dread risk - pandemics, stock market crashes, acts of terrorism, incidents at nuclear reactors and natural disasters etc, which generate strong emotional reactions, and

  • unknown risk – an emerging risk, such as new technology or behaviours, which has unforeseeable consequences and is perceived as being currently uncontrollable.[3]

Fourth, once a changed risk perception has been triggered, the individual’s risk perception dramatically increases in a non-linear way, and usually out of proportion to the (smaller) statistical likelihood of the risk eventuating (see Weber’s Figure 4.1 below).

Fifth, while an individual’s risk perception may increase substantially, it may not actually lead to individual behaviours, actions or decisions that effectively decrease the perceived risk level. According to Weber, this is due to a number of factors, including:

  • Increased risk familiarity - While individuals tend to initially overreact in response to a risk trigger, over time, they become more tolerant of the new risk environment, particularly where unknown risks/technologies/behaviours become more familiar to them.

  • A finite ‘pool of worry’ – Individuals have a limited capacity to focus on or deal with dread risk. (For example, studies have shown that terrorism events decreased individuals’ capacity to consider other significant risks such as climate change.)

  • Single action bias – According to Weber this is where ‘decision makers are very likely to take a single action to reduce a risk that they are concerned about, but are much less likely to take additional steps that would provide incremental protection or risk reduction’.

  • A societal division of labour - Socially significant or complex problems with longer time horizons are considered ‘too difficult’ for individuals and assigned to government agencies or designated professional groups/experts to solve. (I would add that public expectations usually flow towards positive intervention that would benefit the individual and not involve some sort of loss, such as increased taxes or behaviour modification that involves giving up something.)

Enter rationality

So where is the ‘rational wo/man’… the ‘slow thinker’? Can they hold back these waves of risk perception? According to Weber, and consistent with the abovementioned societal expectations, governments, organisations and those technically trained to think ‘slow’ can, and do, consider, analyse and report on risk triggers in a more calculated, evaluative and ‘rational’ way. Further, these considerations are likely to include whether existing models should be recalibrated to consider the new risk trigger.

And yet, based on two examples below, evidence, words and data do not seem to overcome or sufficiently dampen the tide of individual and public perceptions identified above.[4]

Test drives

At a basic risk management level within organisations, Weber’s thesis provides a useful explanation as to why:

  • there remains so much focus on the mere existence of risk tools (the policy, framework, consequence tables and risk register)

  • surveys of risk maturity focus too heavily on how shiny these tools are, and

  • organisations fool themselves into believing that these tools equate to management and control of their risks.

So when the risk trigger hits the consequence fan, there is a heightened sense of disbelief and panic. For public sector organisations or the financial sector, think media reports, Senate Estimates, Royal Commissions etc. But after things settle down a bit, perhaps the risk register is updated, maybe a few risk training programs are run, maybe a risk professional is employed, but nothing comprehensive or cohesive is put in place.

My second example involves numerous studies of individual responses to natural disaster risks.[5] These studies have shown that:

  • As the length of time that a person does not suffer personal or property damage increases, their perception of risk decreases.

  • Correspondingly, after individuals suffer some damage/injury or have a very near miss, the risk trigger is pulled.

  • As risk perception becomes substantially higher, new actions are taken, such as increasing insurance cover, installing storm shutters or implementing home improvements.

  • However, these actions are rarely co-ordinated or applied together by individuals in the way that rational, risk-minimising behaviour would require.

  • Once some form of action is taken, the perception of risk lowers again over time.

  • In the meantime, taxpaying homeowners also expect the government to fix the risk through higher storm levies, better warning systems and more supportive disaster payments arrangements etc.

I don’t think it is too much of a stretch to extrapolate this second example further to current individual and societal responses to climate change.

Lighting the way – some next steps

Hopefully I have adequately explained my light bulb moment. How can we use it to ‘light the way’ towards better choices about risk?

Using the first example about organisational risk, we should first move beyond risk management ‘window dressing’ and stop pretending that having a framework and a risk register, and having nothing go wrong for a period of time, equates to managing risk, having effective controls and/or being risk mature.

If organisations want to be serious about supporting their people to make better decisions about risk, they must go beyond their risk shopfront and properly test and understand their organisational decision-making culture, how their people perceive and feel about risk, and how best to assist their teams to address behavioural constraints, including single action bias, a finite ‘pool of worry’ and the belief that ‘someone else’ or ‘some other part of the organisation’ (i.e. the Chief Risk Officer) is responsible for managing risks. Further, organisations must assess what data they have, or should get, that can assist their risk decisions, and begin collecting and analysing that data.

These suggestions are obviously more intensive, interactive and expensive than risk management ‘business as usual’, and require a longer term view about organisational resilience rather than a short term cost/benefit focus. The alternative is to keep costs down, keep crossing your fingers and keep telling yourself that dumb luck equals good risk management.

Applying Weber’s thesis to natural disaster risks involves broader policy and social challenges. Governments and organisations have mobilised considerable resources and generated multiple studies and data identifying ‘rational’ and ‘logical’ options to better manage the risks of natural disasters, including ex ante expenditure on mitigation, relocating ‘at risk’ communities and ex post funding responses such as risk pooling, a wide variety of insurance products and Budget-funded recovery and reconstruction expenditure. Governments need to find their ‘policy backbone’, move their policy focus beyond the politically visible but highly inefficient post-disaster recovery efforts, and implement the research and data they themselves have often commissioned. In doing this, they need to consult with their constituents and effectively communicate these policy choices, highlighting the economic and social advantages of taking a longer term view, beyond fear and beyond feelings, and learning from, rather than repeating, past experiences.



[1] Weber, E. U. (2017). Understanding Public Risk Perception and Response to Changes in Perceived Risk. In E. J. Balleisen et. al. (Eds.). Policy Shock – Recalibrating risk and regulation after oil spills, nuclear accidents and financial crises (pp. 82-106). Cambridge University Press.

[2] Op. cit. p. 88.

[3] Slovic, P. (1987). “Perception of Risk”, Science 236 (4799): pp.280-285. Slovic’s research was initially commissioned by the nuclear power industry after incidents in the 1980s which greatly increased public concern about nuclear power.

[4] Yes, guilty of confirmation bias.

[5] Some of these studies are referred to in Weber’s article.
