
Risk Managers, AI Will Replace You If You Don’t Upgrade Your Skills


A study published in the Annals of Oncology found that a deep learning algorithm detected 95% of melanomas from skin lesion images, outperforming a panel of 58 dermatologists who detected 86.6% on average. In another study published in Nature, a deep learning system identified breast cancer from mammograms more accurately than radiologists: it reduced false positives by 1.2% (UK data set) and 5.7% (US data set), and false negatives by 2.7% (UK) and 9.4% (US).

Research conducted by Siemens in 2019 demonstrated that AI-driven predictive maintenance tools could forecast equipment failures with up to 30% greater accuracy than experienced maintenance personnel. And, according to a 2021 study by J.P. Morgan, AI and machine learning models reduced default prediction errors by approximately 25% over traditional statistical models.

And yet, every time I or one of my team members runs a webinar on using AI for risk management, the only question people ask is “How accurate is AI?” Every bloody time.

So, let me share a story. In my last 5 Head of Risk roles, I had access to both a world-class team of quant risk professionals and a range of AI models, including ones built in-house. And you know what? I have spent considerably more time verifying, checking, correcting, and validating my human risk team’s deliverables than I now spend verifying RAW@AI deliverables. What used to take my team weeks can now be done with AI + Python in hours.

AI doesn’t have to always be right; it just has to be less wrong than humans

In my mind, for AI to be universally adopted by risk professionals, it doesn’t need to be perfect; it just needs to make fewer mistakes than humans do. This is what Douglas Hubbard calls the “beat the bear” fallacy. Imagine two campers confronted by a bear: one doesn’t have to outrun the bear to survive; he just needs to outrun the other camper. Similarly, AI doesn’t have to be flawless; it just needs to beat humans on error rates and speed of analysis.

Humans are great at many things, but we get tired, overlook details, are blind to certain risks, and all carry our own biases. Some risk managers come from accounting backgrounds and have little grounding in risk math. All of these limitations make risk managers less effective, especially when dealing with probability theory and complex, interrelated risks and decisions. AI, on the other hand, can handle huge datasets, large volumes of text, and complex calculations without getting weary or overly biased. AI still makes mistakes, but that isn’t the question. The right question is: does it make fewer mistakes than the alternative?

My RAW@AI, for example, can consistently outperform most Big 4 risk consultants and RM1 risk managers. Try it.

The more data you have, the more AI outperforms humans

Large volumes of data are what give AI its risk management superpower. Unlike humans, AI can quickly go through huge amounts of both structured data (risk registers, spreadsheets, and databases) and unstructured data (risk reports, interview transcriptions, annual reports, and research papers).

This ability lets AI gather a wide and current view of potential risks and quantify most risks on the planet. Most risk managers can, of course, do the same, but it will take them 10x as long to reach a comparable level of quality.

The human brain is incredibly adept at recognizing familiar patterns, but it struggles with the sheer complexity and subtlety of patterns found in today’s probabilistic risk landscape. AI, on the other hand, excels at finding complex, non-linear relationships within massive datasets (it is still weaker at distilling long texts into key points, but that is only a matter of time).

This can reveal hidden connections between seemingly disparate events or data points, highlighting risks that would otherwise go unnoticed until it’s too late. According to a 2022 report by IBM, AI systems detected and responded to security breaches on average 40% faster than human-led teams.
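To make this concrete, here is a minimal sketch of the kind of dependence screening an AI model with a Python environment can draft and run in seconds. Everything in it is a made-up illustration (the risk drivers, the figures, and the choice of scikit-learn’s mutual information measure); it is not a description of how any particular tool works:

```python
# Illustrative only: screening two risk drivers for a non-linear dependence
# that a plain correlation matrix would miss. All names and figures are made up.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(42)
n = 5_000
fx_move = rng.normal(0.0, 0.05, n)  # hypothetical FX shock per period
# Losses grow with the SIZE of the move in either direction (V-shaped),
# so the linear correlation is close to zero even though the link is strong.
supply_chain_loss = 2.0 * np.abs(fx_move) + rng.normal(0, 0.01, n)

data = pd.DataFrame({"fx_move": fx_move, "supply_chain_loss": supply_chain_loss})
linear_corr = data["fx_move"].corr(data["supply_chain_loss"])
mi = mutual_info_regression(data[["fx_move"]], data["supply_chain_loss"])[0]

print(f"Pearson correlation: {linear_corr:+.2f}  (looks like 'no relationship')")
print(f"Mutual information:  {mi:.2f} nats     (flags a strong hidden dependence)")
```

A linear correlation matrix, which is where many risk teams stop, would call this pair unrelated; a dependence measure that handles non-linearity flags the link immediately.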

You no longer need a math PhD to do quant risk analysis

In the past, every time I joined a company, I would struggle to find quants who understood risk management and were capable of the abstract thinking needed to integrate risk analysis into decision-making. If you have ever tried hiring a quant for risk management, you know what I mean.

Well, AI is changing the game. AI models with access to a Python environment are putting powerful quantitative tools into the hands of a much wider range of professionals. The models take care of the complex math, allowing risk managers to focus on empowering risk-taking and integrating risk analysis into decision-making.

Just like calculators made complex computations accessible to everyone, AI and SIPmath are doing the same for risk modeling. You don’t need to understand the inner workings of a calculator to get the answer, and you no longer need to be a mathematics whiz to perform sophisticated risk analysis.
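To show what that looks like in practice, here is a minimal sketch of a Monte Carlo cost-risk model of the kind an AI model with a Python environment can draft from a plain-English description of your risks. The line items, distributions, and parameters are made up for illustration; this is not a SIPmath or RAW@AI implementation:

```python
# Illustrative Monte Carlo cost-risk model. All line items, distributions
# and figures below are hypothetical examples.
import numpy as np

rng = np.random.default_rng(2024)
n_sims = 100_000

base_cost = 10_000_000
# Three uncertain items: a likely design overrun, a symmetric FX swing,
# and a low-probability / high-impact incident.
design_overrun = rng.triangular(0, 500_000, 2_000_000, n_sims)
fx_impact = rng.normal(0, 300_000, n_sims)
incident = rng.binomial(1, 0.15, n_sims) * rng.lognormal(13.5, 0.5, n_sims)

total_cost = base_cost + design_overrun + fx_impact + incident

budget = 12_000_000
print(f"P(exceeding budget of {budget:,}): {np.mean(total_cost > budget):.1%}")
print(f"P80 total cost: {np.percentile(total_cost, 80):,.0f}")
print(f"Mean total cost: {total_cost.mean():,.0f}")
```

Change an assumption, ask the model to re-run it, and you have an updated answer in seconds rather than waiting weeks for a quant team’s model refresh.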

You still need to double-check the calculations because calculation errors happen. But you know what is even more frequent? Calculation errors by human risk managers. Far more frequent!

The question isn’t whether AI will transform risk management. It’s whether you will upskill quickly enough to use AI and guide its insights, or whether your team will be replaced by the next version of RAW@AI.

Learn how to start using AI models in your risk department at #RAW2024.

Algorithm Aversion

Douglas Hubbard also popularized the term “algorithm aversion”. It describes the phenomenon where people prefer human judgment over algorithmic or machine-generated solutions, even when the algorithm performs as well or better. This aversion often persists even after the person has experienced the algorithm’s superior performance, typically due to biases or a lack of trust in automated systems.

Look at just some of the studies on algorithm aversion; the phenomenon is not new:

  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err.” Published in the Journal of Experimental Psychology: General, this study is one of the foundational pieces of research on algorithm aversion. It demonstrated that people are less likely to use an algorithm after seeing it perform imperfectly, despite the fact that the algorithm outperforms humans on average.
  • Logg, J. M., Minson, J. A., & Moore, D. A. (2019). “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Published in Organizational Behavior and Human Decision Processes, this study provided a counterpoint to the typical findings of algorithm aversion, suggesting that under certain conditions, people might prefer or appreciate algorithmic advice over human advice.
  • Onkal, D., Goodwin, P., Thomson, M., Gönül, S., & Pollock, A. (2009). “The Relative Influence of Advice from Human Experts and Statistical Methods on Forecast Adjustments.” This study in the Journal of Behavioral Decision Making explored how professionals adjust their forecasts based on advice from statistical methods compared to human experts, highlighting a bias towards human advice even when statistical methods are known to be more accurate.
  • Prahl, A., & van Swol, L. M. (2017). “Understanding Algorithm Aversion: When Is Advice From Automation Discounted?” This article in the Journal of Forecasting delves into conditions under which individuals may or may not follow automated advice, identifying factors that can influence the acceptance of algorithmic input.
  • Burton, J. W., Stein, M-K., & Jensen, T. B. (2020). “A Systematic Review of Algorithm Aversion in Augmented Decision Making.” This review, published in the Journal of Behavioral Decision Making, consolidates various studies on algorithm aversion, providing a comprehensive overview of how and when algorithm aversion occurs in decision-making processes involving automation.

Important Limitations of AI in Risk Management

Of course, AI isn’t perfect. It still has various limitations, which we’ll outline below:

  • Utilizing AI in risk management involves handling sensitive data, which can raise compliance and privacy issues. Some risks are too sensitive to be analyzed by AI, unless it is an in-house closed model.
  • Using AI for risk management will probably be considered a high-risk activity under the EU AI Act and will require significant compliance controls.
  • In cases where AI-driven decisions lead to financial losses or compliance breaches, establishing accountability can be challenging. Determining whether the fault lies in the data, model, or decision-making process requires clear protocols.
  • Effective use of AI in risk management requires specialized skills that may not be readily available within traditional risk teams. That said, hiring or upskilling people to work effectively with AI tools is still easier than finding a good risk quant who understands decision science and behavioral economics.

As AI continues to reshape risk management, it's important for you to stay ahead of the curve. Attending RAW2024 is an investment in your future, providing you with the skills, knowledge, and networking opportunities needed to thrive in an AI-enhanced landscape. Sign up for RAW2024 today.
