The Evolution of Risk Management

It is said that the first acts of risk analysis were performed in caves by early humans playing games with dice and bones. With gaming came the implicit concept of odds-making, and soon the human lexicon included references to risk, reward, fortuity, and loss. Early astronomers read the seemingly random appearances of comets and other celestial phenomena as warnings of impending harm, and set about preparing for the next ominous arrival. In doing so, they introduced the essence of managing risk: developing the foresight to think ahead and prepare for something they knew was sure to come eventually, though they had no idea when, where, or how often.

The term risk management came into vogue after WWII and was typically used in the context of physical perils and risk transfer via insurance.  Decades later, banks and other financial institutions began to identify, quantify, and either prevent or control risks inherent to the granting of credit and the repayment of debt – risks that had the capacity to cause institutional failure.

It was in this milieu that the study of risk management began to encompass business threats of other types, with a primary focus on threats that had the potential to lead to financial failure and liquidation of the entire organization.

This branch of thought was dubbed “enterprise risk management.”  Soon a whole new lexicon was introduced to define the emerging discipline, which took on renewed emphasis when the spectacular failures of firms such as Enron, WorldCom, Tyco, and Adelphia brought to light the unacceptable risk-taking many corporate executives routinely practiced, and the inadequate controls in place to prevent such risks from overwhelming the firms taking them.  Enterprise risk management expanded the traditional boundaries of hazard/peril risk management to include risks caused by business cycles, external factors such as interest rates, and such ubiquitous phenomena as changing consumer tastes and outright massive fraud.  It also elevated risk management from the mere tallying of insurance costs and prediction of claim activity to defining the real-time business risks a firm takes in its daily operations, and making sure those risks don’t end up causing the firm’s demise.

In 1996, Alan Greenspan and Bill Clinton made a momentous decision, although they likely did not realize it at the time.  That decision reclassified financial guarantee/derivative contracts, which had previously been treated as insurance policies subject to state regulation, as mere common law contracts.  This had two major effects.  First, it eliminated the requirement that insurers post loss reserves for claims that had occurred but had not yet been reported (known as IBNR).  Second, and most importantly, it no longer required insurers to allocate precious policyholders’ surplus to support writing these contracts.  They could have their cake and eat it too.

As a result, underwriters, who by and large had heretofore refused to write risky derivative contracts for fear of wasting their surplus and reserve positions on a line of business under so much scrutiny, now began writing these newly deregulated contracts in earnest, led by the then-biggest insurer of all, AIG.  Because these derivative writings were unregulated, the world knew precious little of the extent of the obligations undertaken by companies like AIG.  A line of business allowed to post a zero percent loss ratio is a tempting endeavor, as profits could be declared immediately, with executive bonuses and stock options following in short order.  And so the rush began.

With a seemingly unlimited supply of insurer-backed derivatives, major Wall Street banks, led by Goldman Sachs, began using these contracts, specifically written as financial guarantees, to underwrite loans they were making to third parties.  This had the effect of moving that debt onto AIG’s coveted AAA-rated balance sheet, a rating that allowed the banks to book the transactions without using their own capital, as the entirety of the risk was now transferred to a firm considered nearly as financially bulletproof as the U.S. government.

When the sub-prime crisis hit the mortgage-backed securities market in 2008, many of these financial guarantee contracts were called upon to make good on losses that otherwise would have remained with the original lenders.  When AIG’s claims from derivative contracts mounted to a level no one had anticipated, AIG itself lost its AAA credit rating, instantly triggering collateral calls from Goldman and others that brought AIG to the brink of bankruptcy.  It was saved only by a massive intervention by the New York Fed and the U.S. Treasury Department, which prevented a worldwide economic meltdown.

The recent failures of Silicon Valley Bank and Credit Suisse are the latest examples of organizations whose risk management oversights led their corporate cultures to miscalculate aggregate enterprise risk and inadvertently let concentrations of financial assets subject to interest-rate-driven value fluctuations grow to frightening levels.  When the Fed raised interest rates to fight rising inflation, the value of the government bonds the banks were holding as capital fell dramatically.  Depositors took note of the losses on their balance sheets and began to withdraw their deposits.  When word of the withdrawals spread, the ensuing run on deposits spelled the banks’ doom in a matter of hours.
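The mechanics behind those paper losses are simple bond arithmetic: a bond’s price is the present value of its fixed payments, so when market yields rise, that present value falls.  The short sketch below is a hypothetical illustration of the effect for a single 10-year bond; the figures are invented for clarity and are not SVB’s actual holdings or losses.

```python
# Illustrative sketch only: how rising market yields reduce the value of a
# fixed-rate bond. All numbers are hypothetical.

def bond_price(face: float, coupon_rate: float, market_yield: float, years: int) -> float:
    """Present value of a bond's annual coupons and principal, discounted at the market yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + market_yield) ** years
    return pv_coupons + pv_principal

# A 10-year bond bought at par when yields were 1.5%...
purchase_price = bond_price(1000, 0.015, 0.015, 10)   # ~$1,000.00

# ...is worth far less if market yields jump to 4.5% before maturity.
market_price = bond_price(1000, 0.015, 0.045, 10)     # ~$762.60

print(f"Paper loss per $1,000 bond: ${purchase_price - market_price:,.2f}")
```

Multiply that roughly 24 percent markdown across a bond portfolio measured in tens of billions of dollars, and the hole that alarmed depositors becomes easy to see.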

Soon after SVB collapsed, it was reported that the position of chief risk officer at the bank had been vacant for some time, and that the most recent occupant of the role had held it while also leading SVB’s equity and inclusion efforts, a disservice to both disciplines.  Humans, it seems, have yet to fully master risk taking.

Next: The Essential Elements of Risk Management

Author: Mark Mendes, Vice President and Risk Management Leader
