Artificial Intelligence, the SEC, and FI Insurance – the Future is Now?

As an underwriter specializing in professional lines insurance for financial institutions such as banks, asset managers, and hedge funds, I’m always on the lookout for potential risks or regulatory developments in the financial sector – especially those that are deemed “Systemic” in nature.

So last week I came across two interesting predictions from SEC Chairman Gary Gensler regarding the risks associated with the use of Artificial Intelligence (AI) in financial services.

Prediction #1: Potential Systemic Risk of AI Adoption


Mr. Gensler spoke last week at an annual conference hosted by FINRA, Wall Street’s self-regulatory body, where he expressed concern that the proliferation of AI technology among major financial firms could lead to a new systemic risk.

Please find the WSJ article linked here.

Systemic risks are events that have the potential to trigger the collapse of an entire industry, such as the bank failures during the 2008 credit crisis. Such risks are inherent to the market as a whole and are influenced by economic, geopolitical, and financial factors (more information on such risks can be found here).

Financial institutions have increasingly integrated AI into various functions, such as evaluating new customers for loans, customizing product recommendations, detecting fraud, and automating customer service inquiries. JPMorgan announced it is “developing a ChatGPT-like A.I. service that gives investment advice” (the CNBC article can be found here). And Ken Griffin, the founder, CEO, and Co-CIO of the large hedge fund Citadel, said his firm is negotiating a company-wide license to use ChatGPT, thinking it will “automate an enormous amount of work” (the Bloomberg article can be found here).

Mr. Gensler is worried an overreliance on a single base-level technology could lead to a future crisis. He said that observers years from now might look back and say: “The crisis in 2027 was because everything was relying on one base level, what’s called [the] generative AI level, and a bunch of fintech apps are built on top of it.” 

The increased adoption of AI will also increase potential cyber threats, compliance concerns, and the potential for discrimination in algorithmic decision-making. The WSJ article noted Mr. Gensler will also be “scrutinizing” the use of AI by banks and other financial institutions for the “potential for biased decisions” around lending. 

Prediction #2: Increased Regulatory Focus on AI

Also last week, Mr. Gensler delivered an informative and wide-ranging speech at the Atlanta Federal Reserve Financial Markets Conference, where he discussed the need for increased regulatory oversight to address the emerging risks of AI in finance.


A transcript of the speech can be found here.

(And of interest to a history major who lives in Chicago, the speech was entitled “Lessons from Mrs. O’Leary’s Cow” – according to legend, it was Mrs. O’Leary’s cow Daisy that kicked over a lantern and started the Great Chicago Fire of 1871.)

Although he acknowledged the positive transformations and efficiencies AI is creating in finance, such as in compliance programs, trading algorithms, and robo-advisors, Mr. Gensler also noted the potential for increased “financial fragility as it could increase herding, interconnectedness, and expose regulatory gaps.”

Towards the end of the speech, under the heading “Risks on the Horizon,” Mr. Gensler specifically discussed AI and noted the current levels of regulatory oversight are likely inadequate to address emerging risks:  

Existing financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the systemic risks posed by the broad adoption of deep learning and generative AI in finance.

Notably, a WSJ article from late April noted regulatory agencies, such as the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Federal Trade Commission (FTC), have already shown an interest in AI and its potential impact in financial services, in particular on how “ChatGPT and other tools could be used to discriminate or harm competition” in the industry.


That article can be found here.

Rohit Chopra, the director of the CFPB, said “his agency is looking at the risks that chatbots used for customer service could provide biased or wrong information.”

“There is not an exemption in our nation’s civil-rights laws for new technologies,” said Mr. Chopra. “Companies must take responsibility for the use of these tools.”

To a financial institutions insurance underwriter, increased regulatory scrutiny is a worrisome exposure, as regulatory investigations can often lead to claims covered by Directors’ and Officers’ insurance policies.

If you are interested in learning more about Artificial Intelligence and its societal impact, the Harvard Business Review recently did a lengthy series of articles on the next-generation technologies that are “poised to cause society-shaking shifts at unprecedented speed and scale.”


That thought-provoking series can be found starting here.
