
The Generative Generation - AI, chatbots and financial compliance


In March 2023, SEC Chairman Gary Gensler described artificial intelligence as “the most transformative technology of our time, on par with the internet and mass production of automobiles.”

When any groundbreaking tool arrives, a period of adaptation is required. This is more pronounced for regulators, who need to quickly assimilate enough information to not only understand, but eventually govern the technology in question. Meanwhile, that technology permeates the industry at a breakneck pace and new habits are established, for better or worse. 

This adds significant pressure to a role that is already demanding. Regulators are perennially playing catch-up; theirs is a reactive role, governed by many factors outside their control. When something as transformative as artificial intelligence comes along, that handicap intensifies.

AI adds an element of chaos. There is a huge amount of responsibility to govern it effectively; this feels like a critical moment in human development, and one that we must either get right or learn from quickly. This applies broadly, but also to specific industries like finance that represent a microcosm of modern society. 

Below, we’ll analyze the regulators’ positions so far, the existing frameworks that already apply to AI, and where its regulation could be heading.

Regulators’ positions so far 

SEC  

In July 2023, Gensler expressed concerns over the use of AI in investment decision-making. He stated that it risks accentuating the dominance of a small number of tech platforms, and questioned whether AI models can provide factually accurate, bias-free advice. Gensler was well placed to ask: false rumors of his own resignation had circulated, fueled by AI-generated misinformation.

In June 2024, the SEC's Investor Advisory Committee held a panel discussion on the use of AI, and Gensler reiterated his concerns, stressing that it could lead to conflicts of interest between a platform and its customers. He also emphasized that fundamental requirements still apply, and “market participants still need to comply with our time-tested laws”. 

Despite this, little concrete guidance had been provided up to that point, and some proposals first discussed in 2023 remained under consideration.

FINRA  

In its 2024 Annual Regulatory Oversight Report, FINRA explicitly classified AI as an ‘emerging risk’, recommending that firms consider its pervasive impact and the regulatory consequences of its deployment.

Ornella Bergeron, FINRA’s senior vice president of member supervision, acknowledged that despite the operational efficiencies afforded by advances in AI, concerns remain.

“While these tools can present really promising opportunities, their development has raised concerns about things like accuracy, privacy, bias and intellectual property.”

In May 2024, FINRA released updated FAQs to clarify its stance on AI-generated content. These essentially stressed that regulatory standards still apply, and that firms are accountable for their output regardless of whether it was generated by humans or by AI.

CFTC 

The Commodity Futures Trading Commission (CFTC) has been relatively active on AI. In May 2024, it released a report entitled “Responsible Artificial Intelligence in Financial Markets: Opportunities, Risks & Recommendations,” a move that seemed to signal the CFTC’s desire to oversee the space.

In summary, the agency warned that AI “could erode public trust in financial markets”. The report outlined potential risks, including the lack of transparency around AI decision-making processes.

While the CFTC seemed happy to take the reins, the report called for continued collaboration across federal agencies. It also recommended hosting public roundtable discussions to foster a deeper understanding of AI’s role in financial markets, and to develop transparent policies. 

How are existing frameworks impacted? 

Fundamental communications rules like the SEC Marketing Rule and FINRA Rule 2210 place strong emphasis on the accuracy and integrity of the information a firm communicates to its customers. The use of AI tools may well jeopardize these tenets, given the unpredictable and often inaccurate output that language models have become known for.

As FINRA has clarified, it is the content itself that firms will be held accountable for; the tools used to create it are not necessarily relevant. At the very least, then, all machine-generated output should be reviewed thoroughly before publication.
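By way of illustration only, here is a minimal Python sketch of such a pre-publication gate: every message is archived, and machine-generated drafts cannot go out without a named human reviewer. The names and the workflow itself are hypothetical assumptions, not drawn from any regulator’s guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

def send_to_customer(text: str) -> None:
    """Stand-in for the firm's real delivery channel (hypothetical)."""
    print(f"Published: {text[:60]}")

@dataclass
class Communication:
    """A customer-facing message, whether drafted by a human or a model."""
    text: str
    author: str                     # e.g. "advisor_jsmith" or "llm-draft-tool"
    machine_generated: bool
    reviewed_by: str | None = None  # compliance officer who approved it, if any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

ARCHIVE: list[Communication] = []   # recordkeeping: every message is retained

def publish(comm: Communication) -> None:
    """Archive all output, and block unreviewed machine-generated content."""
    ARCHIVE.append(comm)            # human or AI, it is recorded either way
    if comm.machine_generated and comm.reviewed_by is None:
        raise PermissionError("AI-generated content requires human review before publication")
    send_to_customer(comm.text)

# A reviewed AI draft goes out; without reviewed_by it would be stopped at the gate.
publish(Communication("Our Q3 fund commentary...", "llm-draft-tool", True,
                      reviewed_by="compliance_alee"))
```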

AI-washing

Although much regulation around AI has barely reached the proposal stage, we have already begun to see enforcement in some relevant areas.

In March 2024, the SEC launched enforcement actions targeting ‘AI-washing’, accusing two investment advisory firms of exaggerating the use of AI in their products and services to mislead investors. While the penalties imposed in these cases were modest, the director of the SEC’s Enforcement Division, Gurbir Grewal, confirmed that they were intended to send a message to the industry.

“I hope these actions put the investment industry on notice. If you are rushing to make claims about using AI in your investment processes to capitalize on growing investor interest, stop. Take a step back, and ask yourselves: do these representations accurately reflect what we are doing or are they simply aspirational? 

“If it’s the latter, your actions may constitute the type of ‘AI-washing’ that violates the federal securities laws.”

What’s next? 

SEC 

At June’s Investor Advisory Committee meeting, the SEC discussed rules initially proposed in July 2023 that address potential conflicts of interest arising from the use of predictive data analytics (PDA) in investor interactions. The proposals called for any such conflicts of interest to be recorded and then quickly eliminated.

Participants in the June 6 panel were largely supportive of these proposals, which are now expected to proceed quickly. In the meantime, by applying penalties swiftly and sending a message on AI-washing, the SEC appears eager to show strength through enforcement in the more clear-cut scenarios.

FINRA 

As well as confirming firms’ responsibility for chatbot-generated output, the updates to FINRA’s FAQs stressed that firms must also supervise these communications. This means that policies and procedures must be established.

Those guidelines could address how technologies are selected during procurement, how staff are trained to use them, and what level of human oversight exists after content has been generated. If firms have already adopted chatbot technology, or are considering it, the next step should be to develop this internal framework; a minimal sketch of what that might look like follows.
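To make that concrete, here is one hypothetical way a firm might encode such a policy so it can be enforced in software rather than sitting in a document. The tool name, fields and thresholds below are all assumptions for illustration, not regulatory requirements.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatbotPolicy:
    """One firm's supervision policy for an AI content tool (illustrative)."""
    tool_name: str
    vendor_vetted: bool    # procurement: due diligence completed
    staff_trained: bool    # users have completed required training
    human_review: str      # "pre_publication", "sampled", or "none"
    retention_years: int   # how long generated output is archived

def deployment_allowed(policy: ChatbotPolicy) -> bool:
    """Refuse to deploy any tool that skips vetting, training,
    or pre-publication review of its output."""
    return (
        policy.vendor_vetted
        and policy.staff_trained
        and policy.human_review == "pre_publication"
        and policy.retention_years >= 6  # assumption: aligned with typical retention rules
    )

# An unvetted tool is blocked until procurement due diligence is done.
candidate = ChatbotPolicy("acme-chat", vendor_vetted=False, staff_trained=True,
                          human_review="pre_publication", retention_years=6)
assert not deployment_allowed(candidate)
```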

CFTC 

The CFTC’s forthright views on how AI should be regulated show a clear commitment to taking responsibility and leading the way. It has encouraged public discourse and collaboration across agencies, and its report identified “opportunities, risks and recommendations”. The next step, again, is to build that information into a formalized framework.

Meanwhile, the Department of the Treasury published a request for information on the use of AI in the financial services sector, four months after the CFTC did the same. It specifically highlighted a potential ‘human capital shortage’: a scenario in which companies use AI tools without enough employees who fully understand their intricacies. The Treasury’s involvement has amplified the voices of the CFTC, FINRA and the SEC, and it is now a case of waiting for their frameworks to be collectively drafted.

That may not take as long as anticipated. In a fitting development, regulators are now using AI themselves to help them keep up.

“The SEC has begun analyzing how generative AI models could potentially help tackle the regulators’ workload,” said Scott Gilbert, vice president of risk monitoring, member supervision at FINRA, speaking at the FINRA conference.

The human touch 

A recent report from the FINRA Investor Education Foundation revealed that despite AI’s increasing influence across society, few consumers would rely on it for personal finance advice, and many remain skeptical of the financial information it produces. This lack of consumer trust reinforces the regulatory concerns dissected above, and raises the likelihood of strict governance.

There is always a grace period: it took regulators several years to catch up with WhatsApp use across the industry. But just because a new technology is not specifically named in existing frameworks, it does not follow that organizations like the SEC will hesitate to apply penalties retroactively where their fundamental principles have been undermined.

While regulators deliberate over frameworks for AI models and the content they generate, firms must record all output, whether produced by human or machine. That way, compliance is covered from all angles: foundational principles and modern interpretations alike.

