There’s Room for Generative AI in Financial Services

September 26, 2024 | by Smarsh

The quick take

Generative AI use cases in financial services are increasing as regulators clarify their rules. This opens up opportunities for firms to use tools like ChatGPT Enterprise — as long as they have appropriate policies and controls in place. Read the full story below.

Business and technology move fast. It wasn’t long ago that easy access to artificial intelligence seemed like a cliché plot device in science fiction. Now, AI is touted in nearly every business technology.

Even financial services firms — organizations bound by regulations and known for being cautious technology adopters — are warming up to generative AI. Some have even actively deployed it in specific use cases. (And many are already using it, albeit unknowingly. More on that later.)

[Poll: Where are you focused on AI-enabled comms tools?]

While the share of firms prohibiting generative AI tools like ChatGPT still hovers near 20%, that’s a far cry from a year and a half ago.

“I would say that is a rather big number, but I think it was a lot bigger 12-18 months ago,” says Matt Cohen, partner at Kirkland & Ellis. “I think 12-18 months ago, outright prohibition was the default approach that most firms were taking.”

The reason for this change? Firms needed to meet the needs and demands of their own employees.

Internal stakeholders, business teams, marketing, and investment teams want to explore the use of AI to help the business. With employees finding innovative ways to boost productivity, firms must now examine their policies and procedures and take a risk-based approach that allows their employees to use AI to some extent.

Start with exploring potential use cases

From a macro perspective, new use cases emerge every day. But how will any given use case benefit the firm?

That’s the big question that must be answered when considering new use cases. Anything client-facing requires considerable oversight for compliance, so it’s important that firms also ensure they have the necessary back-end structure for any new use case they want to adopt.

For example, FINRA’s recently released AI FAQ addresses the treatment of AI-generated communications. If firms communicate with the public — whether retail, institutional, or otherwise — they must apply proper guardrails and procedures.

The SEC has also made several public comments about generative AI, with Chair Gensler warning of its potential to impact “financial market stability” given its capacity to enable fraud and create “conflicts of interest.”

“It's really a more proactive approach that they've proposed in terms of eliminating conflicts,” says Cohen. “The proposed rules basically say that when you're using predictive data analytics in investor interactions, you have to adopt policies and procedures to identify and determine conflicts that would result in putting the firm's interests ahead of clients, and then eliminate or neutralize those conflicts.”

How to move forward with AI

Cohen argues that, from a broad perspective, the general principles of current codes of ethics and other policies already apply to the use of AI.

“You cannot share material non-public information with others,” says Cohen. “That would include putting it into an AI or similar system that's publicly available or available to the provider.”

While some existing policies can be applied, firms still need more specificity before moving forward with AI. Even when prohibition was the prevailing policy, its rollout typically included a process that educated employees on what fell inside and outside the scope of the rule — which is how many people first came to understand permissible AI use.

Now, people have a much better handle on the implications of AI and how their firm needs to use it. For any firm that has taken the prohibition route, the first step is to lean into more flexibility around AI use. Cohen advises firms to understand their different groups and stakeholders, and how each would like to use AI.

At the end of the day, if you’re going to move forward with AI, your people need to build internal expertise on AI capabilities and implications — even if you’re relying on external assistance for oversight and supervision.

This all comes back to off-channel communications.

Many firms have already established and expanded processes to deal with off-channel communications. If a firm has Copilot and Teams but no policy defining how they can be used, that’s a potential off-channel exposure to address. The work done over the past two years to put those processes in place can now be extended and leveraged for AI — and for how AI use will be evaluated.
