FINRA Annual Conference 2024: Takeaways for AI Compliance
Financial services professionals understand the importance of staying ahead of regulatory updates and on top of current (and emerging) best practices. After attending the FINRA Annual Conference in Washington, DC, I noted two key topics that took the spotlight:
- The ongoing focus on off-channel communications
- The rise of artificial intelligence (AI)
In the second part of my two-part series, I'm offering my insights into the rise of AI and providing actionable items to help your firm navigate this area more effectively.
For insights into off-channel communications, see part 1 here.
Artificial intelligence: A game-changer with risks and rewards
AI is making waves in the financial services industry, and it's essential to understand its capabilities, risks, and regulatory implications. Large Language Models (LLMs), a type of AI, can tackle various tasks, from summarizing documents and answering questions to sentiment analysis and coding.
But with great power comes great responsibility. AI comes with its own risks and challenges that should be carefully weighed and addressed before adopting these technologies. Below are a few areas of concern identified at the FINRA conference.
Accuracy can be a stumbling block when AI models haven't been trained on sufficiently specialized data. Attribution is another tricky area: if AI can't cite its sources, trust suffers. Explainability is also crucial; sometimes the reasoning behind AI's responses is a black box.
Privacy is a significant concern, particularly if the training data includes personal information that could be compromised. Bias is another big one: AI can pick up and amplify biases tied to sensitive characteristics, which is a serious problem.
There's also the risk of AI overstepping its boundaries and generating content beyond its intended purpose. "Hallucination" can occur when AI spits out incorrect responses due to insufficient training data. "Jailbreaking" is another worry, where direct user interaction could allow bad actors to manipulate the AI through sneaky prompts.
Finally, there's the issue of toxicity — AI might inadvertently regurgitate offensive language, stereotypes, or baseless perspectives it picks up from training data.
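In practice, many of these risks are mitigated by placing guardrails between the model and the end user, so a draft response is screened before anyone relies on it. Purely as an illustration, here is a minimal sketch of that idea in Python; the patterns, blocked phrases, and function names are hypothetical placeholders, not a complete or recommended control set.

```python
import re

# Hypothetical guardrail: screen a model's draft response before it is released.
# The patterns and blocked phrases below are illustrative placeholders only.

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-style number
    re.compile(r"\b\d{13,16}\b"),           # long digit runs that could be card numbers
]

BLOCKED_PHRASES = ["guaranteed returns", "risk-free investment"]  # promissory language

def screen_response(draft: str) -> tuple[bool, list[str]]:
    """Return (approved, reasons); flags possible PII and promissory language."""
    reasons = []
    for pattern in PII_PATTERNS:
        if pattern.search(draft):
            reasons.append(f"possible PII match: {pattern.pattern}")
    for phrase in BLOCKED_PHRASES:
        if phrase in draft.lower():
            reasons.append(f"blocked phrase: {phrase}")
    return len(reasons) == 0, reasons

approved, reasons = screen_response("Our fund offers guaranteed returns of 12%.")
if not approved:
    print("Held for human review:", reasons)  # route to a reviewer instead of the customer
```

A real deployment would layer on many more checks (input screening, topic restrictions, human escalation), but even a simple gate like this turns the "overstepping its boundaries" risk into an engineering problem rather than an afterthought.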
Best practices for AI & model governance
Adopting AI in financial services is a hot topic right now. During the FINRA Annual Conference, some best practices for AI and model governance were discussed, providing firms with a roadmap for navigating the evolving landscape of AI technologies. Here's a breakdown of the key takeaways:
Revamp your policies: With AI and LLM capabilities advancing rapidly, revisiting and updating your supervisory and risk policies is essential. Ensure that your policies account for new AI developments and clearly define who is responsible for overseeing these technologies within your firm.
Modernize risk management: Traditional risk management frameworks might not fully capture the unique risks posed by generative AI and LLMs. Take a fresh look at your risk identification, measurement, and monitoring processes. Update these frameworks to ensure they are equipped to handle the complexities of modern AI tools.
Conduct independent risk reviews: Regularly perform independent assessments of the risks associated with AI models, evaluating strategic, operational, financial, regulatory, and reputational risks for a comprehensive view of potential impacts.
Perform thorough compliance and legal checks: Throughout the development and deployment of AI technologies, ensure that Compliance, Legal, and Technology teams are involved in the review process. Their input is crucial for ensuring that AI applications meet regulatory requirements and align with your firm's policies.
Ensure models meet expectations: It's vital to confirm that AI models function as intended. This means rigorous testing and validation to confirm models deliver reliable results without unexpected outcomes (a minimal sketch of what such a check might look like follows this list). Make sure your models are doing what they're supposed to do and doing it well.
Review and adapt controls: Existing controls should be revisited to confirm their effectiveness in the context of generative AI and LLMs. Make any necessary adjustments to ensure these controls are still applicable and rigorous enough to manage new AI-related risks.
Maintain continuous oversight: Effective AI governance requires ongoing oversight. This involves regular monitoring of AI models, conducting periodic audits, and maintaining clear communication with stakeholders. Continuous oversight ensures that AI applications remain compliant and effective over time.
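To make the "ensure models meet expectations" item above more concrete, even a small, repeatable test harness can help confirm that a model keeps behaving as intended from one release to the next. The sketch below is purely illustrative: the test cases, pass criteria, and the ask_model stand-in are hypothetical placeholders for whatever model interface and acceptance criteria your firm actually uses.

```python
# Hypothetical validation harness: run a fixed set of prompts against the model
# and check each response for required and prohibited content.

TEST_CASES = [
    {
        "prompt": "What is your firm's outside business activity policy?",
        "must_contain": ["written notice"],    # response should mention the notice requirement
        "must_not_contain": ["guaranteed"],    # no promissory language
    },
    {
        "prompt": "Can you guarantee I'll double my money?",
        "must_contain": [],
        "must_not_contain": ["guarantee"],     # the model must not promise returns
    },
]

def ask_model(prompt: str) -> str:
    """Stand-in for the firm's real model call; replace with an actual API client."""
    return "Associates must provide prior written notice before engaging in outside business activities."

def run_validation() -> list[str]:
    """Return a list of failures; an empty list means every case passed."""
    failures = []
    for case in TEST_CASES:
        response = ask_model(case["prompt"]).lower()
        for required in case["must_contain"]:
            if required not in response:
                failures.append(f"{case['prompt']!r}: missing {required!r}")
        for banned in case["must_not_contain"]:
            if banned in response:
                failures.append(f"{case['prompt']!r}: contains {banned!r}")
    return failures

if __name__ == "__main__":
    problems = run_validation()
    print("PASS" if not problems else problems)
```

Running a suite like this on every model or prompt update also produces the review trail that independent risk reviews and continuous oversight depend on.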
By following these best practices, firms can better manage the complexities and risks associated with AI, ensuring they comply with regulatory standards and leverage AI's potential to enhance their operations. Staying informed and proactive in your approach to AI governance will position your firm for success in this rapidly evolving field.
Understanding FINRA’s advertising FAQ on the use of AI in communications
FINRA's Advertising FAQs now include guidance on AI-generated communications, such as chatbots. The main point to remember is that firms are required to supervise these communications just as they would any other correspondence, retail communications, or institutional communications. The applicable rules depend on the nature and audience of the communication.
This is where FINRA Rule 2210 (Communications with the Public) and Rule 3110 (Supervision) come into play. Firms must establish written procedures for reviewing both incoming and outgoing electronic correspondence. However, it's important to note that these procedures should be customized to your firm's business model, size, structure, and customer base.
Another crucial aspect to consider is content standards. FINRA Rule 2210 serves as a guide for ensuring that all communications, whether generated by humans or AI, are fair, balanced, and free from any false, misleading, promissory, or exaggerated statements. Firms are responsible for ensuring compliance with federal securities laws, SEC regulations, and FINRA rules, specifically Rules 2210 and 2220. Adhering to these content standards is essential to maintain compliance.
Recordkeeping should also be considered. SEC and FINRA rules require firms to archive, and be able to retrieve, all covered communications, including those created by AI. So it's crucial to have a system in place to manage and store these records properly.
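In practice, that usually means capturing AI-generated messages in the same archive as other electronic communications, with enough metadata to retrieve them on request. The snippet below is only a simplified illustration of that idea; the record fields and the file-based storage are hypothetical stand-ins for whatever archiving platform a firm actually uses.

```python
import datetime
import json
from pathlib import Path

# Hypothetical archive: store each AI-generated communication as one JSON line,
# with enough metadata to find and produce it later. Field names are illustrative.

ARCHIVE = Path("ai_communications.jsonl")

def archive_message(customer_id: str, channel: str, content: str, model_version: str) -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "customer_id": customer_id,
        "channel": channel,            # e.g., "chatbot" or "email-draft"
        "model_version": model_version,
        "content": content,
    }
    with ARCHIVE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def retrieve(customer_id: str, start: str, end: str) -> list[dict]:
    """Pull one customer's archived AI communications within an ISO-8601 date range."""
    results = []
    for line in ARCHIVE.read_text(encoding="utf-8").splitlines():
        record = json.loads(line)
        if record["customer_id"] == customer_id and start <= record["timestamp"] <= end:
            results.append(record)
    return results

archive_message("C-1001", "chatbot", "Your March statement is available in the portal.", "assistant-v3")
print(retrieve("C-1001", "2024-01-01", "2099-01-01"))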
Being proactive is key moving forward
Navigating the compliance landscape in the financial services industry is like trying to hit a moving target. AI brings both challenges and opportunities, and it's clear that staying on top of regulatory updates is no small feat. By proactively addressing AI and machine learning technologies and putting solid procedures in place, firms can remain compliant and make the most of these technologies to improve their operations. Continuous learning and adaptation are crucial to staying ahead in this fast-paced environment. Embrace the changes, keep informed, and your firm will be better equipped to navigate whatever comes next.