FINRA’s 2026 Oversight Report Identifies AI as Both Opportunity and Risk
FINRA’s 2026 Annual Regulatory Oversight Report arrives at a moment of convergence for financial services firms.
Artificial intelligence is rapidly moving from experimentation to operational dependency. Communications channels continue to multiply and fragment. Financial crime is becoming more sophisticated, faster, and harder to detect. Together, these forces are reshaping supervisory expectations and exposing gaps that traditional compliance approaches are no longer equipped to manage.
Why FINRA 2026 oversight priorities on AI matter to financial services firms
Generative AI is no longer an emerging concept — it’s an operational reality. FINRA’s decision to introduce a dedicated AI section reflects the regulator’s view that AI outputs can create regulatory, legal, privacy, and information security risks when they are not governed with the same rigor as traditional systems.
What FINRA’s 2026 report says about AI tools
FINRA remains technology-neutral. It’s not mandating specific tools or technologies. However, that neutrality comes with heightened expectations that firms apply scalable, risk-aligned approaches to supervision and governance, whether AI or other controls are used. Firms must be able to explain how AI is used, why it’s appropriate for a given purpose, and how outputs are tested, monitored, and documented.
“The good news is that the same standards apply when it comes to the regulatory framework,” says Ana Petrovic, Director of Regulatory Consulting at Kroll — a global firm that provides risk advisory solutions, particularly in corporate investigations, cybersecurity, and regulatory compliance. But Petrovic does acknowledge that the key challenge is that technology is evolving so quickly that fitting it into that framework can be “trickier than it appears at first glance.”
In practice, this means firms must be able to scale their supervisory standards to faster, more opaque technologies.
Unapproved AI tools, or “shadow AI,” further increase exposure. Tools adopted informally for notetaking, summarization, or productivity may still generate records, process sensitive data, or influence decision-making.
This is a real problem. If firms prohibit certain AI uses, just as they do with off-channel or other communications, those prohibitions need to be spelled out. They should be reflected in policies and reinforced through training so employees understand which actions are off-limits.
Generative AI vs. agentic AI
The emergence of agentic AI raises the stakes even higher. These systems don’t merely generate content — they take actions.
When AI starts acting — pulling data, triggering workflows, making decisions — firms need transparency into how those actions occur and what assumptions are embedded in the system. And who is accountable when outcomes fall short? Without that visibility, accountability becomes difficult to demonstrate.
Agentic AI takes the risk to the next level. Because agents perform actual actions rather than just generating content, the baseline exposure is higher, and firms should expect these risks to surface more often as their use of agentic AI grows.
“We've seen a lot of our clients treat AI systems and agentic AI as almost as a Google search,” says Olivia Eori, Director of Compliance Consulting at Kroll. “In some cases, it can be that simple. But because there are concerns like hallucinations, bias, and other opaqueness in the system, there needs to be other layers of control and training in place to make sure that employees are using things in a correct way.”
How can firms respond to FINRA’s 2026 oversight report?
The report sends a clear signal: FINRA isn’t prescribing technologies, but it is raising expectations around governance, documentation, testing, and accountability. Firms are expected to:
- Establish enterprise-grade AI governance
Leaders should move AI oversight out of experimentation and into formal governance structures that define ownership, acceptable use, escalation paths, and accountability. This includes tiering AI use cases by risk and ensuring senior leadership understands where AI is influencing decisions.
- Embed human accountability into AI workflows
Human-in-the-loop validation isn’t just best practice. It’s a supervisory necessity. Leaders should ensure that AI outputs influencing advice, communications, or operational decisions are reviewed, explainable, challengeable, and traceable to a responsible role or function.
- Apply third-party risk discipline to AI platforms
AI tools should be treated as high-risk vendors, with documented due diligence, testing, monitoring, and contractual clarity around data use and security, including how the firm will meet its regulatory retention obligations.
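As an illustration of the human-in-the-loop expectation above, the sketch below shows one way to gate AI outputs behind a documented human approval. Everything here is hypothetical (the AIOutput record, approve, and release functions are illustrative names, not anything FINRA or Kroll prescribes); the point is only that an output cannot be released without a traceable reviewer.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIOutput:
    """One AI-generated output awaiting human review (hypothetical record)."""
    prompt: str
    output: str
    model: str
    reviewer: Optional[str] = None
    approved: bool = False
    reviewed_at: Optional[datetime] = None

def approve(item: AIOutput, reviewer: str) -> AIOutput:
    """Record who signed off and when, so accountability is traceable."""
    item.reviewer = reviewer
    item.approved = True
    item.reviewed_at = datetime.now(timezone.utc)
    return item

def release(item: AIOutput) -> str:
    """Refuse to release any output lacking a documented human approval."""
    if not (item.approved and item.reviewer):
        raise PermissionError("AI output requires human review before release")
    return item.output
```

In practice the review step would live in a supervision workflow or ticketing system, but the invariant is the same: no reviewer on record, no release.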
“There's going to be increased desire and business pressures to adopt AI technologies from a business competitive standpoint, which is great,” says Petrovic. “AI offers wonderful tools, but it also introduces heightened risk.”
Fortunately, there are practical next steps firms can take:
- Create a comprehensive AI inventory
Identify every AI use case across the organization, including informal or employee-adopted tools, to eliminate blind spots. Petrovic warned, “Don’t assume no one is using AI — ask the question and verify.”
- Implement logging and retention controls
Capture prompts, outputs, and version histories to support supervision, audits, and investigations.
- Design repeatable testing protocols
Regularly assess AI tools for accuracy, bias, hallucinations, and cybersecurity impact as models and use cases evolve.
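One way to implement the logging-and-retention step above is an append-only interaction log. The sketch below is purely illustrative: the field names, the six-year retention horizon, and the JSON Lines format are assumptions, not regulatory requirements, and a production system would write to an immutable archive rather than a local file.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

RETENTION_YEARS = 6  # assumption: align with the firm's books-and-records schedule

def log_ai_interaction(log_path, user, tool, model_version, prompt, output):
    """Append one AI interaction record to a JSON Lines log for supervision."""
    now = datetime.now(timezone.utc)
    record = {
        "timestamp": now.isoformat(),
        "user": user,
        "tool": tool,
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "retain_until": (now + timedelta(days=365 * RETENTION_YEARS)).isoformat(),
    }
    # A content hash lets auditors verify the record was not altered after capture.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Capturing the model version alongside the prompt and output matters for the testing protocols as well: as models change, the log shows which version produced which output.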
How can Smarsh help?
Smarsh helps firms translate these expectations into action. Through comprehensive multi-channel capture, AI-driven supervision, and immutable, audit-ready archives, Smarsh enables compliance teams to use AI supervision and review technologies — and demonstrate defensibility.