Artificial Intelligence

AI in Financial Services: Enhancing Review, Elevating Oversight

June 26, 2025 by Tiffany Magri


As financial services firms increasingly adopt artificial intelligence to streamline data review and strengthen compliance workflows, one challenge persists: balancing automation with the human judgment required for regulatory defensibility.

Why it matters

AI tools are already reshaping how firms manage compliance reviews, but without proper oversight, they can introduce new risks instead of mitigating existing ones.

Firms must demonstrate effective supervision over how AI is used in compliance workflows — yet the regulatory framework for AI tools is still developing. Multiple regulators are beginning to address this topic, including U.S. federal and state authorities, as well as counterparts in the EU and other jurisdictions.

All agree that human oversight is a critical safeguard for regulatory defensibility — ensuring decisions are explainable, auditable, and free from bias. That oversight depends on data visibility; without access to the underlying AI outputs and how they’re used, firms cannot supervise the tools or the employees who leverage them. While oversight expectations are clearer in areas like communications surveillance or conduct monitoring, others — such as the treatment of generative AI outputs or bias mitigation — remain in regulatory gray zones where best practices are still emerging.

This makes it essential to understand where AI works best (such as handling review tasks), where it must be governed, and where human expertise and judgment must intervene.

Where does AI fit into the review process?

AI is built for tackling the sheer volume and complexity of data in review streams. Its strengths lie in automating repetitive tasks, processing data at speed, and identifying patterns that humans might miss.

  • Enhanced efficiency and speed: AI can quickly process vast amounts of data, enabling real-time monitoring and analysis while automating mundane review tasks. For instance, AI can streamline internal research by summarizing documents and improving document navigation, as well as swiftly processing communications data. This significantly reduces manual effort and saves time.
  • Improved filtering and prioritization: AI is effective at filtering out false positives and reducing duplicative content in review streams. It can also help prioritize high-risk items for human review.
  • Advanced pattern detection and risk identification: AI can spot patterns of misconduct more effectively than traditional methods. It increases risk discovery and improves the precise identification of true positives. AI can identify compliance gaps more efficiently and even detect misconduct that rules-based systems might miss. AI-powered systems are well-suited for monitoring conduct and detecting anomalies in real time.
  • Information extraction and summarization: Generative AI can process and summarize large and complex documents or transcripts, extracting key information and presenting insights quickly. This supports tasks such as customer due diligence by summarizing client documentation or enhancing internal research.
  • Expanding review coverage: AI can analyze diverse data sources beyond structured data, including social media, emails, and other sources, providing a more comprehensive understanding. It can extend the breadth of coverage in employee investigations.

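As a rough illustration of the filtering-and-prioritization step described above, the routing logic might look like the sketch below. The `Message` type, the thresholds, and the assumption that an upstream AI classifier supplies a `risk_score` are all invented for illustration; real surveillance platforms expose their own models and APIs.

```python
from dataclasses import dataclass

# Hypothetical message model for illustration only.
@dataclass
class Message:
    id: str
    text: str
    risk_score: float  # assumed output of an AI classifier, 0.0-1.0

def triage(messages, review_threshold=0.8, discard_threshold=0.2):
    """Route AI-scored messages: high-risk items go to human review,
    likely false positives are deprioritized, and the rest are queued
    for batch review."""
    human_review, batch_queue, deprioritized = [], [], []
    for msg in messages:
        if msg.risk_score >= review_threshold:
            human_review.append(msg)
        elif msg.risk_score < discard_threshold:
            deprioritized.append(msg)  # still archived, just not surfaced first
        else:
            batch_queue.append(msg)
    # Highest-risk items first, so reviewers see them soonest
    human_review.sort(key=lambda m: m.risk_score, reverse=True)
    return human_review, batch_queue, deprioritized
```

Note that low-scoring items are deprioritized rather than deleted: the underlying records remain captured and reviewable, which matters for the oversight points discussed later.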
AI needs human oversight

As AI systems increase the speed and scope of review, regulators and compliance leaders are placing more emphasis on explainability, oversight, and the integrity of review decisions. Human judgment must complement AI output to ensure defensible decisions and mitigate regulatory risk.

This obligation will remain despite the growing use of agentic AI systems that take autonomous actions based on internal goals. Even as these tools advance, they will still require human governance, particularly in regulated environments.


Despite AI's analytical power, human involvement remains critical for interpreting nuanced situations, applying judgment, ensuring accuracy in sensitive contexts, and maintaining accountability within the review workflow.

Validation and verification of outputs

AI outputs, especially from generative AI, can suffer from “hallucination” (generating incorrect information) or a lack of robustness. Human review of AI-generated content acts as a crucial backstop, ensuring accuracy and mitigating these risks.

Humans must check AI-generated content for compliance with laws and rules before it is distributed — even though regulatory guidance on the recordkeeping status of generative AI outputs is still evolving. Clear regulatory expectations on this front may not emerge in the short term. Firms should be prepared for guidance to develop reactively, particularly through future enforcement actions.
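The pre-distribution check described above amounts to a human-in-the-loop gate: AI-generated content is held until a named reviewer signs off, and nothing is distributable while pending. A minimal sketch, with the `DraftGate` class and its method names invented for illustration:

```python
from enum import Enum

class ReviewDecision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

class DraftGate:
    """Holds AI-generated drafts until a named human reviewer
    approves them; pending or rejected drafts cannot be distributed."""

    def __init__(self):
        # draft_id -> (text, decision, reviewer)
        self._drafts = {}

    def submit(self, draft_id, text):
        # Every AI-generated draft enters in the PENDING state
        self._drafts[draft_id] = (text, ReviewDecision.PENDING, None)

    def review(self, draft_id, reviewer, approved):
        # Record both the decision and who made it, for accountability
        text, _, _ = self._drafts[draft_id]
        decision = ReviewDecision.APPROVED if approved else ReviewDecision.REJECTED
        self._drafts[draft_id] = (text, decision, reviewer)

    def is_distributable(self, draft_id):
        _, decision, _ = self._drafts[draft_id]
        return decision is ReviewDecision.APPROVED
```

Recording the reviewer alongside the decision is what makes the gate defensible: the firm can show not only that a human checked the content, but which human did.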

Handling ambiguity and edge cases

Complex or novel situations, particularly those involving interpretation of regulatory obligations where clear guidance is lacking, require human evaluation of the associated risk or value. AI may struggle with these ambiguous cases, especially where no precedent exists to guide automated decisions, and human intervention becomes necessary. This includes scenarios that may not result in direct investor harm but still raise compliance concerns, such as ambiguous marketing claims about AI capabilities.

Supporting AI governance with data visibility

Regulatory clarity on whether AI-generated outputs constitute “books and records” is still developing. Many firms are treating AI-assisted decision-making — particularly in regulated workflows — as requiring some form of documentation and oversight.

Firms cannot oversee what they cannot see.

Effective AI oversight and governance require capturing the data associated with AI-assisted decisions and actions. This includes both the outputs generated by AI and the human interactions with those outputs. Without a reliable, reviewable record of AI activity, firms cannot supervise how AI tools are being used — or misused — by employees. Data capture is foundational to transparency, auditability, and risk mitigation.
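One way to make that capture concrete is an append-only log that records each AI output together with the human action taken on it. The `AIAuditLog` class and its fields below are hypothetical, not a real archiving API; they simply show the kind of record that makes AI usage supervisable after the fact.

```python
import datetime
import json

class AIAuditLog:
    """Append-only record of AI outputs and the human actions taken
    on them, so AI usage can be reviewed and supervised later."""

    def __init__(self):
        self.entries = []

    def record(self, tool, prompt, output, user, action):
        self.entries.append({
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc).isoformat(),
            "tool": tool,      # which AI system produced the output
            "prompt": prompt,  # what it was asked
            "output": output,  # what it generated
            "user": user,      # who interacted with the output
            "action": action,  # e.g., "accepted", "edited", "escalated"
        })

    def export(self):
        # Serialized so entries can be archived alongside other
        # communications records
        return json.dumps(self.entries, indent=2)
```

Capturing both the AI output and the human interaction in the same entry is the point: supervising only one half leaves the other invisible.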

AI governance: the key to moving forward with AI

AI empowers financial services firms to review data at unprecedented speed and scale, improving efficiency and uncovering risks buried in vast datasets. However, the inherent limitations of AI necessitate robust human oversight within the review process.

As regulatory guidance continues to evolve, firms that proactively implement data visibility and human-in-the-loop review mechanisms will be better positioned to defend their use of AI tools.

By combining AI’s capability for rapid analysis and pattern detection with the human capacity for critical thinking, nuanced judgment, and ethical consideration, firms can achieve a powerful and compliant review framework that leverages the best of both worlds. The “human-in-the-loop” remains essential, particularly for high-risk decisions that require accountability, defensibility, and trust.
