AI Governance in Financial Services Framework
Ask five firms what effective AI governance looks like, and you'll get five different answers. Ask regulators, and the response is consistent but intentionally broad — existing rules apply, and firms are expected to adapt. AI adoption is accelerating faster than policy can keep up, and the firms that build governance frameworks now will be far better positioned when examiners come asking.
Key takeaways
- AI governance in financial services is an enterprise-wide responsibility, not a compliance or IT issue.
- Regulators have stated that rules are "technology agnostic" and expect firms to treat AI like any other communications platform.
- Shadow AI (the use of unapproved generative AI tools by employees) creates significant regulatory, litigation, privacy, and data security exposure.
- A practical AI governance framework includes up-front and ongoing assessment of AI benefits and risks, automated controls, continuous improvement, and clear accountability.
- Communications retention and oversight are leading indicators of responsible AI use to regulators and in e-discovery.
Why AI governance is a distinct compliance challenge
Unlike static software, AI systems drift, update, and produce outputs that can't always be predicted or explained. A governance model built around periodic policy review and annual vendor assessments isn't equipped for that reality. In financial services, the consequences of being unprepared are regulatory, not just operational.
Industry report
What is AI’s place in financial services firms? Get this survey report to understand the state of AI adoption in financial services.
The governance challenge is compounded by how AI actually enters firms. It rarely arrives through a single, sanctioned deployment. It comes through four distinct channels, each carrying its own risk profile.
- Stand-alone tools used by employees (ChatGPT and similar platforms) create data leakage and shadow AI exposure when used outside approved frameworks.
- AI features embedded in approved communications platforms (Microsoft Copilot in Teams, for example) create supervision blind spots when outputs aren't captured or reviewed.
- Agentic AI systems with autonomous access to internal and external systems introduce access control and identity management risk that existing controls weren't designed to address.
- AI applied to specific business processes (communications surveillance, investment research, client reporting) creates model explainability obligations firms must be prepared to satisfy in exams and litigation.
Governance attaches to all four. That means policies and controls must cover not just the tools themselves (through model risk management and third-party vendor reviews) but also their outputs and the regulatory obligations those outputs may trigger.
Assessing benefits and risks
Effective governance starts with a clear-eyed assessment of whether a specific AI use case delivers measurable value and what could go wrong if it doesn't perform as expected. The question isn't only whether a use case meets a written regulatory requirement. It's whether an inaccurate or off-target output could harm investors, expose intellectual property, or open the firm to litigation — and whether the firm can explain the model's decision-making if that question ever lands in front of a regulator or opposing counsel.
Governance structure and oversight
Most firms operationalize AI governance through a cross-functional council spanning compliance, legal, technology, business lines, information security, and data privacy. The council's job is to maintain a current picture of AI use across the organization, define criteria for evaluating new tools, set stage-gates for broader deployments, and conduct the kind of horizon scanning that keeps governance frameworks current as both regulations and tool capabilities evolve.
Build a robust risk management framework
Broad AI risk management frameworks offer useful starting points, including the NIST AI Risk Management Framework (AI RMF 1.0) and ISO 42001. Both emphasize risk assessment, governance, and continuous monitoring across the AI system lifecycle. In financial services, a few specific elements come into sharper focus, particularly around communications technology.
Tip
Get the Smarsh Financial Services and Generative AI e-book for a closer look at how generative AI is reshaping compliance obligations.
Key regulatory drivers shaping AI governance
Understanding the current regulatory landscape is a governance requirement — one that changes as agencies issue new guidance and examine firms' AI practices.
FINRA guidance on AI and supervision
FINRA Regulatory Notice 24-09 clarifies how firms are expected to supervise AI systems, including requirements that systems perform accurately and consistently, and undergo thorough testing and updates to manage model drift. FINRA expects firms to apply existing supervision frameworks to AI tools, including generative AI used in communications and research.
FINRA's 2026 oversight priorities identify AI as both an opportunity and an examination risk, with recordkeeping and supervision flagged as key areas of focus. Most recently, FINRA announced it will survey firm practices related to agentic AI to determine where further guidance may be needed. Examiners are looking for documented governance structures, not just technical controls.
SEC expectations for AI in financial markets
The SEC has emphasized that its rules are "technology agnostic," and examiners will look for violations of existing obligations, particularly where investor harm is a potential risk. The SEC expects disclosure of AI use in investment decision-making and client communications, clear explanation of how AI models operate and what controls are in place, and assigned ownership of AI oversight at the leadership level.
With no prescriptive federal guidance, several states (including California and Colorado) have introduced their own AI bills. The current administration has sought to preempt those state actions via executive order, a matter currently being resolved in the courts. Firms operating across multiple jurisdictions should monitor these developments closely.
Global developments influencing AI governance
A patchwork of international regulatory frameworks has emerged. The UK's Financial Conduct Authority has taken a principles-based approach, while the EU AI Act establishes a more prescriptive risk-based framework with compliance timelines.
What most global frameworks share is an emphasis on transparency, human oversight, and accountability for AI systems. For US-headquartered firms with global operations, overlapping obligations are already a reality. Designing controls with regulatory resilience in mind, rather than optimizing for any single jurisdiction, is a more durable approach.
The growing risk of shadow AI in financial services
Shadow AI refers to the use of unapproved or unsanctioned AI tools by employees, outside firm-sanctioned governance frameworks. The exposure it creates is significant.
- Employees may input sensitive client data, proprietary research, or regulated communications into AI tools without adequate data protection controls.
- Communications or decisions influenced by shadow AI may not be captured, retained, or supervised.
- Firms cannot demonstrate oversight of activity they cannot see.
The parallels to off-channel communications enforcement are direct. Generative AI tools are widely accessible and easy to use; they are often adopted informally before governance policies catch up. And as firms encourage employees to experiment with productivity tools, the risk of unsanctioned usage grows alongside the legitimate use cases.
Firms with infrastructure already established in response to off-channel enforcement have a meaningful head start. That same foundation, combined with updated policies and monitoring capabilities, applies directly to shadow AI — the governance problem is familiar, even if the technology is new.
Practical steps include:
- Clear policies defining approved AI tools and prohibited use cases
- Monitoring across communication channels to identify unsanctioned AI activity
- Training programs that help employees understand what requires approval and why
- A published, maintained AI tool framework with documented vetting criteria and a list of tools prohibited for business use
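As a minimal illustration of the monitoring step above, a firm could screen captured communications against its list of tools prohibited for business use. The tool domains and messages below are hypothetical examples, not a recommended blocklist, and real surveillance systems use far richer detection than substring matching.

```python
# Illustrative sketch: flagging possible shadow AI activity in captured
# communications by matching against a firm-maintained list of tools
# prohibited for business use. Domains and messages are hypothetical.
UNAPPROVED_AI_TOOLS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(message: str) -> set[str]:
    """Return the unapproved AI tool domains mentioned in a message."""
    text = message.lower()
    return {tool for tool in UNAPPROVED_AI_TOOLS if tool in text}

# A hit routes the message to compliance review rather than auto-blocking it.
alerts = flag_shadow_ai(
    "Drafted the client summary with chat.openai.com before sending."
)
```

In practice a flag like this would feed an escalation queue, pairing the automated control with the training and policy elements listed above.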
How financial institutions implement AI governance
Effective AI governance requires moving from policy to practice. The following areas represent the core elements of implementation.
Establish governance policies and leadership oversight
Assigning clear ownership across compliance, legal, technology, and business lines is the foundation. Firms benefit from an AI governance committee or working group with cross-functional representation, one that documents approved use cases, prohibited activities, and escalation paths for new tools. Senior leadership should be able to articulate governance structures during regulatory exams, not just point to a policy document.
Inventory and monitor AI systems
Firms should maintain a current inventory of all AI tools in use, including third-party and vendor-provided systems, and establish a review process for evaluating new tools before deployment. Any AI-assisted process involving regulated activity or client-facing communications should retain a human review component.
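A sketch of what one inventory entry might capture, assuming a simple record with an accountable owner, a review status, and a human-in-the-loop flag. The field names and example tool are illustrative, not a prescribed schema.

```python
# Illustrative sketch of a minimal AI tool inventory record. Field names
# and values are assumptions for demonstration, not a required schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    owner: str                          # accountable business/compliance owner
    use_cases: list[str] = field(default_factory=list)
    review_status: str = "pending"      # pending | approved | prohibited
    last_reviewed: Optional[date] = None
    human_review_required: bool = True  # regulated or client-facing output

inventory = [
    AIToolRecord(
        name="Copilot in Teams",
        vendor="Microsoft",
        owner="Compliance",
        use_cases=["meeting summaries"],
        review_status="approved",
        last_reviewed=date(2025, 1, 15),
    ),
]

# Surface anything in use that has not completed the review process.
unreviewed = [t.name for t in inventory if t.review_status == "pending"]
```

Even a lightweight structure like this makes the exam-readiness question answerable: which tools are in use, who owns them, and when they were last reviewed.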
Implement model oversight and documentation
Documenting how AI models are used, how decisions are made, and what human review is in place is essential for regulatory readiness. Model validation records, performance testing results, and bias assessments form the evidentiary foundation firms need to demonstrate explainability during exams and respond credibly to client inquiries.
Supervise AI-generated communications and workflows
Existing supervision frameworks should extend to AI-generated communications: meeting summaries, research outputs, and client-facing content alike. AI-generated material that documents regulated activity (investment advice, trade ideas, client instructions) must be captured and retained. Communications capture and archiving across email, messaging, and collaboration platforms is where governance policy meets operational reality.
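The retention rule described above can be sketched as a simple routing check: AI-generated content that documents regulated activity goes into the same capture pipeline as other business records. The record shape and the keyword-based test for regulated activity are simplifying assumptions for illustration.

```python
# Illustrative sketch: routing AI-generated communications into the same
# retention pipeline as human-authored records. The record shape and the
# keyword check for regulated activity are simplifying assumptions.
REGULATED_MARKERS = ("investment advice", "trade idea", "client instruction")

def needs_capture(record: dict) -> bool:
    """AI-generated content documenting regulated activity must be retained."""
    body = record.get("body", "").lower()
    return record.get("ai_generated", False) and any(
        marker in body for marker in REGULATED_MARKERS
    )

summary = {
    "ai_generated": True,
    "body": "Meeting summary: client instruction to rebalance into bonds.",
}
```

A production system would classify content far more robustly, but the design point stands: the retention decision keys on what the record documents, not on whether a human or a model wrote it.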
AI governance and compliance programs
AI governance isn't separate from existing compliance programs — it extends and reinforces them.
Compliance teams must apply their supervision frameworks to AI tools and AI-generated outputs, not just human-authored communications. AI-generated content that documents regulated activity is subject to the same retention obligations as other business records, including SEC Rule 17a-4 and FINRA Rule 4511. Firms must be prepared to demonstrate governance programs during exams, with organized documentation, evidence of oversight, and clear policy trails; describing a program is not the same as showing one.
The practical intersections are specific.
- Supervise AI-influenced messages and outputs across all channels.
- Incorporate AI risk into enterprise risk frameworks rather than treating it as a separate category.
- Assess governance controls, identify gaps, and validate that controls are functioning as designed.
Learn how Smarsh can help your firm approach AI governance.