Overview
Financial Services and Generative AI: Navigating a New Era of Innovation
How Financial Services Firms are Embracing — and Governing — Generative AI
Generative AI has been unleashed upon the financial services industry, with immense top-down pressure from firm leadership to harness its capabilities. At the same time, users are actively experimenting with use cases both within and outside of existing compliance controls, amid ambiguous regulatory obligations.
Caught in the middle are compliance officers, who find themselves in an uncomfortable position: not to say “No” to the use of generative AI, but to guide the firm on “How.”
The bottom line: To stay competitive, modern financial firms must adapt and integrate generative AI within their communications and collaboration technology infrastructures. Compliance teams must establish the appropriate controls and guardrails to ensure adherence to regulatory requirements. In this e-book, we draw on insights from industry experts to highlight the steps compliance and other risk stakeholders can take to support their firm’s use of this transformative technology.
In this e-book, you’ll find answers to burning questions around how financial firms are:
- Evaluating the benefits and risks associated with generative AI adoption
- Assessing the impact of generative AI on regulatory obligations
- Creating actionable policies for managing generative AI risks
- Evaluating and selecting specific generative AI models
Table of Contents
Chapter 1
Making Strategic Decisions About Generative AI and Balancing the Regulatory Risks
How are financial firms thinking about generative AI today?
Financial services firms are approaching generative AI with a mixture of enthusiasm and caution. They recognize its transformative potential while acknowledging the complex regulatory landscape inherent to the industry.
Internally, organizations aim to improve efficiency by automating manually intensive tasks and functions, such as searching and retrieving information, summarizing meetings and documents, and strengthening risk management. Externally, firms are exploring client-facing use cases, such as AI-driven customer service solutions, personalized financial advice platforms, and product recommendation systems. Each externally-facing use case intersects with current financial services regulatory obligations, causing firms to pursue these cases more cautiously.
The state of AI regulation plays a significant role in shaping generative AI strategies, particularly for multinationals. Firms are developing implementation plans that account for both current and anticipated regulations, such as the recently enacted EU AI Act. This proactive stance includes identifying potential "high-risk" AI applications early and establishing robust governance structures and documentation practices.
“It's not the tool you use; it’s what these tools could do. The same worries that people have about generative AI were applied to machine learning on structured data around issues like discrimination in consumer lending.”
-- Matthew Bernstein, Information Governance Strategist, MC Bernstein Data
Emerging Best Practices
- Create a dual-focus strategy for external and internal use cases
- Focus on high-value areas like customer service, large-scale data analysis and compliance review
- Adopt a model that combines generative AI automation with human oversight
- Deploy strategies that account for continuing regulatory fluidity
How are firms evaluating the benefits and risks of generative AI?
Many firms are implementing holistic evaluation processes that examine potential generative AI use cases and associated risks across critical business functions, including IT, information governance, privacy, data management, legal, and compliance risk management.
There's also growing recognition that business units need to view data as a strategic asset and that generative AI initiatives should be aligned with clear business outcomes and value propositions.
Generative AI will remain over-hyped for the foreseeable future. Regulators have already signaled their intent to focus on false or misleading claims about the use of AI (“AI washing”). Firms need to take care to invest in generative AI approaches that have been thoroughly vetted for specific use cases. Many generative AI approaches will never be suitable for regulated firms, and those that can be characterized as ‘regulatory grade’ will eventually be separated from the rest. Close collaboration between data science teams and business and compliance stakeholders will continue to be imperative.
“When assessing whether and how to incorporate generative AI into business processes, consideration should be given by compliance professionals to the limits of the technology to ensure clarity around how it will be used and for what purposes. Transparency and explainability will be key requirements.”
-- Nina Bryant, Senior Managing Director, FTI Consulting
Emerging Best Practices
- Establish AI governance councils to oversee initiatives, organizational alignment, compliance, and ethical standards
- Develop comprehensive evaluation frameworks that cover all aspects of generative AI implementation
- Engage diverse stakeholders across various departments to ensure a holistic assessment
- Be aware of technology limits by staying in contact with data science teams to surface false and misleading vendor claims
How are stakeholder perspectives integrated into generative AI governance and risk management practices?
To some, generative AI is a shiny new toy; the financial services industry, however, recognizes the importance of balancing innovation with risk mitigation for generative AI use cases.
Generative AI has united functional stakeholders around one common element: the intellectual capital and risk associated with the firm’s information. Generative AI can be embedded in, on, around, or with the firm’s IP, which has broadened interest in the topic beyond the risk and data science teams.
We have also found that many organizations rely heavily on external expertise, indicating a shortage of in-house knowledge. This expertise gap underscores the need for substantial internal capacity building in AI governance. Firms are increasingly recognizing the value of diverse stakeholder input in generative AI decision-making processes, aiming to ensure that their strategies are both innovative and responsible.
“What I'm seeing is a lot of focus on the process up front, and a real effort to try to balance the desire to innovate with the desire to mitigate risk.”
-- Amy Longo, Partner, Ropes & Gray LLP
Emerging Best Practices
- Establish C-level executive risk awareness to balance innovation potential with risk mitigation
- Enable cross-functional collaboration so risk stakeholders can learn from other teams’ experiences
- Invest in internal expertise development to build robust in-house AI governance capabilities
- Engage in strategic external partnerships to stay abreast of best practices
Get the e-book for full insights and analysis
Download the complete e-book for a greater understanding of generative AI governance and where the future of financial services is heading with AI and compliance.
Download the e-book to help you support your firm's compliant adoption and use of generative AI.