AML Conference Insights: Key Principles on the Use of AI for Banks
Cloud computing has given banks of all sizes access to greater computing power and to vast stores of structured and unstructured data, which in turn has made AI tools and services widely available. These tools have proven valuable in strengthening fraud prevention controls, improving AML/CFT monitoring, and identifying potential fair lending violations, and AI has the potential to enhance overall risk management, compliance monitoring, and internal controls.
However, banks must ensure that effective governance processes and controls are in place throughout the planning, implementation, and operation of these solutions. That governance is essential to realizing the benefits of AI while mitigating unintended risks.
The focus on explainability
Examinations of AI applications focus on several key principles, perhaps the most crucial being explainability: the system's workings must be understandable and open to challenge. By emphasizing explainability and assessing the transparency and interpretability of their AI systems, banks can ensure the decision-making process is clear and comprehensible, build trust, meet regulatory expectations, and allow outcomes to be understood and challenged by regulators and other stakeholders.
Here’s a closer look at the top five insights from compliance examiners, which shed light on the importance of AI explainability, the need for a plain English executive summary, and proactive communication with examiners.
- AI explainability: A lack of explainability has limited the number of AI applications in use today. Banks must be able to explain transparently how their AI models work and address any concerns that arise.
- A plain English executive summary: Simplicity is key: banks must provide a plain English executive summary of their AI models. It should not rely on theoretical computations but present complex concepts in the simplest terms possible, so that stakeholders at all levels — including the board — understand the fundamental workings of the AI systems.
- Utilizing executive summaries at the board level: The executive summary should serve as a concise yet comprehensive overview of the AI model's functionality, addressing key components and potential risks. This ensures that decision-makers have a clear understanding of the AI initiatives employed within the institution.
- High-level summary: Banks must be capable of delivering a high-level summary of their AI models. Failing to provide such a summary raises concerns regarding the bank's ability to withstand critical challenges to the system. Examiners perceive this as a red flag, indicating potential vulnerabilities or inadequate risk management strategies.
- Proactive communication: In the spirit of transparency and collaboration, banks are encouraged to engage with examiners before formal examinations. Providing an overview of AI initiatives and addressing examiners' questions up front establishes a foundation of trust and demonstrates a commitment to compliance and effective risk management.
Other key principles discussed by compliance examiners include:
- Data management: A comprehensive understanding of data management practices is crucial when evaluating AI applications. This means assessing data sourcing, processing, and maintenance throughout the AI lifecycle. Effective data governance — including attention to data quality, privacy, and consent — is vital to comply with regulatory obligations, protect customer information, and ensure data integrity.
- Privacy and security: Because AI systems often handle sensitive data, privacy and security must be prioritized in the evaluation process. This includes assessing encryption, access controls, and compliance with privacy regulations to confirm that the measures in place safeguard data integrity and confidentiality and meet regulatory expectations.
- Risk management: Assessing and mitigating the risks of AI deployment is a key responsibility. A robust risk management framework — with governance structures, policies, procedures, and ongoing monitoring — allows organizations to identify potential risks, weigh their impact on compliance and customer protection, and address them proactively.
- Compliance monitoring: A strong compliance monitoring program is essential to ensure ongoing adherence to regulatory requirements. Regular audits and assessments identify deviations from compliance standards and create opportunities for corrective action, helping organizations maintain continuous compliance and mitigate potential non-compliance risks.
If third-party vendors are involved, additional considerations regarding data control and security are necessary. Banks must engage in proper risk management and ongoing oversight when using third-party solutions or collaborating with vendors for AI applications to ensure compliance, consumer protection, and privacy.
Governance plays a crucial role in managing the risks associated with AI. Banks must demonstrate proper documentation, testing protocols, model management, and vendor management, and ongoing audits are necessary to ensure compliance and effectiveness.
Risk analysis is a natural focus for AI applications, enabling better risk assessment and analysis. However, the lack of explainability in some AI applications poses challenges and has limited the number of applications in use.
AI is also being used in audit processes, particularly natural language processing for analyzing large data sets. It can identify narratives that lack certain required elements, enabling targeted sampling and data assessment, and it assists with data visualization by surfacing anomalies or patterns that require further investigation.
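The targeted-sampling idea above can be illustrated with a minimal sketch: screen case narratives for required elements and flag only those with gaps for human review. The element names and keyword lists below are illustrative assumptions, not regulatory definitions, and a production system would use far richer NLP than keyword matching.

```python
# Minimal sketch of narrative screening for targeted sampling.
# REQUIRED_ELEMENTS is a hypothetical mapping of element -> indicative
# keywords; real programs would define these per their own standards.
REQUIRED_ELEMENTS = {
    "subject": ("customer", "subject", "account holder"),
    "activity": ("wire", "deposit", "withdrawal", "transfer", "cash"),
    "timeframe": ("date", "between", "during", "on or about"),
}

def missing_elements(narrative: str) -> list[str]:
    """Return the required elements not mentioned in a narrative."""
    text = narrative.lower()
    return [
        element
        for element, keywords in REQUIRED_ELEMENTS.items()
        if not any(keyword in text for keyword in keywords)
    ]

def flag_for_review(narratives: dict[str, str]) -> dict[str, list[str]]:
    """Map narrative IDs to their missing elements, keeping only gaps."""
    flagged = {}
    for narrative_id, narrative in narratives.items():
        gaps = missing_elements(narrative)
        if gaps:
            flagged[narrative_id] = gaps
    return flagged
```

A reviewer would then pull only the flagged narratives for sampling, rather than reading every case file — the "targeted sampling" examiners described.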
The examination of AI and machine learning applications is often approached from an operational risk standpoint. The Office of the Comptroller of the Currency's (OCC) Comptroller's Handbook booklet on Model Risk Management is a must-read for any bank deploying AI models. It provides insight into the OCC's approach to AI examination and helps banks prepare their model risk management procedures accordingly.