In the complex ecosystem of modern banking, artificial intelligence (AI) and its subset machine learning (ML) are revolutionizing traditional processes, from enhancing customer service to redefining credit risk assessment. Based on my extensive experience
in risk management, and now in auditing large financial organizations, I’ve witnessed the transformative potential of these technologies. Yet, with innovation comes new avenues for fraud and a pressing need for vigilant oversight.
Whether you’re drawn into a case as a fraud examiner, an auditor, or a customer encountering identity fraud, understanding credit risk modeling within financial organizations is essential. This knowledge is crucial regardless of your role because it impacts
how financial products, from bank accounts to mortgages and student loans, are managed and secured globally. Appreciating how financial institutions assess the risks tied to each customer — whether an individual or a corporation — is fundamental for
anyone engaged with financial services. This understanding shapes how we perceive the safety and reliability of financial products that affect everyday financial decisions.
The evolution of credit risk modeling
Credit risk modeling, a cornerstone of banking operations, has evolved significantly with the advent of AI and ML. These technologies offer sophisticated tools to analyze vast datasets, predict loan defaults with greater accuracy and tailor financial
products to individual customer profiles. However, this shift also introduces complexities in data integrity, algorithmic bias and the transparency of decision-making processes.
The Basel Committee on Banking Supervision’s Principles for Effective Risk Data Aggregation and Risk Reporting, first issued in January 2013, aims to enhance the banking sector’s capability to manage risk data effectively, particularly for global systemically
important banks (G-SIBs). (See “Global systemically important banks: assessment methodology and the additional loss absorbency requirement,” Basel Committee on Banking Supervision, Nov. 27,
2023.) The principles cover several areas, including risk data aggregation capabilities, risk-reporting practices, and the importance of robust governance and data architecture to support these functions. (See “Principles for effective risk data aggregation and risk reporting,” Basel Committee on Banking Supervision, January 2013.)
Machine learning’s double-edged sword
Machine learning, while powerful, operates on the principle of “garbage in, garbage out.” The quality of datasets and the objectivity of algorithms are paramount. Biases in data or design can lead to skewed risk assessments, unfairly affecting loan approvals
or interest rates. Herein lies a significant challenge for fraud examiners: ensuring these innovative models do not inadvertently facilitate financial fraud or discrimination.
Real-life examples offer in-depth insight into the practical implications of AI and ML in banking
Example 1: Credit risk scoring challenges
Credit scoring systems are pivotal in financial institutions; they use sophisticated scorecards that assign individuals a three-digit score, typically ranging from 300 to 850. This score helps determine one’s borrowing capability. However, interpreting
these scores presents challenges, as the scoring system often categorizes individuals without sufficient context or explanation. For example, someone with a score of 720 falls within the “good” range, but the underlying factors contributing to this
score — like timely payments or credit utilization — aren’t explicitly detailed. This lack of transparency can perplex both customers and loan officers, leading to potential misunderstandings or misjudgments in lending.
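To make the interpretability gap concrete, here is a minimal Python sketch that maps a score to a band and surfaces the weakest contributing factors as rough "reason codes." The score bands, factor names and strength values are illustrative assumptions, not any bureau's actual methodology.

```python
# Illustrative only: the bands, factor names and strength values below are
# assumptions, not any credit bureau's actual scoring methodology.

SCORE_BANDS = [          # (lower bound, label)
    (800, "exceptional"),
    (740, "very good"),
    (670, "good"),
    (580, "fair"),
    (300, "poor"),
]

def explain_score(score: int, factor_strengths: dict) -> str:
    """Map a three-digit score to a band and surface the weakest factors."""
    if not 300 <= score <= 850:
        raise ValueError("scores are expected to fall between 300 and 850")
    band = next(label for floor, label in SCORE_BANDS if score >= floor)
    # Sort factors from weakest to strongest and report the two weakest as
    # rough 'reason codes' so the number comes with some context.
    weakest = sorted(factor_strengths, key=factor_strengths.get)[:2]
    return f"{score} ({band}); main drags on the score: {', '.join(weakest)}"

# Factor strengths normalized to 0 (weak) through 1 (strong) for illustration.
print(explain_score(720, {
    "payment history": 0.90,
    "credit utilization": 0.45,
    "length of credit history": 0.70,
}))
```

Even this toy version shows why the number alone is not enough: the same 720 can sit on very different mixes of underlying factors, and a loan officer or customer needs the reasons, not just the band.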
Example 2: Algorithmic bias in loan approvals
In the U.S., a significant challenge has arisen from the use of postal codes within algorithms that determine mortgage eligibility. These algorithms, while designed to streamline the approval process by assessing geographical data, can inadvertently encode racial biases. For instance, applicants in neighborhoods predominantly inhabited by minority ethnic groups might receive unfavorable terms or outright denials, not because of individual creditworthiness but because of historical socioeconomic factors affecting
those postal codes. This unintended consequence of algorithmic decision-making illustrates how AI can perpetuate existing societal biases if not carefully monitored and adjusted.
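An examiner does not need access to the model internals to test for this kind of proxy bias; comparing outcomes across geographic groups is often enough to justify a deeper review. The sketch below uses an entirely hypothetical decision log and an 80% screening threshold borrowed loosely from the "four-fifths" adverse-impact rule of thumb.

```python
from collections import defaultdict

# Hypothetical decision log of (postal_code, approved) pairs. In a real
# review this would come from the lender's decision records.
decisions = [
    ("10001", True), ("10001", True), ("10001", False), ("10001", True),
    ("60644", False), ("60644", False), ("60644", True), ("60644", False),
]

def approval_rates(records):
    counts = defaultdict(lambda: [0, 0])  # postal_code -> [approved, total]
    for code, approved in records:
        counts[code][0] += int(approved)
        counts[code][1] += 1
    return {code: approved / total for code, (approved, total) in counts.items()}

rates = approval_rates(decisions)
best = max(rates.values())
for code, rate in rates.items():
    # Flag areas whose approval rate falls below 80% of the best-served area,
    # loosely echoing the 'four-fifths' adverse-impact screening rule.
    if rate < 0.8 * best:
        print(f"Review {code}: approval rate {rate:.0%} vs. best area {best:.0%}")
```

A flag from a screen like this is not proof of discrimination, but it tells the examiner exactly where to start asking how the algorithm uses location data.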
Example 3: Machine learning models for financial crime detection
Banks employ both supervised and unsupervised machine learning models to detect and prevent financial crimes. These models analyze patterns in transaction data to identify anomalies that may indicate fraud. However, one significant challenge is their
dependency on historical data, which may not fully capture new and evolving fraudulent tactics. For instance, as cyber criminals adopt more sophisticated methods, the models might fail to recognize these patterns, leading to blind spots in fraud detection.
Continuous updates and training with new datasets are essential to maintain the effectiveness of these systems.
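The sketch below illustrates the unsupervised side of this approach with scikit-learn's isolation forest on synthetic transaction data. The two features, the contamination setting and the data itself are assumptions for illustration; production systems draw on far richer features and are retrained as behavior shifts.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: [amount, hour_of_day] -- stand-ins for
# the much richer feature sets banks actually use.
normal = np.column_stack([rng.lognormal(3, 0.5, 1000), rng.integers(8, 20, 1000)])
odd = np.array([[5000.0, 3], [7500.0, 2]])  # large amounts at unusual hours
transactions = np.vstack([normal, odd])

# Unsupervised model: no fraud labels needed, it learns what 'typical' looks like.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomaly, 1 = normal

print(f"{(flags == -1).sum()} transactions flagged for review out of {len(transactions)}")
# The catch described above: a model fitted on yesterday's 'normal' must be
# refitted as customer behavior and fraud tactics shift.
```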
Benefits and challenges of ML in banking
Benefits:
- Improved prediction accuracy: Machine learning significantly enhances the accuracy of predicting loan defaults and customer behavior. By analyzing vast arrays of historical data, AI algorithms can identify subtle patterns that may be invisible
to human analysts.
- Enhanced detection of financial
crime and compliance issues: AI systems are invaluable in their ability to monitor transactions in real time, swiftly identifying irregularities that could indicate money laundering, insider trading or other compliance issues.
Challenges:
- Algorithmic bias: As seen with postal code usage in loan processing, ML systems can inadvertently learn and perpetuate biases present in their training data. This issue requires ongoing vigilance and periodic reviews of the model’s decision
criteria.
- Lack of transparency and explainability: AI systems often operate as black boxes, with decision-making processes that aren’t visible or understandable to users or regulators. This opaqueness can undermine trust in AI systems and complicate regulatory compliance efforts (a simple model-agnostic check is sketched after this list).
- New forms of financial fraud: As AI tools become more sophisticated, so do the tactics of those looking to exploit financial systems. This arms race between financial institutions and fraudsters necessitates a constant evolution of AI technologies
to detect and mitigate these risks.
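One practical, model-agnostic way to pry open a black box is permutation importance: shuffle each input feature in turn and measure how much predictive performance drops. The sketch below uses scikit-learn on synthetic applicant data; the feature names, the model choice and the data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["payment_history", "utilization", "credit_age", "postal_region"]

# Synthetic applicant data: the default label is driven mainly by the
# first two features, so a sound model should lean on those.
X = rng.normal(size=(2000, 4))
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# predictive performance drops -- a model-agnostic view of what the
# 'black box' actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:15s} {score:.3f}")
```

If a supposedly creditworthiness-driven model instead ranks a proxy feature such as postal region near the top, that is a direct lead for the examiner and a documentation gap for the regulator.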
The importance of addressing these challenges is highlighted in regulatory documents such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), first released in January 2023, and the Bank of England’s FS2/23. Both frameworks emphasize the need for transparency,
fairness and bias mitigation. (See “Technical and Policy Documents,” National Institute of Standards and Technology and “PRA Regulatory Digest - October 2023,”
Bank of England, Nov. 1, 2023.)
Although the integration of AI and ML into banking operations clearly offers significant advantages, it also presents unique challenges that must be addressed. For Certified Fraud Examiners (CFEs), understanding these nuances is critical — not only to
leverage AI effectively but also to ensure that it’s used in a manner that’s fair, transparent and secure. As these technologies continue to evolve, so too must the strategies employed by those responsible for overseeing their use in the financial
sector.
Operational risks and human oversight
Operational risks in deploying AI and ML models extend beyond data integrity to include issues of cybersecurity, model governance and ethical use. The human factor remains crucial, underscoring the need for skilled fraud examiners who can navigate the
nuances of AI applications, identify potential weaknesses and advocate robust ethical standards.
Selected key governance frameworks and principles
Real-life insights include the following:
- Example 1: Model validation
failures
In one instance, a reinforcement learning model (a subset of AI focused on making decisions by learning from interactions) used for market hedging led to significant losses due to incorrect assumptions. Despite passing initial validations,
the model failed to adapt to dynamic changes in market conditions. (See “Deep Hedging of Derivatives Using Reinforcement Learning,” by Jay Cao, Jacky Chen, John Hull and Zissis Poulos,
Joseph L. Rotman School of Management, University of Toronto, July 2020.)
- Example 2: Conceptual soundness and explainability
A credit scoring model, such as one used by FICO or Experian, evaluates a person’s credit risk by examining various data points from their credit report. If a model incorrectly predicts higher default rates due to issues
like data drift, it underscores the importance of robust conceptual soundness and explainability. Credit score methodologies are widely used and typically consider factors such as payment history, amounts owed, length of credit history, new credit
and types of credit used. More complex models are used for corporate customers, with regulators reviewing the underlying methodologies, such as probability of default (PD), loss given default (LGD) and other parameters that feed expected-loss calculations.
Data drift refers to a change in the model’s input data distribution over time, which can degrade the model’s performance. This is particularly critical for credit scoring models that rely on historical data to predict future behaviors; a simple drift check is sketched after these examples. (See “Credit Scoring with Drift Adaptation Using Local Regions of Competence,” by Dimitrios Nikolaidis and Michalis Doumpos, Operations Research Forum, Nov. 25, 2022.)
Conceptual soundness ensures that a model’s theoretical foundation and assumptions
are valid. Explainability in AI and ML models means making the model’s decisions understandable to humans, which is crucial in regulated environments like credit scoring. (See “Model risk management principles for banks,”
Bank of England PRA, May 2023 and “Model Risk Management,” Office of the Comptroller of the Currency, August 2021.)
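A common way credit-risk teams monitor for data drift is the population stability index (PSI), which compares the score distribution used to develop a model with the distribution observed on recent applicants. Below is a minimal Python sketch; the bucket count and the alert thresholds in the closing comment are widely used rules of thumb, not regulatory requirements.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a model's development sample and recent applicants."""
    # Bucket both samples on the development sample's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # Guard against empty buckets before taking the log ratio.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
development = rng.normal(680, 50, 10_000)   # scores at model build time
recent = rng.normal(650, 60, 10_000)        # scores a year later

psi = population_stability_index(development, recent)
# Common rule of thumb (an assumption, not a regulation): below 0.1 stable,
# 0.1-0.25 monitor, above 0.25 investigate and consider redevelopment.
print(f"PSI = {psi:.3f}")
```

A rising PSI does not say why the population has shifted, but it tells validators and examiners that the model is now scoring applicants it was never built to see.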
Regulatory frameworks and guidelines
- NIST AI RMF and Public
Working Group participation.
NIST, an agency of the U.S. Department of Commerce, developed the AI Risk Management Framework (AI RMF), a comprehensive tool designed through global collaboration with more than 2,500 members, including myself.
Its main goal is to enhance the trustworthiness of AI systems by providing guidelines covering governance, risk assessment and the responsible deployment of AI technologies. (See “Technical and Policy Documents,”
National Institute of Standards and Technology.)
- EU AI Act (April 2024).
The EU AI Act, finalized on April 30, 2024, represents a landmark piece of legislation aiming to set a comprehensive legal framework for the use and regulation of AI across the European Union. (See “EU AI Act: first
regulation on artificial intelligence,” European Parliament, Aug. 6, 2023.) The key components include:
- Risk-based categorization: unacceptable, high, limited and minimal risks.
- Transparency requirements: ensuring that AI systems are transparent about their functionality.
- Links to the General Data Protection Regulation (GDPR)
and digital regulation: reinforcing protections against AI that might compromise personal data privacy or lead to discrimination. (See “What is GDPR, the EU’s new data protection law?”
GDPR.EU.)
- Implications for credit
risk modeling: strict accuracy, documentation, and mitigation procedures to handle errors or biases in AI-driven models.
- BIS - Basel Committee guidelines
Principles for the Sound Management of Operational Risk (2011, revised 2021): Strengthening operational resilience, information and communication technology (ICT) continuity and business continuity plans. (See “Principles for the Sound Management of Operational Risk,”
Basel Committee on Banking Supervision, June 2011.)
Regulatory sandboxes and global developments
- E.U. and U.S. Sandboxes
The European Blockchain Regulatory Sandbox and the DLT Pilot Regime 2022 serve as testing grounds for blockchain innovations within a controlled environment. (See “European Blockchain Regulatory Sandbox,”
European Commission and “DLT Pilot Regime,” European Securities and Markets Authority.) Similarly, U.S. states like Arizona and the U.K.’s Financial Conduct Authority have introduced
regulatory sandboxes to experiment with and advance blockchain technologies. (See “Welcome To Arizona's Regulatory Sandbox,” Arizona Attorney General and “Regulatory Sandbox,” Financial Conduct Authority, March 27, 2022.)
- Monetary Authority of Singapore’s Veritas initiative
The Veritas initiative emphasizes responsible AI use in finance, focusing on fairness, ethics, accountability, and transparency (FEAT) principles. It involves collaborative efforts among financial institutions and tech firms to integrate FEAT principles
into AI systems. (See “Veritas Initiative,” Monetary Authority of Singapore, Oct. 26, 2023.)
Future directions and ethical considerations
The future of AI in banking will undoubtedly bring further innovations, along with regulatory and ethical considerations. The development of global standards for AI use in financial services, transparency in algorithmic decision-making and the protection
of consumer privacy are areas requiring careful attention. By leveraging these technologies responsibly and maintaining a keen eye on their implications, we can harness their potential while safeguarding the integrity of the financial system. As fraud
examiners, our journey is one of continuous learning and adaptation, ensuring that as the banking world evolves, we’re always several steps ahead in the prevention of fraud and the mitigation of its related risks.
Britta
Bohlinger, CFE, is a compliance auditor focusing on data governance and the auditing of systems and processes within the financial sector on behalf of a governmental authority. With a substantial background in investment banking, Bohlinger utilizes
an in-depth understanding of risk management to address the complexities of fraud prevention and AI applications in banking. A Certified Fraud Examiner (CFE) and Agile-certified professional, Bohlinger has been an active member of the ACFE since 2014
and has dedicated herself to mentoring aspiring fraud examiners within the organization since 2018, promoting innovative and ethical practices in fraud examination.
Contact her on LinkedIn.
Links for reference:
- Bank of England’s official publication section, Bank of England
- BIS – Basel Committee on Banking Supervision
- European Banking Authority (EBA)
- European Parliament - EU AI Act
- FCA UK
- NIST AI Risk Management Framework (January 2023), NIST
- Singapore MAS