Race to develop generative AI intensifies potential dangers, warns Barr, the Federal Reserve's vice chair for supervision
The integration of generative artificial intelligence (AI) in the financial sector is a topic of significant interest and concern, as the technology holds immense potential while also posing various risks.
### Potential Risks
The broad adoption of AI in core financial operations can amplify vulnerabilities:

- Heavy reliance on a few dominant AI service providers and hardware suppliers creates concentration and systemic risk.
- Use of similar AI models and training data across institutions can correlate market behavior and amplify systemic shocks.
- AI systems may be vulnerable to cyberattacks that could trigger market disruptions.
- Opaque and unstructured data sources complicate model validation and risk governance, and generative AI enables more sophisticated fraud methods.
### Model and Data Risks
AI models, particularly generative ones, can "hallucinate," producing inaccurate outputs with high confidence. Additional model and data concerns include:

- Lack of explainability and opacity in AI training data, which complicate monitoring and validation efforts.
- Data quality, governance, and controls that are essential yet difficult to enforce, especially with synthetic or third-party data sources.
- Growing risks of bias, unfairness, and data privacy breaches as the complexity and volume of AI-enabled processing increase.
### Cybersecurity Threats
AI lowers barriers for sophisticated cybercrime, enabling attacks at scale. Financial firms must defend against AI-powered disinformation and identity fraud.
### Regulatory Considerations and Controls
Regulators emphasize the need for appropriate controls, transparency, and ongoing surveillance of AI applications in financial services. Enhanced oversight and monitoring, data governance and security, model explainability and validation, and a sharpened focus on risk management are all critical to the safe and effective use of AI.
Banks and fintechs adopting generative AI must balance innovation with risk management and regulatory compliance, focusing on strong governance, data integrity, cybersecurity, and operational transparency to mitigate new risks emerging from AI’s transformative capabilities.
Michael Barr, the Federal Reserve's outgoing vice chair for supervision, has warned that competitive pressure around generative AI is raising risks in financial services. He emphasized the importance of AI governance, arguing that institutions should ensure AI enhances, rather than replaces, human judgment, and should establish best practices and cultural norms to that end. He also called for close monitoring of how generative AI's introduction alters banking, citing concerns that it could increase market volatility and stoke asset bubbles and crashes.
Fintechs and nonbanks face fewer regulatory constraints than lenders, and some fintechs have moved quickly to incorporate generative AI into customer-facing functions. When incorporating generative AI, however, every bank should weigh the technology's limitations and identify where humans are best positioned to remain in the loop.
Regulators should approach the fast-moving AI landscape with agility and flexibility to ensure the safety and stability of the financial sector while harnessing the benefits of AI.
In sum, generative AI could bolster productivity and financial performance across the fintech and banking sectors, but wider adoption brings unique risks: systemic exposure from reliance on a few dominant AI providers, potential market disruptions from cyberattacks, and increasingly sophisticated fraud. Capturing the technology's benefits will therefore depend on balancing innovation with effective risk management and regulatory compliance, grounded in strong governance, data integrity, cybersecurity, and operational transparency.