January Global Regulatory Brief: Digital finance


MAS publishes information paper on good practices for AI and generative AI model risk management for banks

The Monetary Authority of Singapore (MAS) published an information paper on good practices for AI and generative AI (GenAI) model risk management for banks. The paper follows MAS' thematic review of selected banks' AI model risk management practices earlier this year. The good practices relate to governance and oversight, key risk management systems and processes, and the development and deployment of AI. All financial institutions in Singapore are encouraged to refer to these good practices when developing and deploying AI.

Governance and oversight: Most banks reviewed have updated governance structures, roles and responsibilities, as well as policies and processes to address AI risks and keep pace with AI developments. Good practices include:

  • establishing cross-functional oversight forums to avoid gaps in AI risk management; 
  • updating control standards, policies and procedures, and clearly setting out roles and responsibilities to address AI risks;
  • developing clear statements and guidelines to govern areas such as fair, ethical, accountable and transparent use of AI across the bank; and 
  • building capabilities in AI across the bank to support both innovation and risk management.

Risk management systems and processes: Most banks have recognized the need to establish or update key risk management systems and processes for AI, particularly in the following areas:

  • policies and procedures for identifying AI usage and risks across the bank, so that commensurate risk management can be applied;
  • systems and processes to ensure the completeness of AI inventories, which capture the approved scope of use and provide a central view of AI usage to support oversight; and
  • assessment of the risk materiality of AI covering key risk dimensions, such as AI's impact on the bank and stakeholders, the complexity of the AI used, and the bank's reliance on AI, so that relevant controls can be applied proportionately (see the sketch after this list).
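
To make the inventory and risk-materiality ideas concrete, the following is a minimal Python sketch of how an inventory entry and a proportionate risk classification might be represented. The fields, scoring scales and thresholds are illustrative assumptions, not values prescribed by MAS.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskMateriality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIInventoryEntry:
    """One record in a bank-wide AI inventory (illustrative fields only)."""
    model_id: str
    description: str
    approved_scope_of_use: str   # the approved use case captured in the inventory
    owner: str                   # accountable business or function owner
    deployed_since: date
    # The three risk dimensions named in the paper: impact, complexity, reliance.
    impact_score: int            # 1 (minimal) .. 5 (severe impact on bank/stakeholders)
    complexity_score: int        # 1 (simple rules) .. 5 (complex GenAI/deep models)
    reliance_score: int          # 1 (human-in-the-loop) .. 5 (fully automated)

    def risk_materiality(self) -> RiskMateriality:
        """Aggregate the three dimensions; the thresholds here are assumptions."""
        total = self.impact_score + self.complexity_score + self.reliance_score
        if total >= 12:
            return RiskMateriality.HIGH
        if total >= 7:
            return RiskMateriality.MEDIUM
        return RiskMateriality.LOW

# Example: a complex, heavily relied-upon credit-scoring model lands in HIGH,
# so it would attract the fullest set of controls and independent validation.
entry = AIInventoryEntry(
    model_id="CRS-001",
    description="Retail credit scoring model",
    approved_scope_of_use="Unsecured personal loan applications",
    owner="Retail Credit Risk",
    deployed_since=date(2024, 3, 1),
    impact_score=4, complexity_score=4, reliance_score=4,
)
print(entry.risk_materiality())  # RiskMateriality.HIGH
```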

Development and deployment: Most banks have established standards and processes for development, validation, and deployment of AI to address key risks.

  • For development of AI, key areas that banks paid greater attention to include data management, model selection, robustness and stability, explainability and fairness, as well as reproducibility and auditability. 
  • For validation, banks required independent validations or reviews of AI of higher risk materiality prior to deployment, to ensure that development and deployment standards have been adhered to. For AI of lower risk materiality, most banks conducted peer reviews that are calibrated to the risks posed by the use of AI prior to deployment. 
  • To ensure that AI would behave as intended when deployed and that any data and model drift is detected and addressed, banks performed pre-deployment checks, closely monitored deployed AI against appropriate metrics, and applied appropriate change management standards and processes (a simple pre-deployment gate is sketched below).
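
As an illustration of a pre-deployment check, the sketch below only promotes a candidate model if it clears an absolute performance floor and does not materially regress against the currently deployed model. The metric, names and thresholds are assumptions for illustration, not values from the MAS paper.

```python
# Illustrative pre-deployment gate. MIN_AUC and MAX_DEGRADATION are assumed
# thresholds a bank might set in its change management standards.
MIN_AUC = 0.70          # absolute floor for the validation metric
MAX_DEGRADATION = 0.02  # allowed drop vs. the deployed ("champion") model

def pre_deployment_check(candidate_auc: float, champion_auc: float) -> bool:
    """Return True only if the candidate model may replace the champion."""
    meets_floor = candidate_auc >= MIN_AUC
    no_regression = (champion_auc - candidate_auc) <= MAX_DEGRADATION
    return meets_floor and no_regression

assert pre_deployment_check(candidate_auc=0.78, champion_auc=0.76)      # passes
assert not pre_deployment_check(candidate_auc=0.68, champion_auc=0.76)  # below floor
```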

Next steps: MAS is considering issuing supervisory guidance next year (i.e. guidance that financial institutions would be strongly encouraged to implement), building upon the focus areas covered in this information paper.

RBI constitutes committee to develop a Framework for Responsible and Ethical Enablement of AI

The Reserve Bank of India (RBI) has constituted a seven-member committee on the Framework for Responsible and Ethical Enablement of AI (FREE-AI) to develop an adaptable AI framework for the financial sector. The committee comprises experts from academia, government, and industry.

Objective in detail: The financial sector landscape is witnessing paradigm shifts with the advent of frontier technologies such as AI and machine learning (ML). While these technologies hold transformative potential to deliver unprecedented efficiencies, the attendant risks, such as algorithmic bias, lack of explainability in decisions, and data privacy concerns, are also significant. FREE-AI is intended to harness the benefits of such technologies while providing measures to address the attendant risks early in the adoption cycle.

The terms of reference of the committee primarily include:

(i) Assessment of the current level of adoption of AI in financial services, globally and in India.

(ii) Review of regulatory and supervisory approaches to AI, with a focus on the financial sector globally.

(iii) Identification of potential risks associated with AI, if any, and recommendation of an evaluation, mitigation and monitoring framework, along with consequent compliance requirements for financial institutions.

(iv) Recommendation of a framework, including governance aspects, for the responsible and ethical adoption of AI models and applications in the Indian financial sector.

The committee may also invite domain experts, industry representatives and other stakeholders for consultations and/or to participate in deliberations.

Looking forward: The committee will submit its report to the RBI within six months of the date of its first meeting.

Bipartisan House Task Force issues report on Artificial Intelligence

The US House of Representatives’ Bipartisan Task Force on Artificial Intelligence (the “AI Task Force”) issued a 253-page report (the “Report”) containing policy recommendations and guiding principles with the goal of advancing U.S. leadership in AI innovation. While the Report does not carry the weight of law, it will likely serve as a foundation for any potential Congressional action relating to AI in the next Congress.

In more detail: The AI Task Force, established earlier this year in a joint effort between House Speaker Mike Johnson and Democratic Leader Hakeem Jeffries, was tasked with, among other things, considering whether Congress should establish guardrails regarding the development and use of AI and investigating a variety of use cases, both in the public and private sectors.

The Report contains more than 80 policy recommendations for Congress to consider. They relate to, among other things, the use of AI in financial services, intellectual property rights, the protection of civil rights and liberties, national security implications, and content authenticity. Recommendations relating to financial services include:

  • Increasing regulators’ expertise with AI;
  • Maintaining consumer and investor protections in the use of AI;
  • Considering the use of regulatory “sandboxes” to allow for AI experimentation; and
  • Supporting a principles-based regulatory approach that can accommodate rapid technological changes.

Related developments: The Report comes as U.S. regulators continue to engage firms on their use of AI. A recent example is a CFTC staff advisory on the use of artificial intelligence in CFTC-regulated markets by registered entities, which reminds those entities of their obligations under the Commodity Exchange Act and the CFTC's regulations as they begin to implement AI. Additionally, the SEC's Fiscal Year 2025 Examination Priorities notes that SEC-regulated entities may face examination of certain AI-related disclosures and policies during future examinations.

Israeli task force publishes report on AI in financial services

The Inter-Agency Task Force in Israel has issued its interim report on the Use of Artificial Intelligence in Israel's Financial Sector, which is open for public comment until December 15, 2024.

In summary: The report reviews the benefits, risks and current status of AI within the financial sector.

  • The report states that whilst adoption is currently slow, it will gather pace; regulation should therefore not be imposed in a way that inhibits innovation.
  • It suggests that a prescriptive approach should be avoided and that this interim report is the first of many reviews that will take account of the progression of AI.

Guiding principles: Whilst appreciating that the integration of AI into financial services is unavoidable and brings many benefits, the task force believes that an appropriate level of risk should be tolerated, provided certain guiding principles are followed.

  • Regulation should be flexible and adaptable.
  • Regulation should align with international standards.
  • Technical integration and innovation should remove barriers to market development.
  • Regulatory tools should promote experimentation and learning.
  • New regulatory measures should be introduced only where needed (as opposed to a default position where the use of AI triggers regulatory change).
  • Any regulatory approach should be risk-based. 
  • Consumer protection, social and human rights must be incorporated into the regulation of AI. 
  • Where possible there should be regulatory uniformity across similar services and risks.
  • Where possible regulation should be technologically neutral. 

The report looks further into the challenges and considerations for AI, including the right to privacy and personal data protection, bias and discrimination, and liability and accountability. It also examines the governance of AI, as well as the risks that the use of AI could present, including financial stability risks and operational risks such as cyber, third-party, fraud and disinformation risks.

FINMA publishes guidance on governance and risk management for firms using AI

The Swiss financial markets regulator FINMA has published guidance on governance and risk management for financial firms using AI.

In summary: While there is no AI-specific legislation in Switzerland, FINMA expects firms that use AI to ensure effective governance and risk management. To help firms with this, FINMA has provided specific examples of measures to address risks resulting from AI applications at supervised institutions.

Findings: FINMA finds that the risks from the use of AI are mainly in the area of operational risk, particularly model risk as well as IT and cyber risks. 

  • The growing dependence on third parties is also amplifying these risks. 
  • Legal and reputational risks are also considerations, given the autonomous and difficult-to-explain behavior of these systems and the scattered responsibilities for AI applications.

Governance: FINMA observed that supervised institutions focus primarily on data protection risks, but less on model risks such as lack of robustness and correctness, bias, lack of stability and explainability. 

  • In addition, the development of AI applications is often decentralized, making it challenging to implement consistent standards and assign responsibilities clearly to staff.
  • In the case of externally purchased applications and services, the supervised institutions sometimes had difficulties determining whether AI is included, which data and methods are used and whether sufficient due diligence exists.
  • FINMA expects firms to have a centrally managed inventory with a risk classification and resulting measures, definition of responsibilities, requirements for model testing and supporting system controls, documentation standards and broad training measures.
  • In the case of outsourcing, FINMA looks at whether firms have implemented additional tests, controls and contractual clauses governing responsibilities and liability, and have ensured that the third parties entrusted with the outsourcing have the necessary skills and experience.

Inventory and risk classification: FINMA assessed whether the supervised institutions had a sufficiently broad definition of AI, as traditional applications can also present similar risks and the same risks must be addressed in the same way. 

  • FINMA also considered the existence and completeness of AI inventories and the risk classification of AI applications.

Data quality: FINMA observed that some firms have not defined any requirements or controls to ensure data quality for AI applications.

  • As AI applications often learn from data automatically, data quality is often more important than the selection of the specific model.
  • In the case of purchased solutions, regulated firms often have no influence over, or knowledge of, the underlying data, which can result in unsuitable data being used and increases the risk of unknowingly using deliberately manipulated data.
  • With the increased use of AI, more unstructured data, such as text and images, is being analyzed, which can make quality difficult to assess.
  • FINMA assessed whether the supervised institutions have defined requirements in their internal rules and directives to ensure that data is complete, correct and has integrity, and that the availability of and access to data is secured (a sketch of such checks follows this list).
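
A minimal Python sketch of automated data-quality checks of the kind described above: completeness (no missing required fields) and correctness (plausible ranges). The column names and tolerances are illustrative assumptions, not FINMA requirements.

```python
def check_data_quality(records: list[dict]) -> list[str]:
    """Return a list of data-quality findings for a batch of input records."""
    findings = []
    required = ("customer_id", "income", "age")
    for i, row in enumerate(records):
        # Completeness: every required field must be present and non-null.
        missing = [f for f in required if row.get(f) is None]
        if missing:
            findings.append(f"record {i}: missing {missing}")
            continue
        # Correctness: values must fall within plausible ranges.
        if not (0 <= row["age"] <= 120):
            findings.append(f"record {i}: implausible age {row['age']}")
        if row["income"] < 0:
            findings.append(f"record {i}: negative income {row['income']}")
    return findings

batch = [
    {"customer_id": "C1", "income": 52_000, "age": 34},
    {"customer_id": "C2", "income": -10, "age": 34},
    {"customer_id": "C3", "income": 48_000, "age": None},
]
for finding in check_data_quality(batch):
    print(finding)
```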

Tests and ongoing monitoring: FINMA observed weaknesses in the selection of performance indicators, tests and ongoing monitoring at some of the supervised institutions.

  • FINMA assessed whether the supervised institutions monitor changes in input data to ensure that models remain applicable in a changing environment (recognition and treatment of data drift; a drift-monitoring sketch follows this list).
  • FINMA also assessed whether the supervised institutions give prior consideration to recognizing and handling exceptions.
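
One common way to recognize data drift is the Population Stability Index (PSI), which compares the live distribution of a feature against its training-time baseline. The sketch below is a minimal illustration; the binning scheme and the 0.2 alert threshold are conventional rules of thumb rather than FINMA requirements, and the alert branch is where an institution's exception-handling process would take over.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index of `actual` vs. the `expected` baseline."""
    lo, hi = min(expected), max(expected)

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            # Bin by position in the baseline range; clamp out-of-range live values.
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        # Smooth with a tiny constant so empty bins do not produce log(0).
        return [(c + 1e-6) / (len(values) + bins * 1e-6) for c in counts]

    p_exp, p_act = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p_exp, p_act))

baseline = [float(x % 100) for x in range(1000)]     # training-time distribution
live = [float(x % 100) + 25.0 for x in range(1000)]  # shifted live distribution

score = psi(baseline, live)
if score > 0.2:  # common rule of thumb: PSI above 0.2 signals significant drift
    print(f"ALERT: data drift detected, PSI = {score:.3f}")
```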

Documentation: FINMA observed that some supervised institutions do not have centralized documentation requirements and that some of the existing documentation is not sufficiently detailed and recipient-oriented.

Explainability: FINMA observed that results often cannot be understood, explained or reproduced and therefore cannot be critically assessed.

  • Explainability includes understanding the drivers of the applications, or their behavior under different conditions, in order to assess the plausibility and robustness of the results (a simple technique is sketched below).
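
One widely used, model-agnostic way to surface the drivers of an application is permutation importance: shuffle one input feature at a time and measure how much performance degrades. The toy model and data below are illustrative assumptions, not a FINMA-endorsed method.

```python
import random

def toy_model(row: dict) -> int:
    """Stand-in scoring model: approve when income is high and debt is low."""
    return 1 if row["income"] * 0.7 - row["debt"] * 0.5 > 10 else 0

def accuracy(rows: list[dict], labels: list[int]) -> float:
    return sum(toy_model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature: str, seed: int = 0) -> float:
    """Drop in accuracy when `feature` is shuffled across records."""
    shuffled_vals = [r[feature] for r in rows]
    random.Random(seed).shuffle(shuffled_vals)
    shuffled_rows = [{**r, feature: v} for r, v in zip(rows, shuffled_vals)]
    return accuracy(rows, labels) - accuracy(shuffled_rows, labels)

rng = random.Random(1)
rows = [{"income": rng.uniform(0, 50), "debt": rng.uniform(0, 30)} for _ in range(500)]
labels = [toy_model(r) for r in rows]  # labels produced by the true decision rule

# A larger accuracy drop means the feature is a stronger driver of the output.
for feat in ("income", "debt"):
    print(f"{feat}: importance = {permutation_importance(rows, labels, feat):.3f}")
```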

Independent review: FINMA has not always seen a clear distinction between the development of AI applications and their independent review.

  • Only a few supervised institutions carry out an independent review of the whole model development process by qualified personnel.

Looking ahead: FINMA will refine its expectations on appropriate governance and risk management by supervised institutions in connection with AI.


