The US has released a guide to managing AI risks for the financial industry.

The United States Department of the Treasury has just released a series of documents aimed at assisting the U.S. financial services industry in managing risks when deploying artificial intelligence (AI). These documents propose a systematic approach for financial institutions to assess and manage AI risks in their operations and policies.


At the heart of the document is the Financial Services AI Risk Management Framework (FS AI RMF), accompanied by a Guidebook detailing its application. This framework was developed in collaboration with over 100 financial institutions and industry associations, with input from regulatory bodies and technical organizations.

The goal of the FS AI RMF is to help financial institutions identify, assess, manage, and monitor risks associated with AI systems, while enabling them to continue adopting this technology responsibly.


A regulatory framework specifically for the financial industry

The emergence of AI has created a range of risks that traditional technology governance models have not fully addressed.

These issues include algorithmic bias, lack of transparency in decision-making, cybersecurity vulnerabilities, and complex dependencies between systems, data, and models.

Large language models (LLMs) further exacerbate these concerns. Unlike traditional software with clearly defined behavior, AI-generated results can vary depending on context, making them harder to predict or interpret.

The financial industry already operates within a highly regulated environment, and there have been many general guidelines in the past, such as the National Institute of Standards and Technology's AI risk management framework. However, these general frameworks often lack the detail to fully reflect the specific operational characteristics and legal requirements of the financial sector.


The FS AI RMF is therefore seen as an extension of the NIST framework, adding specific controls and practical implementation guidance for financial institutions.

The accompanying guide helps businesses assess the maturity level of AI adoption and establish controls to mitigate risks, while promoting consistent and responsible AI adoption across the industry.


The main structure of the management framework

FS AI RMF is designed to connect AI governance with existing governance, risk management, and compliance processes within financial organizations.


This framework comprises four main components. The first is an AI adoption assessment questionnaire, which allows organizations to determine the maturity level in using this technology. The second component is a risk and control matrix, which includes risk statements and control targets appropriate to each stage of AI adoption.

The manual provides guidance on implementing the management framework, while the separate reference document offers examples of controls and the types of evidence required to demonstrate compliance.

In total, the FS AI RMF identifies 230 control objectives, organized into four main functions based on the NIST AI risk management framework model: governance, risk mapping, measurement, and management.

Each function is further divided into categories and subcategories that describe the essential elements for building an effective AI risk management and governance system.
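The hierarchy described above (functions containing categories, which in turn hold control objectives with associated compliance evidence) can be sketched as a simple data structure. The four function names follow the NIST-derived split mentioned in the text; the category names, objective wording, and evidence types below are illustrative assumptions, not quoted from the actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class ControlObjective:
    """A single control objective plus the evidence expected for compliance."""
    objective: str
    evidence: list[str] = field(default_factory=list)

# Four main functions from the NIST-derived model; category and objective
# text below is illustrative, not taken from the FS AI RMF itself.
framework: dict[str, dict[str, list[ControlObjective]]] = {
    "govern": {
        "policies": [ControlObjective(
            objective="AI use policies are approved and reviewed periodically",
            evidence=["policy document", "review minutes"])],
    },
    "map": {
        "context": [ControlObjective(
            objective="Each AI use case is inventoried with its business impact",
            evidence=["use-case register"])],
    },
    "measure": {},
    "manage": {},
}

# Count objectives across the whole hierarchy (the real framework has 230).
total = sum(len(objs) for cats in framework.values() for objs in cats.values())
print(total)  # → 2 in this toy sketch
```

Keeping evidence alongside each objective mirrors how the reference document pairs controls with the proof needed to demonstrate compliance.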

A key part of the framework is a questionnaire that helps determine the extent to which an organization is using AI.

Some businesses use traditional predictive models only on a limited scale. Others deploy AI in core business processes, or apply AI in direct customer service.

The assessment helps determine an organization's position within the AI application spectrum, based on factors such as the business impact of AI, governance mechanisms, deployment models, use of third-party AI vendors, organizational goals, and data sensitivity.

From there, organizations are divided into four stages of development.

In the early stage, AI is largely absent from practical operations and is only under consideration. In the minimal stage, AI is used in a few low-risk areas or in isolated systems.


As organizations progress to the development phase, they begin operating more complex AI systems, including applications that process sensitive data or rely on external services. The final stage is when AI is deeply integrated into business operations and plays a crucial role in decision-making.

This classification helps organizations focus on implementing controls that are appropriate to their level of maturity. Early-stage businesses don't need to implement all controls immediately, but as AI becomes more deeply integrated, the management framework will add measures to address increased risk.
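As a rough illustration of how a questionnaire result might map to the four stages and gate which controls apply, here is a hypothetical scoring sketch. The factor names echo those listed above, but the 0-3 answer scale, the thresholds, and the cumulative control gating are invented for illustration and are not part of the FS AI RMF.

```python
# Hypothetical maturity scoring: each factor is answered 0 (none) to 3 (extensive).
# Stage names, thresholds, and gating are illustrative, not from the FS AI RMF.
STAGES = ["early", "minimal", "developing", "integrated"]

def maturity_stage(factors: dict[str, int]) -> str:
    """Map questionnaire answers (0-3 each) to one of four adoption stages."""
    score = sum(factors.values()) / (3 * len(factors))  # normalize to 0..1
    if score < 0.25:
        return "early"
    if score < 0.50:
        return "minimal"
    if score < 0.75:
        return "developing"
    return "integrated"

def applicable_controls(stage: str, controls: dict[str, str]) -> list[str]:
    """Controls apply cumulatively: later stages inherit earlier-stage controls."""
    cutoff = STAGES.index(stage)
    return [name for name, s in controls.items() if STAGES.index(s) <= cutoff]

answers = {"business_impact": 2, "governance": 1, "deployment_model": 2,
           "third_party_vendors": 3, "data_sensitivity": 2}
stage = maturity_stage(answers)
print(stage)  # → developing (score 10/15 ≈ 0.67)
```

The cumulative gating in `applicable_controls` reflects the idea in the text that early-stage businesses start with a subset of controls, and additional measures attach as AI becomes more deeply integrated.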

Risk control in AI systems

Control objectives at each stage of AI application encompass various aspects such as data governance, monitoring algorithmic bias, system security, transparency in decision-making, and the ability to maintain operational stability.

The manual also provides examples of control measures and the types of evidence organizations can use to demonstrate compliance. However, each business should ultimately choose the measures that best suit its system.

The management framework also recommends developing specific incident response procedures for AI systems, as well as creating a central database to track AI-related incidents. These measures help organizations detect system failures early and improve governance processes over time.
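A central AI incident database of the kind recommended here could be as simple as an append-only record store that supports queries per system for governance reviews. The fields and severity labels below are assumptions about what such a database might track, not requirements taken from the framework.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """One entry in a central AI incident database (fields are illustrative)."""
    system: str
    description: str
    severity: str          # e.g. "low", "medium", "high" (assumed labels)
    detected_at: datetime

class IncidentLog:
    """Append-only central log of AI-related incidents."""
    def __init__(self) -> None:
        self._records: list[AIIncident] = []

    def record(self, incident: AIIncident) -> None:
        self._records.append(incident)

    def by_system(self, system: str) -> list[AIIncident]:
        """Support governance reviews: all incidents for a given AI system."""
        return [r for r in self._records if r.system == system]

log = IncidentLog()
log.record(AIIncident("credit-scoring-model", "score drift beyond threshold",
                      "medium", datetime.now(timezone.utc)))
print(len(log.by_system("credit-scoring-model")))  # → 1
```

Tracking incidents centrally like this is what lets an organization spot recurring failure patterns across systems and refine its governance processes over time, as the framework suggests.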

Principles for building reliable AI

FS AI RMF integrates principles of trusted AI, including accuracy and reliability, safety, security and resilience, accountability, transparency, explainability, privacy protection, and fairness.

These principles form the basis for evaluating AI systems throughout their entire lifecycle. Simply put, financial institutions need to ensure that AI-generated results are reliable, systems are protected against cyber threats, and critical decisions can be explained when necessary.


For leaders of financial institutions, the FS AI RMF provides a guide to integrating AI into existing risk management systems.

This framework emphasizes that AI governance cannot be the responsibility of a single department. Technology teams, risk management experts, compliance departments, and business units all need to be involved in the AI governance process.

If AI is deployed without strengthening governance systems, organizations may face operational problems, legal risks, or reputational damage. Conversely, businesses that have established clear governance mechanisms will be more confident in deploying this technology.

The handbook also emphasizes that AI risk management is an ongoing process. As technology evolves and regulatory requirements change, organizations need to update their risk governance and assessment methodologies.

For the financial industry, the message is clear: the application of AI must always be accompanied by a corresponding risk management system, and a structured management framework like the FS AI RMF will help organizations share a common language and methodology to adapt to that change.

Share by David Pac
Updated 20 March 2026