Regulation of AI in financial services in 2022

The use of artificial intelligence (AI) and automated decision-making is prevalent across financial services, from evaluating loan applications to recruiting and fighting fraud. A PYMNTS study shows that 60% of acquiring banks say AI systems are their most important fraud detection tools, making the technology a must-have as digital transaction volumes soar.

As companies use AI to identify and reduce fraud, regulators are working to combat unfair bias and improve the transparency of algorithms and datasets.

So far, regulators have taken a largely hands-off approach to the use of AI, but financial services companies could be affected by new regulations in 2022.

United States

Although there are currently no federal AI regulations in the United States, regulators and lawmakers have sent some signs that AI regulations may be coming in 2022.

First, the Algorithmic Accountability Act of 2022, introduced in February, aims to bring transparency and oversight to software, algorithms and other systems used to make automated decisions.

The bill, if approved, would require companies to conduct impact assessments for bias, effectiveness and other factors when using automated decision systems to make critical decisions. It would also give the Federal Trade Commission (FTC) the power to enforce these requirements and to create a public repository of these automated systems.

The bill does not impose prohibitions or tell companies how to use their automated systems, but it does impose reporting and disclosure requirements.

Second, the FTC is considering enacting new regulations that could prohibit certain AI practices. In a blog post and in a letter sent to Sen. Richard Blumenthal, D-Conn., the FTC outlined the risks AI poses to consumers. These include discriminatory outcomes, as well as a lack of transparency in how decisions are made and in how companies collect and use data. Any rule would likely aim to address these issues.

The FTC can use its rulemaking authority, but to enact new rules, the practice the regulator seeks to combat must be "prevalent" in the country. This requirement may not be easy to meet. In the meantime, the FTC can still pursue individual enforcement actions against companies that engage in algorithmic discrimination, which could affect financial firms that use AI for loan applications.

Read more: The cost of the US AI bill could outweigh its benefits

European Union

The European Parliament (EP) will soon vote on the AI Act, which creates a risk-based approach to regulating AI in Europe and will affect the use and development of AI systems, including in the financial sector.

The most notable features of the AI Act for financial services companies are:

  • A strict regime of mandatory requirements for "high-risk" activities, such as AI systems used to assess creditworthiness or establish credit scores. Obligations include monitoring the operation of high-risk AI and maintaining AI-generated logs. Companies will also need to provide human oversight when using AI for recruitment or to make promotion and firing decisions.
  • New transparency requirements for specific types of AI. For example, businesses will need to notify individuals that they are interacting with a chatbot.
  • A ban on certain uses of AI, such as systems that use subliminal techniques to influence consumer behavior.

The AI Act could still be amended in Parliament, but there is enough consensus among policymakers to reach a common position by the end of 2022, which could lead to final approval of the bill by 2023.

See also: European Parliament committee urges member states to design AI roadmap

UK

The UK government published its National AI Strategy in autumn 2021, which sets out the government’s proposed timeline for implementing actions, including AI governance and regulation for 2022.

For now, there are no AI-specific regulations. However, the government and regulators have hinted that any rules would take a light-touch approach to foster innovation.

The clearest example comes from the Bank of England and the Financial Conduct Authority. In their report on the AI Public-Private Forum, they did not propose specific policies, but said that too much regulation in this area would be detrimental to the development of AI.

Even so, regulators have warned banks using AI systems to approve loan applications that they must be prepared to explain how those decisions are made in order to avoid discriminatory outcomes.

Related: UK banks get first glimpse of what AI regulation could look like
