The European Parliament approved the Artificial Intelligence (AI) Act today, with 523 members voting in favor, 46 against, and 49 abstaining. Member states agreed on the regulation in December 2023.
It’s no secret that AI is a double-edged sword. For every positive use case, there are multiple ways humans can use the technology for nefarious purposes. Regulation is generally effective in creating safeguards for the adoption of new technologies. However, delineating the boundaries of AI’s applications and capabilities is challenging. The technology’s vast potential makes it difficult to eliminate negative uses while accommodating positive ones.
Because of this, the European Union’s new Artificial Intelligence Act will have both positive and negative impacts on banks and fintechs. Organizations that learn to adapt and innovate within the boundaries will see the most success when it comes to leveraging AI.
That said, here are four major implications the new law will have on banks:
Prohibited AI applications
The new law prohibits the use of AI for emotion recognition in the workplace and schools, social scoring, and predictive policing based solely on profiling. This will impact how banks and fintechs use AI for customer interactions, underwriting, and fraud detection.
Compliance and oversight
The act specifically calls out banking as an “essential private and public service” and categorizes it as a high-risk use of AI. Therefore, banks using AI systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. The law grants citizens two major rights when it comes to the use of AI in their banking platforms: first, the ability to submit complaints, and second, the right to receive explanations about decisions made using AI. This will require banks and fintechs to enhance their risk management and update their compliance processes to accommodate AI-driven services.
Transparency
Banks using AI systems and models for general purposes must meet transparency requirements. This includes complying with EU copyright law and publishing detailed summaries of training content. The transparency reporting will not be one-size-fits-all. According to the European Parliament’s explanation, “The more powerful general purpose AI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.”
Innovation support
The law stipulates that regulatory sandboxes and real-world testing will be available at the national level to help businesses develop and test AI systems before they go live. This could benefit both fintechs and banks as they test and launch new AI use cases.
Overall, the EU AI Act doesn’t require anything outside of banks’ existing capabilities. Financial institutions already have processes, documentation procedures, and controls in place to comply with existing regulations. The act will, however, require banks and fintechs to either establish or reassess their AI strategies, ensure compliance with new regulations, and adapt to a more transparent and accountable AI ecosystem.