EU AI Act: brief analysis of limitations and challenges with regard to FinTech
By: Gaby Kabue
2024-09-01 15:07:27
AI in FinTech has recently become more prominent due to its ability to process vast amounts of data, enabling faster and more efficient decision-making. One popular application of AI in FinTech is so-called “AI-based credit scoring”, used to assess borrowers’ creditworthiness. However, AI-based credit scoring, like many other AI applications, can introduce algorithmic biases, leading to discrimination against certain borrowers based on characteristics such as sex, race, color or other personal attributes.
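To make the bias concern concrete, here is a minimal, purely illustrative Python sketch (not drawn from the Act or any real scoring system). It applies the “four-fifths” disparate-impact heuristic, borrowed from US employment-discrimination practice and often reused as a rough fairness check, to hypothetical approval decisions grouped by a protected attribute. The data, group labels and 0.8 threshold are assumptions for illustration only.

```python
# Illustrative sketch: checking a credit-scoring model's decisions for
# group-level disparity using the "four-fifths" rule of thumb.
# All data below is hypothetical.

from collections import defaultdict

# Hypothetical (applicant_group, model_approved) pairs from a scoring model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, is_approved in decisions:
    total[group] += 1
    approved[group] += is_approved  # True counts as 1

# Approval rate per group, and the ratio of the lowest to the highest rate.
rates = {g: approved[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
# A ratio below ~0.8 (the "four-fifths" heuristic) is a common red flag for
# adverse impact and would warrant closer investigation of the model.
```

A check like this captures only one narrow notion of fairness (demographic parity of outcomes); a genuine fundamental-rights assessment would need to look well beyond a single statistical ratio.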
Published in the Official Journal of the EU on July 12th, 2024, and entered into force on August 2nd, 2024, the EU AI Act is the world’s first comprehensive AI legislation. It introduces a proportionate, risk-based approach, classifying AI systems into four risk categories: unacceptable risk, high risk, limited risk and minimal risk. Under the EU AI Act, AI-based credit scoring is classified as a “high-risk AI system” due to its potential impact on fundamental rights. Consequently, the EU AI Act imposes ex ante obligations on deployers of high-risk AI systems such as AI-based credit scoring, requiring them to perform a Fundamental Rights Impact Assessment (FRIA) to identify and mitigate the specific harms those systems may pose to the fundamental rights of affected persons or groups.
Our analysis reveals that the EU AI Act presents limitations, ambiguities and challenges that complicate regulatory compliance for FinTech companies. For instance, Article 27 of the EU AI Act is not yet fully clear on the scope of fundamental rights to be considered during the assessment. A critical question is whether all fundamental rights must be assessed, or whether the deployer may limit the assessment to the fundamental rights most likely to be affected by AI-based credit scoring. A further limitation stems from Article 27(3) of the EU AI Act, which stipulates that once the assessment has been performed, “the deployer shall notify the market surveillance authority of its results...”. This provision places responsibility for performing the assessment on the deployer of the high-risk AI system (such as AI-based credit scoring), who has an obvious interest in its outcome. This could create a conflict of interest, insofar as the deployer may be incentivized to downplay or overlook risks in order to avoid administrative or monetary sanctions. Furthermore, once the assessment is completed and the market surveillance authority has been notified, the Act provides no mechanism for the authority to follow up and confirm whether the deployer may proceed with the launch of the high-risk AI system it has assessed.
From a legal standpoint, the current state of the EU AI Act shows that the Regulation does not regulate all aspects of AI. Indeed, scholars and legal practitioners continue to identify limitations and ambiguities in the Regulation, which poses regulatory compliance challenges for FinTech companies. A critical question is whether the EU AI Act can effectively regulate AI as a whole. J. R. Simpson argues in his book “AI prevails: How to keep yourself and humanity safe” that an attempt to regulate AI as a whole would be misguided, since there is no clear definition of AI (it is not one thing), and its risks and considerations differ substantially across sectors. Simpson’s perspective is framed within the “black box” phenomenon, wherein even the creators of an AI system cannot fully explain how it operates. Consequently, regulating AI as a whole remains a challenge.