The EU AI Act and its impact on (partially) automated credit allocation

Introduction

Since AI systems such as ChatGPT and Google Gemini became easily accessible and entered the mainstream, the topic of AI has also reached the (specialist) public at large. With the Regulation on Artificial Intelligence (AI Act), the world’s first comprehensive law regulating artificial intelligence is now in place. The stated aim of the regulation is to improve the functioning of the internal market by means of a uniform legal framework in order to promote the uptake of human-centric and trustworthy AI, ensure a high level of protection of health, safety and fundamental rights, guard against the harmful effects of AI systems and support innovation (see recital 1 and Article 1 (1) of the AI Act).

In implementing this objective, the AI Act takes a risk-based approach. It divides AI applications into four risk levels with different legal consequences: unacceptable, high, limited and minimal/no risk. While AI applications posing an unacceptable risk are prohibited (Article 5 AI Act), high-risk AI applications are in principle permissible. However, the actors involved are subject to a regime of obligations of varying strictness, breaches of which are in part punishable by fines.

Automated credit scoring as a high-risk AI system

Principle

Of particular interest to banks is the classification of AI systems used to make decisions about the creditworthiness of customers. According to Article 6 (2) in conjunction with Number 5 (b) of Annex III of the AI Act, AI systems intended to be used to evaluate the creditworthiness of natural persons or to establish their credit score – except those used for the detection of financial fraud – are considered high-risk AI systems. AI systems in (non-risk-relevant) retail banking are an obvious example: both partially and fully automated procedures are already used in the consumer credit business. But AI is also finding its way into corporate lending, for example through Natural Language Processing (NLP), which can be used to evaluate companies’ annual reports and to search them for key terms that allow conclusions to be drawn about credit ratings (BaFin, Big Data and Artificial Intelligence, p. 10 of the English version).
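To make the NLP example more tangible, the following purely illustrative Python sketch shows the kind of simple key-term scan over annual-report text described above; the term list, the function name and the scoring are invented for demonstration and do not reflect any particular bank’s or vendor’s system.

import re
from collections import Counter

# Hypothetical list of credit-relevant key terms (illustrative only)
CREDIT_RISK_TERMS = [
    "going concern",
    "impairment",
    "covenant breach",
    "liquidity shortfall",
    "restructuring",
]

def scan_annual_report(text: str) -> Counter:
    """Count occurrences of credit-relevant key terms in a report text."""
    lowered = text.lower()
    return Counter({term: len(re.findall(re.escape(term), lowered))
                    for term in CREDIT_RISK_TERMS})

sample = ("Management notes a possible covenant breach and has initiated a "
          "restructuring programme; no going concern issues were identified.")
print(scan_annual_report(sample))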

Exceptions

Article 6 (3) of the AI Act contains categories of AI systems that, despite being listed in Annex III, shall not be considered high-risk where they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, inter alia by not materially influencing the outcome of decision-making (subparagraph 1). In addition, one of the conditions of subparagraph 2 must be met; in the context considered here, points (a) and (d) deserve emphasis. According to these, the exception applies to AI systems that perform a narrow procedural task (point (a)) or a preparatory task to an assessment (point (d)). According to subparagraph 3, the exception does not apply if natural persons are profiled (as defined in Article 3 (52) of the AI Act in conjunction with Article 4 (4) of the GDPR). Since credit scoring of consumers regularly involves profiling, the retail sector is effectively excluded. An area of application may, however, lie in corporate banking. The decisive factor here is the extent of the AI support in its interaction with the person responsible for granting the loan. Within this exception it is important not only to assess the AI system’s involvement carefully against the indeterminate legal concept of a significant risk, but also to document that assessment in a legally sound manner in accordance with Article 6 (4) sentence 1 of the AI Act. The provider of such an AI system is furthermore subject to a registration obligation under Article 6 (4) sentence 2 and Article 49 (2) of the AI Act.

In addition, the actual use of the system must be kept in mind. Over time, the AI-supported task may come to dominate the decision-making process, for instance because the decision-makers increasingly attach greater weight to its results. As a result, the exception may be lost and the extensive obligations described below may apply. The same can, of course, occur if the exception itself is changed or deleted: Article 6 (6) and (7) of the AI Act expressly authorize the EU Commission to make such changes by means of delegated acts.
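The interplay of the conditions just described can be summarized in a highly simplified decision sketch. The following Python snippet is only an illustration of the statutory logic as outlined above; the boolean inputs are assumptions that would in practice result from a documented legal assessment, not from code.

from dataclasses import dataclass

@dataclass
class Article6Assessment:
    significant_risk: bool              # significant risk of harm to health, safety or fundamental rights
    narrow_procedural_task: bool        # subparagraph 2 point (a)
    preparatory_task: bool              # subparagraph 2 point (d)
    profiling_of_natural_persons: bool  # subparagraph 3

def exception_may_apply(a: Article6Assessment) -> bool:
    """Sketch of whether the non-high-risk exception of Article 6 (3) may apply."""
    if a.profiling_of_natural_persons:   # profiling: always high-risk
        return False
    if a.significant_risk:               # subparagraph 1 not satisfied
        return False
    # at least one condition of subparagraph 2 must be met (here: points (a) or (d))
    return a.narrow_procedural_task or a.preparatory_task

# Hypothetical corporate-lending scenario: preparatory NLP screening, no profiling
print(exception_may_apply(Article6Assessment(False, False, True, False)))  # True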

Individual actors and their obligations

The distinction between providers and deployers is particularly important for the constellation described here. The AI Act links different obligations to this classification.

Obligated parties

In the context relevant here, the term provider (Article 3 Number 3 AI Act) refers, in summary, to those persons who develop AI systems; deployers (Article 3 Number 4 AI Act) are persons who use AI systems in a professional context. Particular attention should be paid to the fact that an actor is deemed to be a provider if it puts the AI system into service under its own name or trademark, regardless of whether the system was developed by that actor itself or by a third party. In addition to making an AI system available for first use directly to a deployer, “putting into service” also includes the case of own use (Article 3 Number 11 AI Act). Thus, to be a provider within the meaning of the Regulation, it is not necessary – contrary to what one might intuitively understand by the term – for the developed AI system to leave the provider’s own sphere of influence. This should be taken into account in particular when drafting contracts for AI systems developed on behalf of banks, in view of the extensive range of obligations incumbent on providers of high-risk AI systems.

Selected obligations

Whereas providers of high-risk AI systems are subject to extensive obligations (Article 16 (a) to (l) in conjunction with Articles 8 to 15, 17 to 20, 43, 47 to 50, 72 AI Act), those of deployers are comparatively manageable. Their obligations are set out in Articles 26 and 27 of the AI Act.

Article 26 of the AI Act essentially requires deployers to use AI systems in accordance with their intended purpose and to monitor that use. For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation is deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial services law (paragraph 5 subparagraph 2). Human oversight must be assigned to suitable natural persons (paragraph 2), and automatically generated logs must be kept for an appropriate period, but for at least six months (paragraph 6 subparagraph 1). Deployers must further inform the natural persons who are subject to the use of the high-risk AI system (paragraph 11).
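In practice, a deployer will typically translate the six-month log-retention floor into an internal configuration check. The following minimal Python sketch assumes a hypothetical configuration key and approximates six months as 183 days; both are illustrative assumptions, not prescribed by the Regulation.

# Hypothetical configuration check against the six-month minimum of Article 26 (6) AI Act
MIN_RETENTION_DAYS = 183  # rough approximation of "at least six months"

def retention_compliant(config: dict) -> bool:
    """Return True if the configured log-retention period meets the six-month floor."""
    return config.get("ai_log_retention_days", 0) >= MIN_RETENTION_DAYS

print(retention_compliant({"ai_log_retention_days": 365}))  # True
print(retention_compliant({"ai_log_retention_days": 90}))   # False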

In addition, the fundamental rights impact assessment is specifically regulated in Article 27 of the AI Act. This obligation also applies to deployers of AI systems for checking the creditworthiness of natural persons (paragraph 1 sentence 1). Article 27 (1) sentence 2 (a) to (f) of the AI Act lists the elements that the assessment must cover. Those who feel reminded of the data protection impact assessment under Article 35 GDPR and fear double the work have been heard by the EU legislator: if any of the obligations laid down in Article 27 is already met through the data protection impact assessment conducted, the fundamental rights impact assessment shall complement that data protection impact assessment (paragraph 4).

Article 27 of the AI Act also provides for further simplifications for deployers. For example, the obligation to carry out a fundamental rights impact assessment applies only to the first use of an AI system (Article 27 (2) sentence 1 AI Act). This complements the requirement under Article 27 (1) sentence 1 AI Act to carry out the fundamental rights impact assessment before the system is put into service. Taking into account the definition of “putting into service” (see above), high-risk AI systems already in use are thus generally exempt (cf. in this respect Article 111 (2) sentence 1 AI Act). In addition, the Regulation provides for the possibility of relying on fundamental rights impact assessments carried out in similar cases or on an impact assessment already carried out by the provider (Article 27 (2) sentence 2 AI Act). However, it may be unclear when a case is comparable and whether, or to what extent, the deployer may rely on the provider’s impact assessment without further review.

Hidden obligations

At this point, it should also be noted that deployers become providers as soon as they put their name or trademark on a high-risk AI system that has already been placed on the market or put into service, without prejudice to contractual arrangements between the original provider and the deployer stipulating a different allocation of obligations (Article 25 (1) (a) AI Act). In that case, the extensive provider obligations (see above) apply to them from that moment on. The initial provider, on the other hand, is no longer considered a provider of that AI system within the meaning of the Regulation (Article 25 (2) sentence 1 AI Act), and the aforementioned obligations no longer apply to it. However, new obligations arise, in particular to cooperate closely with the new provider and to make available the information necessary for the fulfilment of its obligations (Article 25 (2) sentence 2 AI Act).

While the above is less surprising, caution is advised when a deployer puts into service a high-risk AI system that the provider has classified as non-high-risk in accordance with Article 6 (3) subparagraph 1 and subparagraph 2 (a) of the AI Act (see the exceptions discussed above). The deployer will not be able to invoke an erroneous classification by the provider, because the deployer’s obligations aim at protecting the persons subject to the use of high-risk AI systems. There is no formal examination or corresponding approval by the authorities (see Article 6 (4) AI Act), and the registration to be carried out in accordance with Article 6 (4) sentence 2 in conjunction with Article 49 (2) and Article 71 AI Act does not have the effect of formally confirming the provider’s assessment. In this case, the deployer is subject to the deployer obligations described above, which may easily go unnoticed. This should not be dismissed lightly, as breaches of some of the obligations set out in Article 26 AI Act are subject to fines of up to EUR 15 million or 3 % of the total worldwide annual turnover of the previous financial year, whichever is higher (Article 99 (4) (e) AI Act). The provider’s incorrect classification will, however, have to be taken into account when determining the specific amount of the fine.
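The fine ceiling mentioned above is simply the higher of the two amounts. The following short Python sketch illustrates that arithmetic; the turnover figure in the example is invented.

# Fine ceiling under Article 99 (4) AI Act: the higher of EUR 15 million and
# 3 % of the total worldwide annual turnover of the preceding financial year
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

# Example: a bank with EUR 2 billion worldwide annual turnover
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 60,000,000 EUR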

Conclusion

The mere existence of the AI Act is already a milestone in the regulation of artificial intelligence. Whether the obligations imposed on the actors are practicable, however, remains to be seen. Although the obligations described here for high-risk AI systems under Annex III will not apply until August 2, 2026 (Article 113 subparagraph 2 AI Act), they should not be underestimated. Banks are well advised to monitor further developments and to continuously review their workflows in this regard.
