Artificial Intelligence Regulation
On 13 March 2024, the European Parliament adopted the text of the Regulation on Artificial Intelligence (the AI Act), which sets out rules for entities whose activities involve the development or use of artificial intelligence.
In particular, the Regulation aims to protect the health, safety and fundamental rights of individuals who may be affected by AI systems.
The Regulation takes a risk-based approach, classifying AI systems according to how they are used. The greatest attention is paid to so-called high-risk AI systems, which are subject to a number of obligations, such as the need to establish a risk management system, transparency requirements and the need for human oversight.
The second important group comprises systems posing an unacceptable risk, whose use is prohibited (with narrow exceptions for law enforcement, e.g. in the event of a terrorist threat). Prohibited practices include ‘social scoring’, emotion recognition in the workplace and untargeted biometric surveillance.
Many institutions and service providers will be affected by the Regulation, including those in the financial sector. They will be obliged to implement its requirements even if they do not develop AI systems themselves, but ‘only’ deploy them in their processes.