The limitations of the European AI regulation
Given the increasing use of artificial intelligence systems, the European Union has moved to regulate the field and place limits on the use of these systems.
As early as April 2021, the European Commission proposed the European Regulation on Artificial Intelligence (Artificial Intelligence Act), and recently – on 14 June 2023 – the EU Parliament approved the text. The adoption of the final version is expected in the coming months.
The risk levels of artificial intelligence
Under the approved text, artificial intelligence systems are ranked according to the risk they pose to users.
Of course, the greater the risk, the greater the limitations.
All artificial intelligence systems posing an unacceptable risk – that is, those contrary to EU values that violate fundamental rights – are banned outright. This category covers, for example, systems used for social scoring and systems that manipulate people's behaviour in ways that cause them harm.
Then there are the high-risk systems, i.e., those that can cause harm to people's health, safety, or fundamental rights, or to the environment.
These systems will not be banned, but they will have to meet very strict requirements and obligations: they will be assessed before being placed on the market and then throughout their life cycle.
Finally, there are systems considered limited risk (such as chatbots) and those posing little or no risk (such as AI-enabled video games or spam filters).
Limited-risk systems need only meet minimal transparency requirements. For systems with minimal or no risk, no limitations are imposed and they can be used freely.