AI Act: A bureaucratic attempt to regulate AI
In recent years, the European Community has been at the forefront of regulating the digital world.
After the successful (at least in terms of its reception) regulation of personal data protection, which prompted the US, China, and many other countries to follow, the EC proceeded with a series of regulations such as the DMA, the DSA, and the highly ambitious AI Act.
The AI Act regulates the use of AI in the EU with the aim of protecting the fundamental rights of European citizens. Regulation is necessary, as AI methods are increasingly used in decision-making concerning not only productive activity but also people's lives (e.g., fraud detection, creditworthiness, and even pre-trial detention in the USA).
The regulatory initiative was accompanied by the expectation of a clear framework and limits for the application of AI methods in decision-making, one that would define citizens' rights. Unfortunately, the proposed regulation creates a large bureaucratic framework for recording the capabilities of an AI system but does not manage to effectively delimit its use.
The basic idea of the approach lies in the categorization of AI systems into four risk categories: unacceptable risk, high risk, limited risk, and low risk. Applications that fall into the category of unacceptable risk (e.g., social-credit applications, such as the system developed to a limited extent in the PRC for evaluating the, mainly economic, behavior of citizens) are not allowed. The high-risk category includes applications that identify and categorize people, as well as applications that decide on social benefits; (semi-)automated recruitment systems also fall into this category. For high-risk applications, the draft regulation provides for additional safeguards, such as keeping logs of the systems' operation, transparency and information for users, human oversight, and guarantees of the accuracy, robustness, and cybersecurity of the system.
It can be observed that the guarantees for high-risk systems are roughly the guarantees required of any information system that performs a critical task. The most specific provision is that of human oversight. Human oversight means that there must be an expert who has in-depth knowledge of a system and who can intervene when they judge that the system is not responding properly. Unfortunately, this provision is also practically impossible to satisfy, especially when it comes to AI's most popular technique, machine learning, whose decision criteria are not expressed as rules an overseer could inspect.
The guarantees for the protection of citizens' rights provided by the AI Act also reveal its biggest weakness: the lack of a clear understanding of the risks posed by AI to citizens' rights. My view is that the most problematic point is that it allows decisions with a significant impact on people's lives and rights to be taken by decision-making systems that are not based on clear rules, so their criteria remain difficult to understand or even incomprehensible. The widespread adoption of AI systems in decision-making (even in an assisting role) will result in decisions with a significant societal impact being made without their criteria ever being open to consultation. This risk is not addressed in the AI Act, which responds to every problem with a series of good practices for the design and operation of information systems; these can address malfunctions of a system, but not weaknesses of the methods themselves. The AI Act also characterizes AI according to its application rather than its techniques, yet the techniques significantly differentiate the risk of automation: rule-based systems can be understood by the wider public, their decision criteria can be the subject of consultation, and they allow for effective human oversight. In machine-learning systems, despite all efforts toward explainability and algorithmic fairness, this is practically impossible.
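To make the contrast concrete, here is a minimal, purely illustrative Python sketch; the eligibility rule, the feature names, and the toy data are all hypothetical and drawn from no real system. The rule-based check states its criteria explicitly, so they can be published and debated; the trained classifier exposes only numeric weights.

from sklearn.linear_model import LogisticRegression
import numpy as np

# Rule-based decision: the criteria are explicit and can be read,
# published, debated, and amended line by line. (Hypothetical rule.)
def eligible_for_benefit(income: float, dependents: int) -> bool:
    return income < 20_000 or (income < 30_000 and dependents >= 2)

# Machine-learning decision: the "criteria" are learned coefficients;
# nothing here can be read or contested as a rule. (Toy data.)
X = np.array([[15_000, 0], [35_000, 3], [28_000, 2], [50_000, 1]])
y = np.array([1, 0, 1, 0])  # labels standing in for past decisions
model = LogisticRegression().fit(X, y)

print(eligible_for_benefit(28_000, 2))  # True, and a citizen can see why
print(model.predict([[28_000, 2]]))     # a verdict, with no stated reason
print(model.coef_)                      # weights, not rules anyone can consult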
The need to regulate AI is undeniable, and the EC initiative is positive for the continuity of the rule of law in the digital world as well. But we need more determination and a braver vision when it comes to citizens' rights. The current approach tries to strike a balance between protecting rights and the unhindered development of the AI industry, and so it achieves neither: it creates a very weak framework for protecting rights while placing heavy bureaucratic burdens on producers.
*Iris Efthymiou is a Writer and President of the Interdisciplinary Council
*Manolis Terrovitis is a Researcher at the Institute of Information Systems of the Athena Research Center
A big thank you to Manolis Terrovitis for the wonderful cooperation!
#ai #aiact #eu #artificialintelligence