AI is soon to be restricted by law

The Council of the EU has announced that AI will now be restricted by law, for security reasons among others.

On April 21, 2021, the European Commission proposed a comprehensive set of regulations called the Artificial Intelligence Act, aiming to govern the development and use of AI systems within the European Union. The act regulates AI systems based on their risk levels, establishing different requirements for different types of AI applications.

On May 21, 2024, the Council of the EU published a press release announcing its approval of the proposal.

The Council approved a ground-breaking law aiming to harmonize rules on artificial intelligence, the so-called Artificial Intelligence Act. The flagship legislation follows a ‘risk-based’ approach, which means the higher the risk of causing harm to society, the stricter the rules. It is the first of its kind in the world and can set a global standard for AI regulation. The new law aims to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it aims to ensure respect for the fundamental rights of EU citizens and stimulate investment and innovation in artificial intelligence in Europe. The AI act applies only to areas within EU law and provides exemptions such as for systems used exclusively for military and defense as well as for research purposes. 

The AI Act categorizes AI systems by risk. Unacceptable-risk systems are prohibited outright (e.g., social scoring systems and manipulative AI). Most of the act's text focuses on high-risk AI systems, which are tightly regulated. A smaller portion covers limited-risk AI systems, which carry lighter transparency obligations: developers and deployers must ensure that end users know they are interacting with AI (e.g., chatbots and deepfakes). Everything else falls under minimal risk and is unregulated; this tier included the majority of AI applications on the EU single market as of 2021, such as AI-enabled video games and spam filters, though this is changing with generative AI. (A short code sketch of these tiers follows the list of prohibited practices below.)

AI systems are also considered high risk if they profile individuals, i.e., automatically process personal data to evaluate aspects of a person's life such as work performance, economic situation, health, preferences, interests, reliability, behavior, location, or movements. Providers who believe their AI system, despite being listed in Annex III, does not pose a high risk must document that assessment before placing the system on the market or putting it into use.

The following types of AI systems are prohibited under the AI Act:

- Deploying covert, manipulative, or deceptive techniques to distort behavior and impede informed decision-making, causing significant harm.

- Exploiting weaknesses related to age, disability, or socioeconomic circumstances to distort behavior, causing significant harm.

- Biometric categorization systems that infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for labeling or filtering of lawfully acquired biometric datasets or when law enforcement categorizes biometric data.

- Social scoring, i.e., evaluating or classifying individuals or groups based on their social behavior or personal characteristics, causing detrimental or unfavorable treatment of those individuals.

- Compiling facial recognition databases through untargeted scraping of facial images from the internet or video surveillance footage.

- Inferring emotions in the workplace or educational institutions, except for medical or safety reasons.

- “Real-time” remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when searching for missing persons, abduction victims, or victims of human trafficking or sexual exploitation; preventing a serious and imminent threat to life or a foreseeable terrorist attack; or identifying suspects in serious crimes (e.g., murder, rape, armed robbery, drug and weapons trafficking, organized crime, and environmental crime).

Even in these exceptional cases, the use of AI-based real-time RBI is permitted only where failure to use the tool would cause significant harm, and it must take into account the rights and freedoms of the data subject.
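To make the risk tiers above concrete, here is a minimal, purely illustrative Python sketch of the four categories and the limited-risk transparency duty (telling users they are talking to an AI). All names are hypothetical and invented for this article; the AI Act defines no such schema, and this is in no way a legal compliance tool.

```python
from enum import Enum


# Toy model of the AI Act's four risk tiers (illustrative only).
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g., social scoring, manipulative AI
    HIGH = "high-risk"            # e.g., systems profiling individuals (Annex III)
    LIMITED = "limited-risk"      # transparency duties, e.g., chatbots, deepfakes
    MINIMAL = "minimal-risk"      # e.g., spam filters, AI in video games


def disclosure_notice(system_name: str, tier: RiskTier) -> str:
    """Return the user-facing notice a limited-risk system would display."""
    if tier is RiskTier.LIMITED:
        return f"Notice: you are interacting with an AI system ({system_name})."
    return ""  # other tiers carry different obligations entirely


if __name__ == "__main__":
    # A chatbot is the classic limited-risk example: it may operate freely,
    # but the user must be told they are talking to a machine.
    print(disclosure_notice("support-chatbot", RiskTier.LIMITED))
```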

The AI Act also addresses general-purpose AI (GPAI) models. What we, as regular users, should know is that a general-purpose AI model can serve a variety of purposes, both through direct use and through integration into other AI systems. According to the EU Council, GPAI models that do not pose systemic risks will be subject to limited requirements, such as transparency, while those with systemic risks will be subject to stricter rules. All GPAI model providers must do the following (a code sketch of such a record appears after the list):

- Draw up technical documentation, including training and testing procedures and evaluation results.

- Provide information and documentation to downstream vendors intending to integrate the GPAI model into their own AI systems, so that they understand its capabilities and limitations and can deploy it in compliance with their own obligations.

- Establish a policy for compliance with the Copyright Directive.

- Publish a sufficiently detailed summary of the content used to train the GPAI model.
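Since these provider duties are essentially documentation requirements, one can imagine a provider keeping them as a single machine-readable record. The sketch below is a hypothetical illustration: every field name is invented for this article, not an official template from the act.

```python
import json
from dataclasses import asdict, dataclass


# Hypothetical record of the documentation a GPAI provider must maintain
# (illustrative only; the AI Act prescribes no such data structure).
@dataclass
class GPAIModelRecord:
    model_name: str
    technical_documentation: str       # training, testing, and evaluation results
    downstream_integration_notes: str  # capabilities/limits for downstream vendors
    copyright_policy_url: str          # policy for Copyright Directive compliance
    training_content_summary: str      # published summary of training content
    systemic_risk: bool = False        # systemic-risk models face stricter rules


record = GPAIModelRecord(
    model_name="example-gpai-model",
    technical_documentation="docs/technical_report.pdf",
    downstream_integration_notes="docs/integration_guide.md",
    copyright_policy_url="https://example.com/copyright-policy",
    training_content_summary="docs/training_data_summary.md",
)

# Serialize the record so it can be published or shared with downstream vendors.
print(json.dumps(asdict(record), indent=2))
```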

When this law comes into effect, it is expected to help curb AI-enabled security abuses and to address ethical issues such as intellectual property rights in AI-generated content. The EU said codes of practice must be ready nine months after the act's entry into force.