Advancements in EU AI Legislation: Battlegrounds, Developments, and Future Actions
The European Artificial Intelligence Act (AIA) is progressing through its implementation phase, with key provisions set to take effect from 2025 onwards. The regulatory framework, intended to set a global standard for AI, was first proposed by the European Commission in April 2021 [1].
Defining and Classifying AI
The AIA defines AI systems broadly, encompassing applications across sectors while exempting military, national security, research, and non-professional uses [4]. AI systems are categorised into four risk levels (unacceptable, high, limited, and minimal risk), with a special regime for general-purpose AI (GPAI) [2][4].
High-risk AI systems, which pose significant risks to health, safety, or fundamental rights, are subject to stringent requirements such as transparency, data governance and quality, risk management, and conformity assessments before market placement [2]. GPAI systems, such as large language models, face fewer restrictions but must meet transparency and risk mitigation obligations [1][3][4].
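To make the tiering concrete, here is a minimal sketch that models the categories as an enumeration with a first-pass triage helper. The tier names and the input keys (prohibited_practice, general_purpose, safety_or_rights_impact, interacts_with_humans) are invented for illustration only; actual classification under the AIA turns on its annexes and legal analysis, not a simple lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AIA's risk categories."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict pre-market and lifecycle obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no additional obligations
    GPAI = "general-purpose"        # transparency and risk-mitigation duties

def triage(system: dict) -> RiskTier:
    """Hypothetical first-pass triage of a system description.

    The keys checked here are assumptions made for this sketch; they do
    not correspond to criteria defined in the Act itself.
    """
    if system.get("prohibited_practice"):
        return RiskTier.UNACCEPTABLE
    if system.get("general_purpose"):
        return RiskTier.GPAI
    if system.get("safety_or_rights_impact"):
        return RiskTier.HIGH
    if system.get("interacts_with_humans"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a system with significant impact on safety or rights lands in the high tier.
print(triage({"safety_or_rights_impact": True}))  # RiskTier.HIGH
```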
Key Obligations for High-Risk AI
From August 2, 2026, a comprehensive compliance framework for high-risk AI will be in effect, requiring conformity assessments and ongoing monitoring [1]. Obligations include ensuring safety, respecting fundamental rights, and maintaining documented risk management throughout the AI system lifecycle [2]. The Act also imposes AI literacy requirements as of February 2025 to ensure users and deployers understand AI system functioning, limitations, biases, and risks [3].
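As a rough illustration of how a provider might track these obligations internally, the sketch below keeps a minimal checklist and reports which items are still outstanding. The class and field names are hypothetical and do not map to the Act's articles; a real conformity assessment is a formal procedure, not a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Hypothetical internal compliance record for a high-risk AI system."""
    conformity_assessment_done: bool = False
    risk_management_documented: bool = False       # maintained across the lifecycle
    post_market_monitoring_in_place: bool = False  # ongoing monitoring after placement

    def outstanding(self) -> list[str]:
        """Return the obligations this record still marks as missing."""
        gaps = []
        if not self.conformity_assessment_done:
            gaps.append("conformity assessment")
        if not self.risk_management_documented:
            gaps.append("documented risk management")
        if not self.post_market_monitoring_in_place:
            gaps.append("ongoing monitoring")
        return gaps

# Example: assessment passed, but documentation and monitoring still missing.
print(HighRiskChecklist(conformity_assessment_done=True).outstanding())
```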
Prohibited Practices and Facial Recognition
The AIA explicitly prohibits several AI practices considered unacceptable risk, notably real-time biometric identification, including facial recognition in public spaces, and AI systems that manipulate behaviour or exploit vulnerabilities, particularly involving subliminal techniques and targeting vulnerable groups such as children [3][2].
Governance and Enforcement
The AIA establishes the AI Office within the European Commission to oversee implementation and enforcement, particularly for GPAI providers [1]. A European Artificial Intelligence Board coordinates cooperation among member states to ensure consistent enforcement and guidance [2][4]. The phased rollout began with the bans and AI literacy obligations in February 2025, continues with governance rules and GPAI provider duties from August 2025, and culminates in high-risk AI compliance obligations from August 2026 [1][5].
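The phased rollout lends itself to a simple milestone calendar. The sketch below encodes the three application dates described above (the Act's timetable uses 2 February 2025 and 2 August for the later milestones) and lists those still ahead of a given day; it is an illustrative planning aid, not part of any official tooling.

```python
from datetime import date

# Milestones from the rollout described above [1][5].
AIA_MILESTONES = {
    date(2025, 2, 2): "Prohibited-practice bans and AI literacy obligations apply",
    date(2025, 8, 2): "Governance rules and GPAI provider duties apply",
    date(2026, 8, 2): "High-risk AI compliance framework applies",
}

def upcoming(today: date) -> list[str]:
    """List the milestones that have not yet taken effect as of `today`."""
    return [label for d, label in sorted(AIA_MILESTONES.items()) if d > today]

# Example: as of mid-2025, the GPAI and high-risk milestones are still ahead.
print(upcoming(date(2025, 6, 1)))
```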
During the AIA's passage through the parliamentary committees, the key debates revolved around the definition of AI, which activities and sectors should be classified as "high risk", and the use of facial recognition technologies. A final vote had been expected in November, but that timeline proved ambitious given the number of committees involved [6].
References:
[1] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). Retrieved from https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12623-Proposal-for-a-Regulation-of-the-European-Parliament-and-of-the-Council-on-Artificial-Intelligence-Artificial-Intelligence-Act_en
[2] European Parliament. (2022). Joint report on the proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act) (COM(2021) 206 final - C8-0431/2021 - 2021/0135 (COD)). Retrieved from https://www.europarl.europa.eu/doceo/document/A-9-2022-0001_EN.html
[3] European Parliament. (2022). Report on the proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act) (COM(2021) 206 final - C8-0431/2021 - 2021/0135 (COD)). Retrieved from https://www.europarl.europa.eu/doceo/document/A-9-2022-0000_EN.html
[4] OECD. (2019). Artificial Intelligence: Expert Group on Artificial Intelligence. Retrieved from https://www.oecd.org/ai/expert-group-on-artificial-intelligence/
[5] European Parliament. (2021). Legislative train schedule. Retrieved from https://www.europarl.europa.eu/legislative-train/legislative-train-schedule.html?id=12623
[6] European Parliament. (2022). AIA timeline. Retrieved from https://www.europarl.europa.eu/legislative-train/legislative-train-schedule.html?id=12623&phase=3
Key Takeaways
- The European Artificial Intelligence Act (AIA) establishes a European AI Office to oversee implementation and enforcement, particularly for general-purpose AI (GPAI) providers.
- From August 2026, a comprehensive compliance framework for high-risk AI systems will require conformity assessments and ongoing monitoring, ensuring safety, respect for fundamental rights, and documented risk management throughout the AI system lifecycle.
- As of February 2025, the AIA will impose AI literacy requirements for users and deployers to ensure they understand AI system functioning, limitations, biases, and risks.
- The AIA prohibits real-time biometric identification, including facial recognition in public spaces, and AI systems that manipulate behaviour or exploit vulnerabilities, particularly through subliminal techniques and the targeting of vulnerable groups such as children.
- Key provisions of the AIA, aimed at setting a global standard for AI, take effect from 2025 onwards, with strict requirements for high-risk AI systems and lighter obligations, centred on transparency and risk mitigation, for GPAI systems such as large language models.