The Artificial Intelligence Act is the world’s first comprehensive legal framework governing the use of AI technologies within the European Union. Published on July 12, 2024, it entered into force on August 1, 2024, with significant implications for organizations within the scope of the regulation. Let’s dive into the details and requirements of the legislation so you can start your compliance journey now. Read on to learn more:
What is the Artificial Intelligence Act?
As part of the EU’s digital strategy, the AI Act aims to strengthen AI governance and ensure the trustworthy development and use of AI systems by addressing the risks posed by AI models with a powerful impact on society and the economy. It harmonizes rules on AI across the EU by classifying AI systems according to the level of risk they pose and mandating detailed obligations for the providers and users of these technologies that prioritize ethical principles.
With its risk-based approach, the AI Act identifies four levels of risk for AI systems, each attracting a different regulatory treatment (see the sketch after this list):
- Unacceptable risk – Refers to the risk arising from AI systems that can manipulate the behavior of end users, remotely and instantly identify persons using biometric data, and enable harmful profiling and predictions. AI systems with unacceptable risks are considered threats and are therefore prohibited under the law.
- High risk – Refers to the risk of AI systems compromising people’s health, safety, and fundamental rights. High-risk AI systems, such as those used as safety components of products or for critical infrastructure, are permitted but regulated under specific mandatory requirements and conformity assessment procedures.
- Limited risk – Refers to the risk related to general-purpose AI (GPAI) systems that can generate images, videos, or other media that closely resemble authentic content or existing persons, objects, entities, and events. Chatbots and deepfakes are examples of these limited-risk AI systems, for which minimum transparency obligations are imposed. Organizations using GPAI systems must ensure that end users are aware that they are interacting with AI or consuming artificially generated content.
- Minimal risk – Refers to the risk associated with the vast majority of AI-enabled products and technologies currently available publicly, such as video games and spam filters. These AI systems present minimal to no risk and are therefore unregulated.
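Purely as an illustration of the tiered structure above (the class names and summaries below are our own simplification, not text drawn from the regulation), the four risk levels and the treatment each one attracts can be modeled as a simple lookup:

```python
from enum import Enum

# Illustrative sketch only: a simplified model of the AI Act's four risk tiers
# and the regulatory treatment each tier attracts, as summarized above.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted, subject to mandatory requirements
    LIMITED = "limited"            # permitted, subject to transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

REGULATORY_TREATMENT = {
    RiskTier.UNACCEPTABLE: "Prohibited (e.g. social scoring, manipulative AI)",
    RiskTier.HIGH: "Conformity assessment plus mandatory requirements",
    RiskTier.LIMITED: "Disclose AI interaction / AI-generated content to end users",
    RiskTier.MINIMAL: "No mandatory requirements (e.g. spam filters, video games)",
}

def treatment_for(tier: RiskTier) -> str:
    """Return the simplified regulatory treatment for a given risk tier."""
    return REGULATORY_TREATMENT[tier]

print(treatment_for(RiskTier.HIGH))  # Conformity assessment plus mandatory requirements
```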
Scope
In addition to classifying AI systems by the risks they produce, the regulation also categorizes the entities subject to compliance into four groups:
- Providers – Refer to developers of AI systems and GPAI models and product manufacturers that provide and distribute AI systems under their own name or trademark. Small enterprises or small-scale providers are also included.
- Users – Refer to entities that use AI systems in a professional capacity and deploy AI systems or AI-generated output for end users
- Distributors – Refer to entities apart from providers and manufacturers that supply AI systems to users
- Importers – Refer to EU-based entities that place on the market AI systems bearing the name or trademark of an entity established outside the EU
One of the distinct features of the AI Act is that it also applies to organizations in third countries (i.e., outside the EU) that develop or deploy AI systems used within the EU. The regulation excludes AI systems developed or used exclusively for military purposes, as well as those used for law enforcement under international agreements.
Components
Based on the rules previously discussed, the AI Act has three major sets of provisions, concerning:
- Prohibited AI systems – These include:
- Manipulative AI: AI systems that distort the behavior and impair the decision-making capabilities of end users or those that exploit the vulnerabilities of marginalized groups such as children and persons with disabilities
- Biometric categorization systems: AI systems that process biometric data to assign sensitive identifiers such as sexual orientation, political orientation, religion, and personality traits to individuals or groups
- Social scoring systems: AI systems that use social behavior or personal traits to categorize individuals or groups, resulting in biased or unfavorable treatment
- Predictive policing tools: AI systems that predict the likelihood of criminal activity based solely on profiling or an analysis of personality traits
- Facial recognition scraping tools: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage
- Emotion recognition systems: AI systems that interpret human emotions based on biometric data
- Remote biometric identification systems: AI systems, including real-time remote biometric identification systems, that can identify individuals without their active involvement by capturing biometric data from a distance and comparing it with data in external databases. These AI systems are prohibited unless used for law enforcement purposes involving serious or life-threatening cases.
- High-risk AI systems – These are primarily classified into:
- AI systems that are used as a safety component of products detailed in Annex I of the AI Act, which include machinery, toys, medical devices, and more
- AI systems designed for use cases outlined in Annex III, which include:
- Non-banned biometrics, such as remote biometric identification
- Critical infrastructure such as transport, energy, and digital infrastructure
- Education
- Employment
- Essential public and private services such as emergency and medical aid, health and life insurance, and credit evaluation
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
- Limited-risk AI systems – These consist of:
- GPAI systems, which are built on GPAI models and can be used as high-risk AI systems or integrated with them to fulfill various functions
- GPAI models, which are trained on massive amounts of data and can be integrated with other systems or applications to perform a variety of tasks
What are the requirements of the AI Act?
Most of the requirements of the EU AI Act fall on providers of high-risk AI systems. Before putting their AI systems on the market or into service, providers of high-risk AI systems must undergo the applicable conformity assessment procedure and implement the following measures:
- Risk management system – Providers must establish a risk management system, which involves identifying, analyzing, and evaluating risks and implementing risk management measures throughout the entire lifecycle of the AI system.
- Data governance – Providers must ensure that training, validation, and testing datasets are accurate and relevant and that they adhere to data collection, data preparation, and other data governance and management standards.
- Technical documentation – Providers must prepare technical documentation of their AI systems before deployment to verify their compliance with the requirements set by the regulation.
- Record-keeping – Providers must design their AI systems to automatically log significant events, allowing for continuous monitoring of their operation and traceability of all modifications and improvements throughout the system’s lifecycle (see the logging sketch after this list).
- Transparency – Providers must design AI systems whose operation users can readily understand and accompany them with clear, comprehensive instructions for use.
- Human oversight – Providers must develop user-friendly interfaces for their AI systems and enable human supervision of their operation to prevent or mitigate risks.
- Accuracy, robustness, and cybersecurity – Providers must maintain the resilience of their AI systems against errors, vulnerabilities, and attacks by implementing technical solutions, security controls, and mitigation measures.
- Quality management system – Providers must put in place quality management systems consisting of policies and procedures to ensure their AI systems comply with the regulation.
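To make the record-keeping obligation more concrete, here is a minimal sketch of automatic event logging for an AI system. The field names and function below are assumptions chosen for illustration; the AI Act itself only requires that significant events be logged automatically and remain traceable, not any particular format.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: one way to automatically log significant events
# (inferences, model updates, human overrides) so that operation stays
# monitorable and modifications remain traceable. Field names are assumptions.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_event(event_type: str, model_version: str, details: dict) -> None:
    """Append a structured, timestamped record of a significant event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,        # e.g. "inference", "model_update", "human_override"
        "model_version": model_version,  # supports traceability of changes over the lifecycle
        "details": details,
    }
    logger.info(json.dumps(record))

# Example usage: record an inference and a subsequent human override.
log_event("inference", "1.4.2", {"input_ref": "case-001", "output": "approved"})
log_event("human_override", "1.4.2", {"input_ref": "case-001", "new_output": "referred_for_review"})
```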
Meanwhile, users of high-risk AI systems have the following obligations:
- Operation of the AI system in accordance with its provided instructions
- Management of resources and activities needed to implement the human oversight measures specified by the provider
- Ensuring that all data entered into the AI system is appropriate and relevant for its intended purpose
- Monitoring the AI system, including suspending its use and notifying the provider in the event of significant threats or incidents (illustrated in the sketch after this list)
- Maintaining the AI system’s automatically generated logs
- Conducting a data protection impact assessment, using product information supplied by the provider, to identify and mitigate data protection risks
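As a loose illustration of the monitoring and suspension duty above, the sketch below shows one possible shape for an incident check that suspends use of the system and notifies the provider. The threshold, data fields, and notification mechanism are assumptions made for the example, not requirements taken from the Act.

```python
from dataclasses import dataclass

# Illustrative sketch only: a simplified incident check for a deployed
# high-risk AI system. Threshold and notification details are assumptions.
@dataclass
class SystemStatus:
    error_rate: float       # observed error rate over the monitoring window
    serious_incident: bool  # e.g. harm to health, safety, or fundamental rights

def suspend_system() -> None:
    print("AI system use suspended pending investigation.")

def notify_provider(status: SystemStatus) -> None:
    print(f"Provider notified: serious_incident={status.serious_incident}, "
          f"error_rate={status.error_rate:.2%}")

def monitor(status: SystemStatus, error_threshold: float = 0.05) -> str:
    """Decide whether to keep operating, or suspend use and notify the provider."""
    if status.serious_incident or status.error_rate > error_threshold:
        suspend_system()
        notify_provider(status)
        return "suspended"
    return "operational"

# Example usage
print(monitor(SystemStatus(error_rate=0.02, serious_incident=True)))  # suspended
```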
Lastly, under the transparency obligations set by the AI Act, providers of GPAI models must:
1. Produce technical documentation detailing the AI model’s training and testing processes and evaluation results
2. Develop product information and documentation to be used by providers integrating the AI model into their own AI systems
3. Establish a policy in accordance with the rules of the EU Copyright Directive
4. Publish a comprehensive summary of the content used for training the AI model
For providers of GPAI models released under a free and open-source license, only requirements 3 and 4 apply. However, for providers of GPAI models posing high-impact or systemic risks, additional obligations are mandated:
- Performing model evaluation, including adversarial testing to identify and mitigate systemic risk
- Assessing and mitigating systemic risks and their sources
- Monitoring, documenting, and reporting serious incidents to relevant national authorities
- Implementing appropriate cybersecurity measures
Providers and users of high-risk AI systems under Annex III, such as those used for critical infrastructure, must comply by August 2, 2026. For providers and users of high-risk AI systems under Annex I, such as those used as safety components of regulated products like medical devices, the compliance deadline comes a year later, on August 2, 2027.
Providers and users of GPAI models and systems, on the other hand, must achieve compliance with the AI Act earlier, by August 2, 2025, while prohibited AI systems are officially banned within the EU from February 2, 2025.
The 6clicks solution to AI risk management
To help you get started, the 6clicks platform’s integrated risk management and security compliance solutions can provide you with the tools you need to achieve compliance with the requirements of the AI Act.
Leverage our Responsible AI solution to identify your AI systems and conduct impact assessments using our AI system impact assessment templates. Then, utilize our tailored AI risk libraries to streamline risk identification. The 6clicks platform’s built-in risk registers and task management functionalities enable you to perform risk assessments and create risk treatment plans to mitigate risks.
Finally, put risk mitigation measures and security controls in place and monitor their performance with our turnkey AI control sets and control management functionality.
Discover the powerful features of 6clicks.