Skip to content

EU Artificial Intelligence Act: A comprehensive guide

Louis Strauss |

August 20, 2024
EU Artificial Intelligence Act: A comprehensive guide

Audio version

EU Artificial Intelligence Act: A comprehensive guide
11:39

Contents

The Artificial Intelligence Act is the world’s first comprehensive legal framework for the use of AI technologies within the European Union. It was first published on July 12, 2024, and recently entered into force on August 1, presenting significant implications for organizations within the scope of the regulation. Let’s dive into the details and requirements of the legislation so you can start your journey to compliance as early as now. Read on to learn more:

What is the Artificial Intelligence Act?

As part of the EU’s digital resilience strategy, the AI Act aims to strengthen AI governance and ensure the trustworthy development and implementation of AI systems by addressing the risks of AI models that have a powerful impact on society and the economy. It harmonizes all rules on AI by classifying AI systems according to the different levels of risk they pose, mandating detailed obligations for the providers and users of these technologies to prioritize ethical principles.

With its risk-based approach, the AI Act identifies four levels of risk for AI systems:

1 (1)

  • Unacceptable risk – Refers to the risk arising from AI systems that can manipulate the behavior of end users, remotely and instantly identify persons using biometric data, and enable harmful profiling and predictions. AI systems with unacceptable risks are considered threats and are therefore prohibited under the law.
  • High risk – Refers to the risk of AI systems compromising people’s health, safety, and fundamental rights. High-risk AI systems such as those used as a safety component or for critical infrastructure, are permitted but regulated under specific mandatory requirements and conformity assessment procedures.
  • Limited risk – Refers to the risk related to general purpose AI (GPAI) systems that can generate images, videos, or other media that closely resemble authentic content or existing persons, objects, entities, and events. Chatbots and deepfakes are examples of these limited-risk AI systems, for which minimum transparency obligations are imposed. Organizations using GPAI systems must ensure that end users are aware that they are interacting with AI or are consuming artificially generated content.
  • Minimal risk – Refers to the risk associated with the vast majority of AI-enabled products and technologies currently available publicly, such as video games and spam filters. These AI systems present minimal to no risk and are therefore unregulated.

Scope

In addition to the classification of risks produced by AI systems, the regulation also categorizes the entities subject to compliance into four groups:

  • Providers – Refer to developers of AI systems and GPAI models and product manufacturers that provide and distribute AI systems under their own name or trademark. Small enterprises or small-scale providers are also included.
  • Users – Refer to entities that use AI systems in a professional capacity and deploy AI systems or AI-generated output for end users
  • Distributors – Refer to entities apart from providers and manufacturers that supply AI systems to users
  • Importers – Refer to EU-based entities deploying AI systems that bear the name or trademark of another entity established outside the EU

One of the distinct features of the AI Act is that it also applies to organizations in third-world countries developing and deploying AI systems that are used within the EU. This excludes AI systems that are developed or used exclusively for military purposes and law enforcement under international agreements.

Components

Based on the rules previously discussed, the AI Act has 3 major provisions, particularly concerning:

2 (1)

  • Prohibited AI systems – These include:
    • Manipulative AI: AI systems that distort the behavior and impair the decision-making capabilities of end users or those that exploit the vulnerabilities of marginalized groups such as children and persons with disabilities
    • Biometric categorization systems: AI systems that process biometric data to assign sensitive identifiers such as sexual orientation, political orientation, religion, and personality traits to individuals or groups
    • Social scoring systems: AI systems that use social behavior or personal traits to categorize individuals or groups, resulting in biased or unfavorable treatment
    • Predictive policing tools: AI systems that predict the likelihood of criminal activity based solely on profiling or an analysis of personality traits
    • Facial recognition scraping tools: AI systems that conduct facial recognition through untargeted scraping of databases such as the internet or CCTV footage
    • Emotion recognition systems: AI systems that interpret human emotions based on biometric data
    • Remote biometric identification systems: Including real-time remote biometric identification systems that can identify individuals without their active involvement by capturing biometric data from a distance and comparing it with data in external databases. These AI systems are prohibited unless used for law enforcement purposes involving serious or life-threatening cases.
  • High-risk AI systems – These are primarily classified into:
    • AI systems that are used as a safety component of products detailed in Annex I of the AI Act, which include machinery, toys, medical devices, and more
    • AI systems designed for use cases outlined in Annex III, which include:
        • Non-banned biometrics such as biometric verification
        • Critical infrastructure such as transport, energy, and digital infrastructure
        • Education
        • Employment
        • Essential public and private services such as emergency and medical aid, health and life insurance, and credit evaluation
        • Law enforcement
        • Migration, asylum, and border control management
        • Administration of justice and democratic processes
  •  Limited-risk AI systems – These are comprised of:
    • GPAI systems, which are built on AI models and can be used as high-risk AI systems or integrated with them to fulfill various functions
    • GPAI models, which are trained in massive amounts of data and can be integrated with other systems or applications and perform a variety of tasks

What are the requirements of the AI Act?

Most of the requirements of the EU AI Act fall on providers of high-risk AI systems. Before putting their AI systems into service, high-risk AI providers must undergo a third-party conformity assessment and implement the following measures:

3 (1)

  1. Risk management system – Providers must establish a risk management system, which involves identifying, analyzing, and evaluating risks and implementing risk management measures throughout the entire lifecycle of the AI system.
  2. Data governance – Providers must ensure that training, validation, and testing datasets are accurate, relevant, and adhere to data collection, data preparation, and other data governance and management standards.
  3. Technical documentation – Providers must prepare technical documentation of their AI systems before deployment to verify their compliance with the requirements set by the regulation.
  4. Record-keeping – Providers must design their AI systems to automatically log significant events, allowing for continuous monitoring of their operation and traceability of all modifications and improvements throughout the system's lifecycle.
  5. Transparency – Providers must build AI systems that users can understand without difficulty and support them with clear and comprehensive instructions to ensure easy operation.
  6. Human oversight – Providers must develop user-friendly interfaces for their AI systems and enable human supervision of their operation to prevent or mitigate risks.
  7. Accuracy, robustness, and cybersecurity – Providers must maintain the resilience of their AI systems against errors, vulnerabilities, and attacks by implementing technical solutions, security controls, and mitigation measures.
  8. Quality management system – Providers must put in place quality management systems consisting of policies and procedures to ensure their AI systems comply with the regulation.
Meanwhile, users of high-risk AI systems have the following obligations:
  1. Operation of the AI system in accordance with its provided instructions
  2. Management of resources and activities needed to implement the human oversight measures specified by the provider
  3. Ensuring that all data entered into the AI system is appropriate and relevant for its intended purpose
  4. Monitoring the AI system, including suspending its use and notifying the provider in the event of significant threats or incidents
  5. Maintaining the AI system’s automatically generated logs
  6. Conducting a data protection impact assessment using product information supplied by the provider to identify and mitigate risks to data security

Lastly, with the transparency obligations set by the AI Act, providers of GPAI models must:

  1. Produce technical documentation detailing the AI model’s training and testing processes and evaluation results
  2. Develop product information and documentation to be used by providers integrating the AI model into their own AI systems
  3. Establish a policy in accordance with the rules of the EU Copyright Directive
  4. Publish a comprehensive summary of the content used for training the AI model

For providers of free and open-license GPAI models, only requirements 3 and 4 apply.  However, for providers of GPAI models with high-impact or systemic risks, additional obligations are mandated:

  1. Performing model evaluation, including adversarial testing to identify and mitigate systemic risk
  2. Assessing and mitigating systemic risks and their sources
  3. Monitoring, documenting, and reporting significant incidents to relevant national authorities
  4. Implementing appropriate cybersecurity measures

By August 2, 2026, providers and users of high-risk AI systems under Annex III, such as those used for critical infrastructure, are required to comply. For providers and users of high-risk AI systems under Annex I, such as those used as a safety component of specialized products like medical devices, the deadline for compliance comes a year after, which is August 2, 2027.

On the other hand, providers and users of GPAI models and systems must secure their compliance with the AI Act as early as August 2, 2025. Prohibited AI systems will officially be banned within the EU by February 2, 2025.

The 6clicks solution to AI risk management

To help you get started, the 6clicks platform’s integrated risk management and security compliance solutions can provide you with the tools you need to achieve compliance with the requirements of the AI Act.

Leveraging our Responsible AI solution, identify your AI systems and conduct an impact assessment using our AI system impact assessment templates. Then, utilize our tailored AI risk libraries to streamline your risk identification. The 6clicks platform’s built-in risk registers and task management functionalities enable you to conduct risk assessments and create risk treatment plans to mitigate risks.

Finally, put risk mitigation measures and security controls in place and monitor their performance with our turnkey AI control sets and control management functionality.

Discover the powerful features of 6clicks.



Frequently asked questions

What are the main components of the EU AI Act?

The EU Artificial Intelligence Act contains provisions on prohibited AI systems such as manipulative AI and social scoring systems; high-risk AI systems such as those used for critical infrastructure like transport and energy and those used as a safety component of specialized products like medical devices; and limited-risk AI systems such as general purpose AI systems (GPAI) like chatbots and deepfakes.

Which entities does the AI Act apply to?

Organizations that develop and use high-risk AI systems, GPAI models, and GPAI systems within the 27 Member States of the EU are required to comply with the requirements mandated by the AI Act, which includes undergoing a conformity assessment and implementing risk management, transparency, record-keeping, and cybersecurity measures among others.

When is the deadline for implementing the requirements of the AI Act?

By February 2, 2025, provisions for prohibited AI systems will take effect, while providers and users of GPAI models and systems have until August 2, 2025, to comply with the AI Act. Providers and users of high-risk AI systems under Annex III must fully implement the requirements of the AI Act by August 2, 2026. For providers and users of high-risk AI systems under Annex I, the deadline for compliance is August 2, 2027.



Louis Strauss

Written by Louis Strauss

Louis is the Co-founder and Chief Product Marketing Officer (CPMO) at 6clicks, where he spearheads collaboration among product, marketing, engineering, and sales teams. With a deep-seated passion for innovation, Louis drives the development of elegant AI-powered solutions tailored to address the intricate challenges CISOs, InfoSec teams, and GRC professionals face. Beyond cyber GRC, Louis enjoys reading and spending time with his friends and family.