AI Act 2026: how to classify your AI systems and meet obligations by risk level

The AI Act is not just another regulation that sits on paper. It is the world’s first comprehensive law on artificial intelligence, with global extraterritorial reach, sanctions of up to 7% of global turnover and a real impact on how you design, deploy and operate AI systems. This guide explains, without legal jargon, how to classify your systems, what to do for each risk level and how to integrate it with NIS2, DORA and GDPR without duplicating work.

1. What the AI Act actually is

The AI Act is Regulation (EU) 2024/1689, in force since August 1, 2024, with phased application until August 2027. It is the first horizontal law in the world to regulate AI systems by their risk to fundamental rights, health and safety, with sanctions of up to €35M or 7% of global annual turnover, whichever is higher, for prohibited practices.

It applies to providers (those who develop or have an AI system developed and place it on the market in the EU), deployers (those who use it in a professional capacity) and to distributors, importers and authorized representatives. It also applies to non-EU providers if the system’s output is used in the EU, which gives it global extraterritorial reach similar to GDPR.

What it is not: a generic AI ethics code, a ban on AI in business, nor a regulation that only affects big tech. It is a risk-based legal framework that requires technical documentation, human oversight and reporting whenever there is impact on legal rights or essential services.

2. The 4 risk levels: how to classify your AI

The Regulation classifies AI systems into four levels. Most enterprise systems fall under limited or minimal risk, but the obligations grow steeply as you move up the pyramid.

2.1 Unacceptable risk — prohibited

Banned outright since February 2, 2025. Includes social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement), emotion recognition at work and in education, untargeted scraping of facial images and AI that exploits vulnerabilities of specific groups. If your project falls here, you stop. Period.

2.2 High risk — the bulk of obligations

The Regulation defines two paths to high risk. Annex I: AI as a safety component of products already regulated (medical devices, machinery, vehicles, toys, lifts). Annex III: eight use case areas — biometrics, critical infrastructure, education and vocational training, employment and HR, access to essential services (credit, insurance, emergency dispatch), law enforcement, migration and border control, justice and democratic processes.

If your system fits any of these, you must comply with the full chapter of obligations: risk management system, data governance, technical documentation, automatic logging, transparency, human oversight, accuracy, robustness and cybersecurity, conformity assessment, EU declaration and CE marking, post-market monitoring and incident reporting.

2.3 Limited risk — transparency obligations

Systems that interact with people (chatbots), generate or manipulate content (deepfakes, synthetic media) or perform emotion recognition outside prohibited contexts. The obligation is essentially to inform the user that they are interacting with AI or that the content is AI-generated. Generative AI outputs must be detectable as such using machine-readable marking.
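To make "machine-readable marking" concrete, here is a minimal sketch at the API layer. The envelope and field names are illustrative assumptions, not a format mandated by the Regulation (for media content, industry standards such as C2PA are one option).

```python
# Minimal sketch: flagging generated output as AI-produced in a machine-readable
# way. The envelope format is an assumption for illustration, not a legal spec.
import json
from datetime import datetime, timezone

def wrap_generated_content(text: str, model_id: str) -> str:
    """Return generated text wrapped in a machine-readable provenance envelope."""
    envelope = {
        "content": text,
        "provenance": {
            "ai_generated": True,        # explicit machine-readable flag
            "generator": model_id,       # which model produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

print(wrap_generated_content("Draft reply to the customer...", "acme-llm-v2"))
```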

2.4 Minimal risk — voluntary codes

Everything else: spam filters, AI in video games, recommender systems with no significant impact on legal rights, productivity assistants. No specific obligations beyond voluntarily adhering to codes of conduct. Most business AI lives here.
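To make the triage tangible, here is a deliberately simplified sketch of the four-level decision in code. The category sets merely compress sections 2.1 to 2.4 into illustrative rules; real classification requires case-by-case legal analysis.

```python
# Simplified sketch of the four-level triage described above. The rules below
# compress sections 2.1-2.4 into illustrative code, not legal advice.
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

PROHIBITED_PRACTICES = {"social_scoring", "realtime_remote_biometric_id",
                        "emotion_recognition_workplace", "untargeted_face_scraping"}
ANNEX_III_AREAS = {"biometrics", "critical_infrastructure", "education",
                   "employment", "essential_services", "law_enforcement",
                   "migration", "justice"}

def classify(practice: str, area: str, interacts_or_generates: bool,
             safety_component: bool) -> RiskLevel:
    if practice in PROHIBITED_PRACTICES:
        return RiskLevel.PROHIBITED          # 2.1: stop the project
    if safety_component or area in ANNEX_III_AREAS:
        return RiskLevel.HIGH                # 2.2: full chapter of obligations
    if interacts_or_generates:
        return RiskLevel.LIMITED             # 2.3: transparency duties
    return RiskLevel.MINIMAL                 # 2.4: voluntary codes

print(classify("none", "employment", False, False))  # RiskLevel.HIGH
```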

3. High-risk systems: obligations in detail

This is where 80% of operational effort concentrates. Obligations are split between provider and deployer, but in many projects the deployer ends up assuming partial provider responsibilities (e.g., when fine-tuning a model with internal data, the deployer can become a provider for that specific use).

3.1 Obligations for providers

  • Risk management system: continuous identification, evaluation and mitigation of risks throughout the system’s lifecycle.
  • Data governance: training, validation and test datasets must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in light of the intended purpose, with bias examination.
  • Technical documentation: detailed and updated, in line with Annex IV of the Regulation. Must be available to authorities for at least 10 years after market placement.
  • Automatic logging: events that allow traceability of operation, identification of risks and post-market monitoring (see the logging sketch after this list).
  • Transparency: instructions for use that allow deployers to interpret outputs and use them properly.
  • Human oversight: technical and design measures so that natural persons can effectively supervise the system in operation.
  • Accuracy, robustness, cybersecurity: appropriate levels declared in the instructions; resilient against errors, faults and adversarial attacks. Article 15 of the AI Act makes cybersecurity a direct legal requirement.
  • Conformity assessment: internal procedure (Annex VI) or through notified body (Annex VII) before placing on the market.
  • CE marking and EU declaration: visible, legible and indelible on the system or its documentation.
  • Registration in the EU database: before placing on the market, providers register the system in the European database managed by the Commission.
  • Post-market monitoring and incident reporting: active monitoring system and reporting of serious incidents to the competent authority within 15 days (shorter deadlines for the most severe cases).
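A minimal sketch of what the automatic-logging obligation can look like in practice: one structured, traceable record per inference. The schema is an assumption about useful fields; the Regulation requires traceability, not this particular format.

```python
# Minimal sketch of structured audit logging for a high-risk AI system.
# Field names are assumptions of what a traceable record could contain.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_inference(system_id: str, model_version: str, input_ref: str,
                  output_ref: str, overseer: str | None) -> None:
    """Emit one traceable record per inference for post-market monitoring."""
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,      # a reference, not raw data (GDPR minimization)
        "output_ref": output_ref,
        "human_overseer": overseer,  # who could intervene, if anyone
    }))

log_inference("credit-scoring-v3", "3.2.1", "case-8841", "decision-8841", "j.perez")
```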

3.2 Obligations for deployers

  • Use the system according to the provider’s instructions.
  • Implement human oversight measures with technically competent staff.
  • Ensure that input data are relevant and sufficiently representative for the intended purpose.
  • Monitor operation and notify the provider or distributor of any risk or incident detected.
  • Keep automatic logs generated by the system for at least 6 months when under their control (a retention sketch follows this list).
  • Inform affected persons when decisions about them are made or assisted by the system.
  • Carry out a FRIA when applicable (see next subsection).
  • Register their use in the EU database when they are public authorities.
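The six-month log retention duty translates naturally into a scheduled sweep. Below is a sketch assuming JSONL log files on disk; six months is the AI Act minimum, and other law (or litigation holds) may require keeping logs longer, so treat the window as a policy input rather than a hard rule.

```python
# Sketch of a retention sweep for AI system logs under the deployer's control.
# 183 days approximates the 6-month AI Act minimum; deleting right at the
# minimum is a policy choice (GDPR minimization may favor it, other law may not).
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)

def purge_expired_logs(log_dir: Path) -> list[Path]:
    """Delete log files older than the retention window; return what was removed."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    removed = []
    for f in log_dir.glob("*.jsonl"):
        mtime = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
        if mtime < cutoff:
            f.unlink()
            removed.append(f)
    return removed

# Usage (hypothetical path): removed = purge_expired_logs(Path("/var/log/ai-system"))
```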

3.3 FRIA: Fundamental Rights Impact Assessment

The FRIA is one of the AI Act’s genuine novelties versus other regulations. It is mandatory for deployers of high-risk systems that are public bodies or private entities providing services of general interest (energy, water, banking, healthcare, education). It must include: description of the use of the system, period and frequency of use, categories of natural persons likely to be affected, foreseeable specific risks of harm to those groups, human oversight measures and risk mitigation measures if those risks materialize.
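The required content maps naturally onto a structured record, which helps keep FRIAs consistent across systems. A minimal sketch follows; the field names are assumptions, and the questionnaire template the AI Office is tasked with developing will govern the actual format.

```python
# Sketch of the FRIA content as a structured record, mirroring the elements
# listed above. Field names and the example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FRIARecord:
    system_description: str          # how the system will be used
    use_period_and_frequency: str    # e.g. "continuous, ~500 decisions/day"
    affected_groups: list[str]       # categories of natural persons affected
    foreseeable_risks: list[str]     # specific risks of harm to those groups
    oversight_measures: list[str]    # human oversight in place
    mitigation_measures: list[str]   # what happens if risks materialize

fria = FRIARecord(
    system_description="AI triage of social benefit applications",
    use_period_and_frequency="continuous, reassessed quarterly",
    affected_groups=["benefit applicants", "applicants with disabilities"],
    foreseeable_risks=["indirect discrimination via proxy variables"],
    oversight_measures=["caseworker review of every denial"],
    mitigation_measures=["suspend model, revert to manual processing"],
)
```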

It is not just a DPIA renamed. The DPIA (GDPR) focuses on personal data protection; the FRIA expands the scope to the full set of fundamental rights of the EU Charter (dignity, non-discrimination, freedom of expression, due process, social rights). The two can — and should — be coordinated, but they are not the same document.

4. GPAI: obligations for general-purpose models

GPAI models (General-Purpose AI) are foundation models trained on large amounts of data with self-supervision, capable of performing a wide range of tasks (LLMs, vision-language, code generation). The Regulation imposes specific obligations on them, applicable since August 2, 2025.

4.1 Obligations applicable to all GPAI

  • Updated technical documentation on the model: architecture, training process, evaluation results.
  • Information for downstream providers that integrate the model into their AI systems.
  • Copyright policy respecting reservation of text and data mining rights.
  • Public summary of training data used (model card with audited sources).

4.2 Additional obligations for systemic-risk GPAI

A GPAI is considered systemic risk when it exceeds 10^25 FLOPs of training compute or when the European AI Office designates it as such for impact reasons. They add: model evaluations and adversarial testing, evaluation and mitigation of systemic risks at EU level, reporting serious incidents to the AI Office within established timeframes and reinforced cybersecurity of the model and its physical infrastructure.
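A back-of-the-envelope check against that threshold, using the common industry heuristic that training compute is roughly 6 × parameters × training tokens. The heuristic and the example model size are assumptions for illustration, not part of the Regulation.

```python
# Quick check against the 10^25 FLOPs presumption, using the "6ND" rule of
# thumb (compute ~ 6 x parameters x tokens). Heuristic, not regulatory text.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical 70B-parameter model trained on 15T tokens:
compute = training_flops(70e9, 15e12)
print(f"{compute:.2e} FLOPs -> systemic risk presumed: {compute >= SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs -> systemic risk presumed: False (below the 1e25 threshold)
```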

In practice, only foundation models from the leading global labs reach the systemic-risk threshold. For 99% of European companies, the relevant position is that of downstream provider: if you integrate a GPAI model into a system you place on the market, you assume specific transparency obligations toward your users.

5. Timeline 2024-2027 and AESIA in Spain

Application is phased, and you need to know the schedule. Marking these dates on your roadmap avoids unpleasant surprises.

  • August 1, 2024: entry into force of the Regulation.
  • February 2, 2025: prohibitions apply (banned practices) and AI literacy obligations for providers and deployers.
  • August 2, 2025: obligations applicable to GPAI models; designation of national authorities; sanctions regime.
  • August 2, 2026: bulk of obligations for high-risk systems (Annex III); transparency rules; codes of conduct.
  • August 2, 2027: high-risk systems covered by Annex I (safety components of regulated products).

In Spain, the competent authority is AESIA (Spanish AI Supervision Agency), headquartered in A Coruña, operational since 2024. It supervises providers, deployers, importers and distributors of AI systems in Spanish territory. At European level, the European AI Office at the Commission directly supervises GPAI models with systemic risk and coordinates national authorities.

Spain has historically been a regulatory pioneer in this area: AESIA was created before the Regulation was even definitively published, and the Spanish AI Regulatory Sandbox was the first official EU pilot for testing AI Act compliance.

6. Convergence with NIS2, DORA and GDPR: do not duplicate work

The AI Act does not live in a vacuum. If you are already an entity in scope of NIS2 (essential or important) or DORA (financial entity), much of the cybersecurity, risk management and incident reporting work is reusable.

6.1 AI Act ↔ NIS2

Article 15 AI Act requires that high-risk systems have appropriate cybersecurity, considering known and unknown threats. NIS2 already requires 10 mandatory cybersecurity measures (risk analysis, incident handling, business continuity, supply chain, encryption, MFA…). Practical mapping: if you already have a NIS2 ISMS, document how it covers each of the AI Act’s technical cybersecurity requirements. You will need to add specific controls for AI (defense against adversarial attacks, training data poisoning, model extraction), but the management framework is the same.
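One way to do that mapping once is a simple control matrix: each Article 15 theme points at the NIS2 controls that already cover it, and anything with no equivalent goes on an AI-specific list. The control identifiers below are illustrative assumptions, not entries from any official catalogue.

```python
# Sketch of a once-and-for-all mapping from AI Act Art. 15 themes to existing
# NIS2 controls. IDs and the AI-specific additions are illustrative assumptions.
ARTICLE_15_TO_NIS2 = {
    "resilience_to_errors_and_faults": ["NIS2-continuity", "NIS2-backup"],
    "access_control_and_integrity":    ["NIS2-mfa", "NIS2-encryption"],
    "incident_handling":               ["NIS2-incident-handling"],
    "supply_chain_security":           ["NIS2-supply-chain"],
}

# AI-specific controls with no NIS2 equivalent: add rather than map.
AI_SPECIFIC_CONTROLS = [
    "adversarial-input robustness testing",
    "training-data poisoning detection",
    "model extraction / inversion monitoring",
]

for requirement, controls in ARTICLE_15_TO_NIS2.items():
    print(f"{requirement}: covered by {', '.join(controls)}")
```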

6.2 AI Act ↔ DORA

If you are a bank, insurer or fintech, DORA requires resilience testing, ICT third-party management and reporting of major incidents to the supervisor within tight timeframes. The AI Act adds incident reporting to AESIA for high-risk systems. Coordination: define a single incident pipeline with classification at the point of entry and parallel notification routes (AESIA / Bank of Spain / DGSFP / CNMV depending on incident type). The 15-day deadline for serious AI Act incidents is more generous than DORA’s, but if an AI incident coincides with a major ICT incident, both clocks run in parallel.
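A sketch of that single pipeline: classify the incident once at entry, then fan out to every authority whose regime is triggered. The routing flags and deadlines below are simplified assumptions (DORA’s initial-notification windows, in particular, depend on the applicable technical standards).

```python
# Sketch of "one pipeline, parallel routes": classify at entry, fan out to each
# authority whose regime is triggered. Deadlines are simplified assumptions.
from datetime import timedelta

def notification_routes(incident: dict) -> dict[str, timedelta]:
    """Map one classified incident to every applicable authority and deadline."""
    routes: dict[str, timedelta] = {}
    if incident.get("serious_ai_incident"):       # AI Act, high-risk system
        routes["AESIA"] = timedelta(days=15)
    if incident.get("major_ict_incident"):        # DORA, financial entity
        routes["financial_supervisor"] = timedelta(hours=4)   # initial notice
    if incident.get("personal_data_breach"):      # GDPR
        routes["data_protection_authority"] = timedelta(hours=72)
    return routes

print(notification_routes({"serious_ai_incident": True, "major_ict_incident": True}))
```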

6.3 AI Act ↔ GDPR

The intersection is enormous: any high-risk AI system processing personal data must comply with both. Practical points:

  • Legal basis for training: legitimate interest with an LIA, explicit consent or a contractual basis, depending on the case.
  • Coordinated FRIA + DPIA when both apply: different but linkable documents.
  • Article 22 GDPR (automated decisions with legal effects) adds a right to human review beyond what the AI Act requires for human oversight.
  • Data minimization in training versus the AI Act’s requirement of sufficiently representative datasets: finding the balance is the art.

7. Operational checklist: first 90 days

This is the plan I follow when guiding a company through AI Act adoption. It is not meant to be exhaustive, but if you complete it in the first 90 days, you will be in a good starting position.

Month 1: inventory and classification

  • Identify all AI systems in use, in development, or planned (including those embedded in third-party SaaS).
  • Classify each one by risk level (prohibited / high / limited / minimal) and document the rationale.
  • Determine your role in each system: provider, deployer, distributor, importer.
  • Identify dependencies with GPAI models and the contractual position of the supplier.
  • Map intersection with NIS2, DORA, GDPR (a sketch of an inventory record follows this list).
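A sketch of what one inventory row can look like when classification, role and regulatory overlap live in a single record; field names and values are assumptions to adapt to your tooling.

```python
# Sketch of one row in the month-1 AI inventory. Fields are assumptions.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    risk_level: str               # prohibited / high / limited / minimal
    rationale: str                # why it was classified that way
    role: str                     # provider / deployer / distributor / importer
    gpai_dependency: str | None   # upstream foundation model, if any
    overlapping_regimes: list[str]

inventory = [
    AISystemRecord("cv-screening", "high", "Annex III: employment",
                   "deployer", "vendor LLM", ["GDPR", "NIS2"]),
    AISystemRecord("spam-filter", "minimal", "no impact on legal rights",
                   "deployer", None, ["GDPR"]),
]
```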

Month 2: gap analysis and governance

  • For each high-risk system, run a gap analysis against the 11 provider obligations listed in section 3.1.
  • Define AI governance: AI committee, accountable functions, decision-making process, escalation.
  • Establish an AI literacy program (Article 4 AI Act) for the teams operating the systems.
  • Coordinate FRIA + DPIA for high-risk systems with impact on rights.
  • Open the conversation with the cybersecurity area to add controls specific to AI.

Month 3: action plan and quick wins

  • Prioritized action plan by risk, calendar of AI Act key dates and budget.
  • Update of contracts with AI suppliers (responsibilities, audit rights, incident notification).
  • Implementation of technical logging for high-risk systems already in operation.
  • Communication and training for deployer teams: how to use systems within the provider’s instructions and how to oversee them.
  • Definition of incident reporting procedure aligned with NIS2 / DORA / GDPR.

8. Frequent mistakes to avoid

Seven recurring patterns I see in companies starting to face the AI Act.

  1. Treating it as a purely legal matter. Most of the work is technical: documentation, logs, evaluations, oversight. The legal team alone cannot solve it.
  2. Assuming all your AI is minimal risk. The Annex III scope is broader than it seems: HR and recruitment systems, credit scoring, life and health insurance pricing… are high risk (note that AI used purely to detect financial fraud is expressly carved out).
  3. Ignoring the deployer role. If you use a high-risk SaaS, you have your own obligations (instructions, oversight, FRIA, logging) that you cannot delegate to the provider.
  4. Duplicating FRIA and DPIA without coordination. Two parallel teams writing similar documents that contradict each other. Define a single methodology with two outputs.
  5. Underestimating documentation. Annex IV requires technical documentation that, if not generated as you develop, is very expensive to reconstruct ex post.
  6. Mounting AI governance separately from existing one. The AI committee must connect with risk, cybersecurity, privacy and compliance committees. Otherwise it becomes irrelevant.
  7. Waiting until August 2026 to start. The obligations applicable since 2025 (prohibitions, AI literacy, GPAI) are already in force. And building a documentation system in 3 months is a very different exercise, in quality and cost, from building it in 24.

9. AI Act frequently asked questions

When does the AI Act become applicable?

The Regulation entered into force on August 1, 2024. Prohibitions apply from February 2, 2025; GPAI obligations from August 2, 2025; the bulk of high-risk obligations (Annex III) from August 2, 2026; and Annex I high-risk systems from August 2, 2027.

How do I know if my AI system is high-risk?

It is high-risk if it falls in any Annex III use case (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice and democratic processes) or if it is a safety component of a product regulated under Annex I. If you only use AI as a productivity assistant with no impact on legal rights, it most likely sits in the minimal-risk tier, with no specific obligations.

What is GPAI and what does it require?

GPAI (General-Purpose AI) refers to foundation models. They require transparency, technical documentation, copyright policy and a public training data summary. Systemic-risk GPAI (massive compute or significant impact) also require model evaluations, incident reporting and reinforced cybersecurity.

What is a FRIA and who has to perform it?

The Fundamental Rights Impact Assessment is mandatory for deployers of high-risk AI systems that are public bodies or private entities providing services of general interest. It analyses target groups, foreseeable risks, harm severity, mitigations and human oversight measures before deployment.

What relationship does it have with NIS2 and DORA?

They converge in cybersecurity (Art. 15 AI Act) and incident reporting. If you are already an essential or important entity under NIS2 or a financial entity under DORA, you can reuse risk management, vulnerability handling, supplier oversight and continuity controls. The trick is mapping once and reporting separately to each authority.

Who is the supervisory authority in Spain?

AESIA (Spanish AI Supervision Agency) headquartered in A Coruña. It supervises providers, deployers, importers and distributors of AI systems. The European AI Office at the Commission supervises GPAI models with systemic risk.

What sanctions does non-compliance carry?

Up to €35M or 7% of global annual turnover, whichever is higher, for prohibited practices; up to €15M or 3% for other infringements; and up to €7.5M or 1% for supplying incorrect information to authorities. For SMEs, the lower of each pair of amounts applies.
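Since the ceilings work as "whichever is higher" for undertakings, the binding cap depends on turnover. A quick illustration for a hypothetical company with €600M global turnover:

```python
# Which arm of the cap bites depends on turnover. Hypothetical example figures.
def max_fine(fixed_cap_eur: float, pct: float, turnover_eur: float) -> float:
    return max(fixed_cap_eur, pct * turnover_eur)

print(max_fine(35e6, 0.07, 600e6))  # 42000000.0 -> the 7% arm exceeds the €35M cap
print(max_fine(15e6, 0.03, 600e6))  # 18000000.0 -> the 3% arm exceeds the €15M cap
```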

Do I need to register my AI system in any database?

Yes, providers of high-risk AI systems must register before placing the system on the market in the EU AI database managed by the Commission. Deployers that are public authorities must also register their use of these systems.

Closing: the AI Act is not a legal project, it is an operating model

The AI Act will transform how AI is designed, deployed and operated in Europe. Companies treating it as a paperwork drill will lose months and money. Those approaching it as a chance to professionalize their AI governance — connected with NIS2 cybersecurity, GDPR privacy and DORA risk management — will come out reinforced.

The good news: most of the controls already exist if you have a serious ISMS, a robust DPIA program and a defined risk management framework. The new piece is the AI-specific layer: technical documentation per system, FRIA, AI logging, oversight by competent humans and reporting to AESIA. None of it is impossible. It is engineering work, regulatory judgment and clear governance.

If you want to apply this to your reality and design an AI Act adoption plan tailored to your sector, your governance maturity and your existing compliance stack, let’s talk. I work with manufacturers, financial entities and operators of essential services across the EU on AI compliance with a B2B approach: less PowerPoint, more spreadsheet that gets things done.
