EU AI Act Mapping for Engineering Teams


What the EU AI Act actually requires of an engineering team: the four risk tiers, the documentation burden, and the timeline that already started in 2025.

  • By Orzed Team
  • 7 min read
Key takeaways
  • Risk tier is determined by the AI system's use case, not by the model architecture or vendor.
  • Most B2B SaaS AI features land in limited-risk; most public-facing decision systems land in high-risk.
  • High-risk obligations include risk management, data governance, technical documentation and post-market monitoring.
  • General-purpose AI model providers have separate obligations; integrators inherit responsibilities through contract.

The EU AI Act is the first major regulation that treats AI systems as a category needing dedicated rules. It entered into force in August 2024, with obligations phasing in across 2025, 2026 and 2027. A meaningful number of engineering teams we work with first encountered it as a customer procurement question (“are you AI Act compliant?”) and only then realised the scope of what compliance actually means.

This piece is the practical version: the four risk tiers, how to tell which one a product lands in, and the engineering work each tier actually requires.

The four risk tiers, briefly

The Act categorises AI systems into four risk levels:

Unacceptable risk: prohibited. Social scoring by public authorities, emotion inference at workplaces or educational institutions, biometric categorisation based on sensitive characteristics, untargeted scraping of facial images. These are banned outright and have been since February 2025. If your product does any of these, the answer is not compliance, it is to stop doing them.

High risk. Annex III lists eight categories: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential services (credit, insurance, housing, public services), law enforcement, migration and border control, justice and democratic processes. AI systems used materially in any of these are high-risk and have substantial obligations.

Limited risk. AI systems that interact with humans (chatbots), generate or manipulate content (deepfakes), or perform emotion recognition or biometric categorisation outside high-risk contexts. These have transparency obligations: users must be informed they are interacting with AI, and AI-generated content must be labelled.

Minimal risk. Everything else. Most enterprise software AI features (recommendation systems, automated email triage, search ranking) fall here. No specific obligations under the Act, though general data protection law (GDPR) still applies.

The classification is by use case, not by technology. The same model can be in different tiers depending on how it is used. A face-recognition model used for unlocking a personal phone is minimal risk; the same model used for hiring decisions is high-risk.

How to classify your system

Walk the questions in order:

  1. Does the system do anything in the prohibited list? If yes, stop. The system cannot be sold in the EU.
  2. Is the use case in Annex III? If yes, the system is high-risk. The remaining questions are about which specific obligations apply.
  3. Does the system interact with humans, generate content or do emotion/biometric recognition? If yes, transparency obligations apply.
  4. Otherwise, the system is minimal risk. No specific Act obligations.

A common confusion: products that use AI components but where the AI is not material to the use case decision. For example, a CRM that uses AI to suggest email subject lines is not high-risk, even if the CRM is used in employment contexts. The AI must be materially involved in the consequential decision for the use case to fall under Annex III.
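The walkthrough above can be sketched as a decision function. This is a minimal illustration, not legal advice: the flag names and the `classify` helper are our own invention, and real classification needs review against the Act's actual wording.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(use_case: dict) -> RiskTier:
    """Walk the four questions in order. Flag names are illustrative."""
    # 1. Prohibited practices (Article 5): stop, cannot be sold in the EU.
    if use_case.get("prohibited_practice"):
        return RiskTier.PROHIBITED
    # 2. Annex III area AND the AI is material to the consequential decision.
    if use_case.get("annex_iii_area") and use_case.get("ai_material_to_decision"):
        return RiskTier.HIGH
    # 3. Transparency triggers: chatbots, generated content, emotion/biometric.
    if use_case.get("interacts_with_humans") or use_case.get("generates_content"):
        return RiskTier.LIMITED
    # 4. Everything else.
    return RiskTier.MINIMAL

# The CRM example from the text: an Annex III context where the AI is not
# material to the consequential decision does not make the system high-risk.
print(classify({"annex_iii_area": True,
                "ai_material_to_decision": False}))  # RiskTier.MINIMAL
```

The ordering matters: the prohibited check comes first because no amount of downstream compliance work rescues a banned practice.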

The honest classification matrix:

| Use case | Risk tier | Engineering implication |
| --- | --- | --- |
| Customer-support chatbot | Limited | Disclosure that user is talking to AI |
| Resume-ranking for hiring | High | Full Article 9-15 compliance |
| Code completion assistant | Minimal | None specific |
| Loan eligibility scoring | High | Full Article 9-15 compliance |
| Email subject suggestion | Minimal | None specific |
| Medical triage AI | High | Full + sectoral medical device regulations |
| Marketing content generator | Limited | Output labelling for AI-generated content |
| Internal knowledge search | Minimal | None specific |
| Biometric login | Limited (if not for sensitive characteristics) | Disclosure |
| Surveillance / public ID | Either prohibited or high-risk | Stop or full compliance |

High-risk obligations: what engineering actually has to do

If the product lands in high-risk, the obligations are non-trivial. Articles 9 through 15 of the Act spell them out. The engineering-relevant pieces:

Risk management system (Article 9). A continuous process across the entire lifecycle that identifies known and reasonably foreseeable risks, evaluates them, and adopts mitigation measures. This is operational, not a one-time document.

Data and data governance (Article 10). Training, validation and test datasets must be relevant, sufficiently representative, free of errors, and complete to the extent possible. Bias examination is required. Datasets must consider the geographical, behavioural and functional context of the intended use.
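Part of the Article 10 bias examination can at least be smoke-tested in CI. A minimal sketch: the `region` field and the 5% floor are assumptions for illustration, and a single threshold cannot capture the Act's full representativeness requirement.

```python
from collections import Counter

def representation_report(records: list[dict], field: str,
                          min_share: float = 0.05) -> dict:
    """Flag subgroups whose share of the dataset falls below a floor.

    A coarse check only: Article 10 asks for representativeness relative
    to the intended geographical, behavioural and functional context,
    which one threshold cannot capture. The 5% floor is an assumption.
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Illustrative training rows: one subgroup is badly under-represented.
rows = ([{"region": "DE"}] * 90
        + [{"region": "FR"}] * 8
        + [{"region": "PT"}] * 2)
print(representation_report(rows, "region"))  # {'PT': 0.02}
```

The value of a check like this is not that it proves compliance, but that it turns "examine datasets for bias" into a repeatable gate rather than a one-time review.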

Technical documentation (Article 11). A document that demonstrates the system’s compliance, kept up to date. The structure is in Annex IV: general description, detailed description of components, monitoring and control mechanisms, performance metrics, risk management documentation, post-market monitoring plan, list of harmonised standards applied.

Record-keeping (Article 12). Automatic logging of events over the lifetime of the system, with retention appropriate to the system’s purpose. The logs must enable identification of issues, post-market monitoring, and forensic analysis.
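A minimal sketch of Article 12-style event logging, assuming a JSON-lines sink; the field names are our own convention, not a format the Act mandates.

```python
import io
import json
import time
import uuid

def log_event(sink, system_id: str, event: str, payload: dict) -> dict:
    """Append one structured record per consequential event.

    Article 12 asks that logs support issue identification, post-market
    monitoring and forensic analysis; in practice that means timestamps,
    a stable system identifier, and references to inputs and outputs.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),        # wall-clock timestamp
        "system_id": system_id,   # stable identifier of the AI system
        "event": event,           # e.g. "inference", "override", "error"
        "payload": payload,       # references to inputs/outputs, not raw PII
    }
    sink.write(json.dumps(record) + "\n")
    return record

# In-memory sink for illustration; production would use durable storage
# with a retention period appropriate to the system's purpose.
sink = io.StringIO()
rec = log_event(sink, "resume-ranker-v3", "inference",
                {"input_ref": "doc-123", "score": 0.71, "model": "v3.2"})
```

Logging references rather than raw inputs keeps the Article 12 record useful for forensics without the log itself becoming a data-protection liability.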

Transparency (Article 13). Users must be informed about the AI’s capabilities, limitations, expected level of accuracy, and the human oversight measures in place.

Human oversight (Article 14). Design choices that enable human operators to oversee the system, understand its outputs, override decisions, and stop the system when needed.
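One common pattern for these design goals is routing low-confidence outputs to a human queue and wiring in a hard stop. A sketch under assumed names: the `OversightGate` class, its threshold, and the queue semantics are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    """Human-oversight wrapper: review queue, override point, stop switch.

    Sketch of the Article 14 design goals (oversee, understand, override,
    stop); the confidence floor and queue semantics are assumptions.
    """
    confidence_floor: float = 0.8
    stopped: bool = False
    review_queue: list = field(default_factory=list)

    def decide(self, model_output: str, confidence: float) -> str:
        if self.stopped:
            raise RuntimeError("system stopped by human operator")
        if confidence < self.confidence_floor:
            # Below the floor: defer to a human rather than act automatically.
            self.review_queue.append((model_output, confidence))
            return "PENDING_HUMAN_REVIEW"
        return model_output

    def stop(self) -> None:
        self.stopped = True  # the "stop the system when needed" requirement

gate = OversightGate()
print(gate.decide("reject_application", 0.55))   # PENDING_HUMAN_REVIEW
print(gate.decide("approve_application", 0.93))  # approve_application
```

The point of wrapping the model rather than patching individual call sites is that the stop switch and the review queue then cover every decision path by construction.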

Accuracy, robustness and cybersecurity (Article 15). Appropriate levels for the use case, with measures against bias, drift, adversarial attacks, and unauthorised access.

This is real work. For a high-risk system, expect three to six engineer-months of compliance scaffolding before the first launch, plus ongoing operational cost. Teams that try to retrofit compliance after a high-risk product is already in market typically find it costs significantly more than building it in.

Limited-risk obligations: lighter but real

Limited-risk obligations are mostly about disclosure:

  • Conversational AI: users must be informed they are interacting with AI, unless it is obvious from context.
  • AI-generated content: deepfakes and synthetic media must be labelled as such (with carve-outs for art, satire, and similar).
  • Emotion recognition or biometric categorisation: users must be informed when these systems are operating on them.

The technical work is mostly UI changes (a clear “this is an AI assistant” label, a content provenance metadata field) and documentation. For most product teams, this is a week of work, not a quarter.
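The labelling side of that work is often just attaching provenance metadata to generated output for the UI to render. A sketch assuming your own response envelope; these field names are our convention, not a format the Act prescribes.

```python
from datetime import datetime, timezone

def label_generated_content(text: str, model: str) -> dict:
    """Wrap AI-generated output in a provenance envelope.

    The obligation is that the AI origin be disclosed; the envelope
    fields here (ai_generated, generator, generated_at) are assumptions
    chosen so the frontend can render a clear label.
    """
    return {
        "content": text,
        "ai_generated": True,  # drives the "this is AI-generated" UI label
        "generator": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

envelope = label_generated_content("Spring sale: 20% off everything",
                                   "marketing-gen-1")
print(envelope["ai_generated"])  # True
```

Keeping the flag on the data rather than in the UI layer means exports, emails, and API consumers inherit the disclosure for free.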

General-purpose AI model obligations

The Act has a separate set of obligations for providers of general-purpose AI models (the foundation models): technical documentation of the model, information for downstream users, copyright policy, and energy/resource disclosure. For models with systemic risk (currently those above 10^25 FLOPs of training compute), additional obligations apply: model evaluations, adversarial testing, incident reporting.

Engineering teams integrating with hosted models inherit some responsibilities through contract. The model provider documents the model; the integrator documents how it is used. Make sure the contract with the provider covers the documentation you will need for your own compliance.

The timeline that already started

Phased applicability:

| Date | What applies |
| --- | --- |
| August 2024 | Act enters into force |
| February 2025 | Prohibited AI practices in force; AI literacy obligations on staff |
| August 2025 | General-purpose AI obligations in force (with grace period for older models) |
| August 2026 | High-risk AI system obligations in force |
| August 2027 | Full applicability of all Act provisions |

As of 2026, the prohibited-practice bans and the GPAI obligations are already binding, and the full high-risk regime activates in August 2026. Teams that have not started the compliance work by mid-2026 are already behind.

Penalties

The Act sets meaningful penalties:

  • Prohibited practices: up to 35 million euros or 7% of global annual turnover, whichever is higher
  • Other obligations: up to 15 million euros or 3% of global annual turnover
  • Supplying incorrect information: up to 7.5 million euros or 1.5% of global annual turnover

For mid-sized SaaS companies, the percentage-of-turnover number is the relevant one and it is large.

What we install on engagements

For a team building anything in or near Annex III, we walk through:

  1. Classification audit (one to three days)
  2. Gap analysis against the relevant obligations (one to two weeks)
  3. Technical documentation skeleton matching Annex IV
  4. Risk management process design (lightweight if proportionate to the use case)
  5. Logging and monitoring design that meets Article 12 requirements
  6. Human-oversight design that satisfies Article 14
  7. Pre-market conformity assessment plan

For a team in limited risk, the work is much smaller: classification confirmation, the disclosure UI changes, and a brief documentation pass. Often one engineer-week.

For a team in minimal risk, the work is to know that they are in minimal risk and document why. One day.

The Act is a real piece of regulation with real obligations. It is not a checkbox exercise and it is not a vague aspiration. The teams that engage with it deliberately, classify honestly, and ship the proportionate compliance posture for their tier do so without drama. The teams that hope it does not apply to them, or that conflate hosted-model assurances with their own integrator obligations, will spend significantly more time and money on it later, often in the middle of a customer procurement they would have closed faster with the documentation already in place.

Frequently asked questions

Does the AI Act apply to a US-based company with no EU office?

Yes if the AI system or its output is used by people in the EU. The Act has extraterritorial reach similar to GDPR. A US SaaS with EU users falls under the Act for those users. The penalties for non-compliance scale with global revenue, not EU-only revenue.

What counts as a high-risk AI system?

Annex III lists categories: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential public/private services, law enforcement, migration and border control, justice. Any system materially used for decisions in one of these areas is high-risk regardless of how it is built.

When did obligations start?

The Act entered into force in August 2024 with phased applicability. Prohibited practices and AI literacy obligations applied from February 2025. General-purpose AI obligations from August 2025. High-risk system obligations apply from August 2026, with full applicability from August 2027. Most obligations are already binding now in 2026.