Topic Hub

EU AI Act Compliance

Everything technology companies need to know about the EU AI Act — risk classification, compliance deadlines, and practical steps to prepare before the August 2026 enforcement date.

Overview

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, establishing binding obligations for companies developing or deploying AI systems in the European market. With the high-risk system deadline of 2 August 2026 now less than four months away, technology companies face a closing window to assess their AI systems, implement compliance programmes, and establish governance frameworks.

This hub brings together our practical guides, regulatory analysis, and free resources to help your team navigate EU AI Act compliance — whether you are a provider building AI systems or a deployer integrating AI into your products and operations.

What You'll Find Here

1. Practical guides on EU AI Act risk classification and obligations
2. Provider vs deployer compliance requirements explained
3. Key deadlines and enforcement timeline
4. Sector-specific guidance for SaaS, fintech, and healthtech
5. Downloadable checklists and compliance playbooks
6. General-purpose AI (GPAI) model obligations

Key Dates

Enforcement timeline and deadlines.

2 Feb 2025: Prohibited AI practices provisions became applicable.

2 Aug 2025: General-purpose AI (GPAI) obligations, governance structure, and penalties take effect.

2 Aug 2026: High-risk AI system obligations become enforceable. This is the primary compliance deadline for most companies.

2 Aug 2027: Obligations take effect for high-risk AI systems that are safety components of products covered by existing EU legislation.

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system that imposes different obligations depending on how an AI system is used, ranging from outright prohibitions on certain practices to transparency requirements for lower-risk systems.

Who does the EU AI Act apply to?

The Act applies to providers (developers) and deployers (users) of AI systems that are placed on the EU market or whose outputs are used in the EU — regardless of where the company is based. If your AI system affects people in the EU, you likely have obligations under the Act.

What are the penalties for non-compliance?

Penalties are tiered based on the severity of the violation. The highest fines — up to €35 million or 7% of global annual turnover — apply to prohibited AI practices. High-risk system violations can attract fines of up to €15 million or 3% of turnover. Providing incorrect information to authorities can result in fines up to €7.5 million or 1% of turnover.
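The tiered caps above can be expressed as simple arithmetic. The sketch below is illustrative only (not legal advice): it assumes the common "fixed amount or percentage of turnover, whichever is higher" reading that applies to most companies, and the function and tier names are our own.

```python
# Illustrative sketch of the EU AI Act's tiered fine caps, as summarised
# in the text above. Not legal advice; tier keys are hypothetical labels.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # prohibited AI practices
    "high_risk_violation": (15_000_000, 0.03),   # high-risk system obligations
    "incorrect_information": (7_500_000, 0.01),  # misleading authorities
}

def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Statutory maximum fine: fixed cap or % of turnover, whichever is higher."""
    fixed_cap, pct = TIERS[violation]
    return max(fixed_cap, pct * global_turnover_eur)

# Example: a company with EUR 1bn global turnover facing a
# prohibited-practice violation could face up to EUR 70m (7% > EUR 35m).
print(max_fine_eur("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed cap dominates; for large ones the turnover percentage does, which is why the percentage figures matter most to multinationals.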

How do I know if my AI system is high-risk?

The EU AI Act defines high-risk AI systems in two ways: (1) AI used as a safety component of products already covered by EU product safety legislation, and (2) AI used in specific use cases listed in Annex III, including biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. A self-assessment against these categories is the starting point.
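The two routes to high-risk classification described above can be sketched as a first-pass screening check. This is a simplified illustration, not a compliance tool: the category names mirror the summary above, and real classification requires legal review of the full Annex III wording and its exemptions.

```python
# Illustrative first-pass screen for the two high-risk routes described
# above. Not legal advice; area labels are simplified shorthand.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def likely_high_risk(is_safety_component: bool, use_case_area: str) -> bool:
    """Flag a system for deeper review if either high-risk route applies."""
    # Route 1: safety component of a product under EU product safety law.
    # Route 2: use case falls in an Annex III area.
    return is_safety_component or use_case_area in ANNEX_III_AREAS

# Example: an AI tool that screens job applicants falls under "employment".
print(likely_high_risk(False, "employment"))  # True
```

A "True" result here means the system warrants a full conformity assessment against the actual Annex III text, not that it is definitively high-risk.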

When do companies need to be compliant?

The Act is being phased in. Prohibited AI practices became enforceable in February 2025. GPAI rules apply from August 2025. The main deadline — obligations for high-risk AI systems — is 2 August 2026. Companies should already be working towards compliance given the complexity of the requirements.

Does the EU AI Act apply to companies outside the EU?

Yes. The Act has extraterritorial reach, similar to the GDPR. It applies to any provider placing an AI system on the EU market and any deployer using an AI system whose output is used within the EU, regardless of where the provider or deployer is established.

Need help with EU AI Act compliance?

30 minutes. No preparation. No obligation.

Free 30-Min Consultation