
EU AI Act Penalties: What Technology Companies Need to Know Before August 2026

The EU AI Act carries fines of up to EUR 35 million or 7% of global turnover. With the high-risk system deadline four months away, this guide breaks down the penalty structure, enforcement signals, and practical steps to prepare.

Rini Mathew

6 April 2026

Why This Matters Now

On 2 August 2026, the most consequential tranche of the EU AI Act becomes enforceable: the obligations for high-risk AI systems. This deadline covers the vast majority of enterprise AI deployments, from automated hiring tools and credit-scoring models to medical device software and critical infrastructure management.

The stakes are not theoretical. The European Commission has already opened investigations and issued data retention orders. National authorities are standing up enforcement teams. And the penalties dwarf anything the technology sector has faced under previous EU regulation, including the GDPR.

Yet according to industry surveys, roughly 78% of enterprises deploying AI systems have not completed the compliance assessments required by the Act. Many have not started.

This article provides a detailed, practical guide to the EU AI Act's penalty regime: how fines are calculated, who enforces them, what early enforcement actions signal about regulatory intent, and what technology companies should do in the four months that remain.


The Three-Tier Penalty Structure

Article 99 of the EU AI Act establishes a tiered penalty framework that scales with the severity of the violation. Unlike the GDPR's two-tier system, the AI Act introduces three distinct tiers, each targeting a different category of non-compliance.

| Tier | Violation type | Maximum fine | Key articles |
|---|---|---|---|
| Tier 1 | Prohibited AI practices | EUR 35 million or 7% of global annual turnover (whichever is higher) | Article 5 |
| Tier 2 | High-risk system obligations | EUR 15 million or 3% of global annual turnover (whichever is higher) | Articles 6-49 |
| Tier 3 | Supplying incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover (whichever is higher) | Article 99(5) |

Tier 1: Prohibited AI Practices (Up to EUR 35M / 7%)

The highest penalties are reserved for deploying AI systems that the Act categorically bans under Article 5. These include social scoring systems by public authorities, AI that exploits vulnerabilities of specific groups (children, disabled persons), certain forms of real-time biometric identification in public spaces, and AI designed to manipulate persons through subliminal techniques. This tier has been enforceable since 2 February 2025.

Tier 2: High-Risk System Violations (Up to EUR 15M / 3%)

This tier covers the bulk of enterprise compliance obligations: risk management systems, data governance, technical documentation, record-keeping, transparency and human oversight requirements, accuracy, robustness, and cybersecurity standards. For most technology companies, this is the tier that demands immediate attention. Enforcement begins 2 August 2026.

Tier 3: Incorrect Information to Authorities (Up to EUR 7.5M / 1%)

Providing incorrect, incomplete, or misleading information to national competent authorities or notified bodies attracts a separate penalty. This is not a minor technicality. Regulators treat information integrity as foundational to the supervisory relationship, and companies that undermine it face both financial penalties and reputational consequences that can complicate future regulatory interactions.


Special Provisions for SMEs and Startups

The EU AI Act includes a notable concession for smaller operators. Under Article 99(6), fines for SMEs, including startups, are capped at the lower of the fixed amount or the percentage of global annual turnover. This inverts the calculation applied to large enterprises, where the higher figure applies.

The practical effect is significant:

Worked Examples

| Company type | Annual revenue | Tier 1 fixed cap | Tier 1 percentage (7%) | Actual maximum fine |
|---|---|---|---|---|
| Startup | EUR 500,000 | EUR 35,000,000 | EUR 35,000 | EUR 35,000 (lower of the two) |
| Scale-up | EUR 10,000,000 | EUR 35,000,000 | EUR 700,000 | EUR 700,000 (lower of the two) |
| Large enterprise | EUR 2,000,000,000 | EUR 35,000,000 | EUR 140,000,000 | EUR 140,000,000 (higher of the two) |

This proportionality mechanism is a deliberate policy choice: the EU wants to avoid crushing early-stage companies while maintaining meaningful deterrence for large technology firms. However, it does not exempt SMEs from the underlying compliance obligations. A startup deploying a high-risk AI system must still meet every requirement in the Act. The reduced fine ceiling simply means the financial consequences of failure are proportionate to the company's size.


The Enforcement Timeline: What Is Already Enforceable

A common misconception is that the EU AI Act is not yet in force. In reality, enforcement has already begun across multiple categories:

| Date | Milestone | Status |
|---|---|---|
| 2 February 2025 | Prohibited AI practices (Article 5) become enforceable | Active |
| July 2025 | Final GPAI Code of Practice published | Published |
| 2 August 2025 | General-purpose AI (GPAI) model obligations apply | Active |
| 2 August 2026 | High-risk AI system obligations enforceable | Four months away |
| 2 August 2027 | All remaining obligations enforceable | Pending |

The phased rollout means that companies operating GPAI models are already subject to compliance requirements and penalties. And companies deploying prohibited AI systems have been exposed to Tier 1 fines for over a year. The August 2026 deadline is not the beginning of enforcement; it is the point at which enforcement reaches its broadest scope.


Who Enforces the EU AI Act?

Enforcement responsibility is divided between EU-level and national-level bodies, a structure that will be familiar to companies that have navigated GDPR compliance across multiple jurisdictions.

The EU AI Office

The EU AI Office, housed within the European Commission, has exclusive competence over the enforcement of GPAI model obligations. This is a notable departure from the GDPR's decentralised model. The AI Office can directly investigate GPAI providers, request information, conduct evaluations, and impose fines from August 2026 onward. Its centralised authority means there is no forum-shopping opportunity for GPAI providers: the AI Office is the sole regulator.

National Competent Authorities

For all other AI Act obligations, including high-risk system requirements, each EU member state must designate at least one national competent authority and at least one market surveillance authority. These bodies will investigate complaints, conduct inspections, and impose penalties within their jurisdictions.

As of early 2026, however, only 3 of 27 member states have fully designated their national authorities. This fragmented readiness is a source of both risk and uncertainty:

  • Ireland is the most advanced, with 15 designated authorities covering different sectors
  • Spain has established AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) as its dedicated AI supervisory agency
  • Germany has designated the Bundesnetzagentur (Federal Network Agency) as its lead authority

For technology companies with operations across multiple member states, the uneven pace of national implementation creates a period of regulatory ambiguity. But this should not be mistaken for a grace period. When authorities are fully operational, they will enforce against non-compliance that predates their designation.


Early Enforcement Signals: X, Meta, and the GPAI Code of Practice

The European Commission has not waited for the full enforcement infrastructure to mature. Early enforcement actions provide a clear signal of regulatory intent.

The X (Twitter) Investigation

In January 2026, the Commission issued a data retention order to X (formerly Twitter) concerning its Grok AI chatbot. This was followed by the opening of formal infringement proceedings against X, the first such action under the AI Act's GPAI provisions. The Grok investigation centres on whether X's AI model meets the transparency and documentation obligations required under the GPAI framework.

The Meta Investigation

The investigation was subsequently expanded to Meta, which had notably refused to sign the GPAI Code of Practice published in July 2025. The Code of Practice, while technically voluntary, serves as the primary benchmark for demonstrating compliance with GPAI obligations. Declining to adopt it invites heightened regulatory scrutiny and removes a key safe harbour.

These early actions carry two lessons for the broader technology sector. First, the Commission is willing to pursue major global platforms aggressively and publicly. Second, the GPAI Code of Practice functions as a de facto compliance standard: companies that do not adopt it will need to demonstrate equivalent compliance through alternative means, a significantly harder path.


How AI Act Penalties Compare to GDPR Fines

The GDPR provides the most relevant precedent for understanding how the EU AI Act's penalty regime is likely to develop. The comparison is instructive, and in several respects, alarming.

| Dimension | GDPR | EU AI Act |
|---|---|---|
| Maximum fine (fixed) | EUR 20 million | EUR 35 million |
| Maximum fine (turnover %) | 4% of global annual turnover | 7% of global annual turnover |
| Fine calculation basis | Higher of fixed or percentage | Higher of fixed or percentage (lower for SMEs) |
| Cumulative fines since enforcement | Over EUR 7.1 billion (2018-2026) | Enforcement beginning |
| Early enforcement pace | Slow (2018-2020), then accelerated | Expected to accelerate faster |

The GDPR's enforcement trajectory is particularly instructive. In its first two years, penalties were relatively modest and infrequent. Regulators were building capacity, developing guidance, and establishing precedent. From 2020 onward, enforcement accelerated dramatically: the Irish Data Protection Commission alone issued over EUR 3 billion in fines against major technology companies.

AI Act enforcement is expected to follow a compressed version of this pattern. EU regulators have absorbed the lessons of GDPR enforcement. Institutional capacity, legal precedent for extraterritorial enforcement, and cross-border cooperation mechanisms are already in place. The ramp-up period is likely to be shorter and steeper.

For technology companies, the implication is clear: the window between the August 2026 enforcement date and the first significant fines may be considerably narrower than the equivalent GDPR window was.


Factors That Determine Fine Amounts

Article 99 of the AI Act specifies a detailed set of factors that authorities must consider when calculating penalties. Understanding these factors is essential for both compliance planning and enforcement response.

  • Nature, gravity, and duration of the infringement: A systemic failure maintained over years will attract higher penalties than a short-lived technical non-conformity.
  • Number of affected persons: AI systems that process decisions affecting millions (e.g., content recommendation, credit scoring) face inherently higher exposure than niche applications.
  • Intentional or negligent character: Deliberate circumvention of requirements is treated far more severely than good-faith errors in implementation.
  • Actions taken to mitigate damage: Companies that identify and remediate issues promptly can expect meaningful fine reductions.
  • Degree of cooperation with authorities: Proactive cooperation, including voluntary disclosure and responsive engagement with investigations, is an explicit mitigating factor.
  • Self-reporting: Organisations that self-report infringements before they are discovered by authorities receive specific mitigation credit.
  • Financial benefits gained from the infringement: If non-compliance generated measurable commercial advantage, authorities will factor this into the penalty to ensure fines are not simply a cost of doing business.
  • Size and market share of the operator: Larger operators with greater resources and market influence face proportionally higher expectations, and proportionally higher penalties.

The inclusion of self-reporting and cooperation as explicit mitigating factors creates a strong incentive for companies to establish internal compliance monitoring and incident response procedures. Companies that detect, report, and remediate their own compliance gaps will fare significantly better than those whose violations are discovered through complaints or market surveillance.


Practical Steps to Prepare Before August 2026

With four months until the high-risk system deadline, technology companies should prioritise the following actions:

1. Inventory and Classify Your AI Systems

Map every AI system your organisation develops, deploys, or distributes. Classify each system against the EU AI Act's risk categories, paying particular attention to the high-risk categories listed in Annex III: biometric systems, critical infrastructure, education and vocational training, employment and worker management, essential services (credit, insurance), law enforcement, migration management, and the administration of justice.
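An inventory of this kind can start as something very simple. The sketch below is a hypothetical illustration: the category names paraphrase Annex III, and the systems and matching logic are assumptions for demonstration, not a legal classification method.

```python
# Paraphrased Annex III high-risk categories (illustrative labels, not legal text)
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration", "justice",
}

def classify(system: dict) -> str:
    """Flag a system as high-risk if its declared use case maps to Annex III."""
    if system["use_case"] in ANNEX_III_CATEGORIES:
        return "high-risk"
    # Everything else still warrants a documented assessment before dismissal
    return "needs-review"

inventory = [
    {"name": "cv-screening-model", "use_case": "employment"},       # hypothetical system
    {"name": "support-chat-summariser", "use_case": "internal_tooling"},
]

for system in inventory:
    print(system["name"], "->", classify(system))
```

In practice the classification turns on legal analysis of the system's intended purpose, not a keyword match; the value of the exercise is forcing every system into the inventory with an explicit, recorded category.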

2. Conduct a Gap Assessment

For each high-risk system, assess current compliance against the Act's requirements: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy, robustness, and cybersecurity (Article 15). Document gaps and prioritise remediation by risk severity.
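The article-by-article assessment lends itself to a simple tracking structure. A minimal sketch (statuses and the severity scoring are illustrative assumptions; the article numbers come from the requirements listed above):

```python
# Articles 9-15 requirements for high-risk systems, per the Act
REQUIREMENTS = {
    9: "risk management", 10: "data governance", 11: "technical documentation",
    12: "record-keeping", 13: "transparency", 14: "human oversight",
    15: "accuracy, robustness, cybersecurity",
}

# Hypothetical assessment results for one system
status = {9: "done", 10: "gap", 11: "gap", 12: "done", 13: "done", 14: "gap", 15: "done"}
severity = {10: 3, 11: 2, 14: 1}  # assumed scoring: higher = remediate first

# Open gaps, ordered by severity for the remediation backlog
gaps = sorted((a for a, s in status.items() if s == "gap"),
              key=lambda a: severity.get(a, 0), reverse=True)
for article in gaps:
    print(f"Article {article} ({REQUIREMENTS[article]}): open gap")
```

Even a spreadsheet with the same columns serves the purpose; what matters is that every requirement has a recorded status and every gap has an owner and a priority.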

3. Establish Governance Structures

Designate internal responsibility for AI Act compliance. This may involve appointing an AI compliance officer, establishing an AI governance committee, or extending the mandate of existing data protection teams. The key is clear accountability and adequate resources.

4. Prepare Technical Documentation

The documentation requirements for high-risk systems are extensive and specific. Begin preparing conformity assessment documentation now. Retrospective documentation of systems already in production is one of the most time-consuming compliance tasks.

5. Build Incident Response and Self-Reporting Capability

Given the explicit mitigating value of self-reporting under Article 99, invest in internal monitoring systems that can detect compliance failures early and escalate them through a structured response process.
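The core of such a process is a durable record of when an issue was detected and when it was reported. A minimal sketch (the class, field names, and workflow are hypothetical illustrations, not a prescribed format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ComplianceIncident:
    """Hypothetical record of a detected compliance failure."""
    system: str
    description: str
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reported_to_authority: bool = False

    def mark_reported(self) -> None:
        # A timestamped self-report record supports the Article 99
        # mitigation argument if the incident later draws scrutiny.
        self.reported_to_authority = True

incident = ComplianceIncident("cv-screening-model", "missing human-oversight logs")
incident.mark_reported()
print(incident.reported_to_authority)  # True
```

A real implementation would sit inside an incident-management system with escalation rules and legal review before any external report; the point is that detection, assessment, and reporting each leave an auditable trace.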

6. Engage Legal Counsel

The EU AI Act interacts with the GDPR, sector-specific regulations, national implementation measures, and contractual obligations across supply chains. Specialist legal guidance is not optional: it is a prerequisite for effective compliance.


How Lawsel Advisory Can Help

Lawsel Advisory provides end-to-end AI governance and EU AI Act compliance support for technology companies. Our services include:

  • AI system classification and risk assessment against the EU AI Act's Annex III categories
  • Gap analysis and compliance roadmap development for high-risk system obligations
  • Technical documentation and conformity assessment preparation
  • AI governance framework design, including board-level oversight structures and internal accountability models
  • GPAI Code of Practice advisory for general-purpose AI model providers
  • Regulatory response support, including cooperation strategy and self-reporting protocols
  • Cross-regulatory alignment across the AI Act, GDPR, DPDPA, and sector-specific frameworks

The August 2026 deadline is four months away. The penalty structure is defined, the enforcement infrastructure is being assembled, and the early enforcement signals are unmistakable. The question is not whether the EU will enforce the AI Act aggressively; it is whether your organisation will be prepared when it does.

Schedule a consultation to discuss your EU AI Act compliance position.


Rini Mathew · Founder, Lawsel Advisory
