Compliance

EU AI Act Compliance Checklist for August 2026

March 26, 2026 · By TruthVouch Team · 14 min read

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI law, and its most consequential enforcement deadline lands on August 2, 2026. On that date, requirements for high-risk AI systems, transparency obligations, and innovation measures all take effect — backed by penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

This EU AI Act compliance checklist breaks down the 12 areas you must address before the deadline, organized by article number. For each item, you will find what the regulation requires, what evidence you need, and the common gaps that catch organizations off guard. If you are building an AI governance framework from scratch, this checklist is your regulatory starting point.

If your organization places AI systems on the EU market — or if the output of your AI systems reaches EU users — this applies to you, regardless of where you are headquartered. Article 2 of the Act establishes extraterritorial scope modeled on GDPR: if your AI touches the EU, you must comply.

When Does the EU AI Act Take Effect?

The EU AI Act is a regulation that entered into force on August 1, 2024, with obligations rolling out in four phases. Here is where things stand today and what is coming next.

```mermaid
gantt
    title EU AI Act Enforcement Timeline
    dateFormat YYYY-MM-DD
    axisFormat %b %Y

    section Already in Effect
    Prohibited AI practices (Art. 5)           :done, 2025-02-02, 1d
    AI literacy requirements (Art. 4)          :done, 2025-02-02, 1d
    GPAI model obligations (Art. 53)           :done, 2025-08-02, 1d
    National authorities designated            :done, 2025-08-02, 1d

    section August 2, 2026
    High-risk AI rules (Annex III)             :crit, 2026-08-02, 1d
    Transparency obligations (Art. 50)         :crit, 2026-08-02, 1d
    Innovation sandboxes required              :crit, 2026-08-02, 1d

    section August 2, 2027
    High-risk in Annex I products              :2027-08-02, 1d
    Pre-market GPAI compliance                 :2027-08-02, 1d
```

Figure: EU AI Act phased enforcement timeline. Dates sourced from the European Commission’s implementation timeline.

| Phase | Date | What Applies |
|---|---|---|
| Phase 1 (in effect) | February 2, 2025 | Prohibited AI practices banned; AI literacy obligations begin |
| Phase 2 (in effect) | August 2, 2025 | GPAI model obligations; national competent authorities designated; EU AI Board and Scientific Panel operational |
| Phase 3 (upcoming) | August 2, 2026 | High-risk AI system rules (Annex III); transparency obligations; regulatory sandboxes required per Member State |
| Phase 4 | August 2, 2027 | High-risk rules for AI in Annex I products (e.g., medical devices, machinery); GPAI models placed on market before Aug 2025 must comply |

Sources: EU AI Act implementation timeline, European Parliament implementation overview

What Are the Penalties for EU AI Act Non-Compliance?

Non-compliance carries fines that exceed even GDPR’s ceiling of EUR 20 million or 4% of turnover. The Act establishes a three-tier penalty system under Article 99:

| Violation Type | Maximum Fine | Turnover % |
|---|---|---|
| Prohibited AI practices (Art. 5) | EUR 35 million | 7% of global annual turnover |
| High-risk AI system requirements | EUR 15 million | 3% of global annual turnover |
| Incorrect or misleading information to authorities | EUR 7.5 million | 1% of global annual turnover |

For SMEs and startups, the fine is whichever is lower of the fixed amount or the turnover percentage — a notable concession compared to the standard rule of whichever is higher.

Bottom line: With penalties reaching EUR 35 million or 7% of global turnover, the EU AI Act carries the heaviest financial risk of any AI regulation in the world. Organizations that delay compliance until the August 2026 deadline are betting against their bottom line.

How Do You Classify AI Systems Under the EU AI Act?

Risk classification is the process of determining which obligations apply to a given AI system based on its intended purpose, deployment context, and potential for harm. It is the single most important determination under the EU AI Act because it dictates your entire compliance burden.

There are 4 risk categories under the EU AI Act: (1) prohibited, (2) high-risk, (3) limited risk, and (4) minimal risk. The following decision tree walks through the classification logic.

```mermaid
flowchart TD
    A["Does the AI system fall under\nArticle 5 prohibited practices?"] -->|Yes| B["PROHIBITED\n(Banned outright)"]
    A -->|No| C["Is the AI system listed in\nAnnex III, or is it a safety\ncomponent of an Annex I product?"]
    C -->|Yes, Annex III| D{"Does it pose a significant\nrisk of harm to health,\nsafety, or fundamental rights?"}
    D -->|Yes or Unclear| E["HIGH-RISK\n(Full compliance required)"]
    D -->|"No (must document\nassessment)"| F["NOT HIGH-RISK\n(Document your reasoning)"]
    C -->|Yes, Annex I product| E
    C -->|No| G["Does the AI system interact\ndirectly with people, generate\ncontent, or involve biometrics?"]
    G -->|Yes| H["LIMITED RISK\n(Transparency obligations)"]
    G -->|No| I["MINIMAL RISK\n(No specific obligations)"]

    style B fill:#d32f2f,color:#fff
    style E fill:#f57c00,color:#fff
    style H fill:#fbc02d,color:#000
    style I fill:#388e3c,color:#fff
    style F fill:#388e3c,color:#fff
```

Figure: EU AI Act risk classification decision tree based on Article 6 and Annex III.
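For teams that maintain a programmatic AI inventory, the same decision logic is easy to encode. Here is a minimal Python sketch of the tree above — the `AISystem` fields and the `classify` function are our own illustrative constructs, not terms from the regulation, and a real classification still requires documented legal judgment:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    uses_prohibited_practice: bool    # Art. 5 screen (e.g., social scoring)
    annex_iii_domain: str | None      # e.g., "employment"; None if not listed
    annex_i_safety_component: bool    # safety component of an Annex I product
    significant_risk: bool | None     # None = Art. 6(3) assessment not yet done
    interacts_with_people: bool       # chatbots, generated content, biometrics

def classify(system: AISystem) -> str:
    """Mirror the Article 6 / Annex III decision tree in the figure."""
    if system.uses_prohibited_practice:
        return "PROHIBITED"
    if system.annex_i_safety_component:
        return "HIGH-RISK"
    if system.annex_iii_domain is not None:
        # "Yes or Unclear" routes to high-risk: an unassessed system
        # defaults to the full compliance path.
        if system.significant_risk is False:
            return "NOT HIGH-RISK (document the exemption assessment)"
        return "HIGH-RISK"
    if system.interacts_with_people:
        return "LIMITED RISK (transparency obligations)"
    return "MINIMAL RISK"

print(classify(AISystem(
    name="resume-screener",
    uses_prohibited_practice=False,
    annex_iii_domain="employment",       # Annex III domain 4
    annex_i_safety_component=False,
    significant_risk=None,               # not yet assessed
    interacts_with_people=True,
)))  # -> HIGH-RISK
```

Note the safe default: an Annex III system with no documented significant-risk assessment is treated as high-risk, matching the "Yes or Unclear" branch in the tree.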

What Are the Eight High-Risk Domains in Annex III?

Annex III lists AI systems that are presumed high-risk in eight domains:

  1. Biometrics — remote biometric identification, biometric categorization, emotion recognition
  2. Critical infrastructure — safety components in water, gas, electricity, heating, road traffic, and digital infrastructure
  3. Education and vocational training — systems determining access to education, evaluating learning outcomes, or monitoring prohibited behavior during exams
  4. Employment — recruitment, screening, hiring decisions, task allocation, performance monitoring, promotion or termination decisions
  5. Access to essential services — credit scoring, insurance risk assessment, emergency dispatch prioritization
  6. Law enforcement — risk assessment of natural persons, polygraphs, evidence reliability evaluation, crime prediction
  7. Migration, asylum, and border control — risk assessments, document authenticity verification, asylum application examination
  8. Administration of justice — influencing judicial decisions, alternative dispute resolution outcomes

An Annex III system may be exempt from high-risk classification if it does not pose a significant risk of harm — but the provider must document this assessment and register it before placing the system on the market. Organizations dealing with shadow AI risks often discover undocumented AI systems that fall squarely into these categories.

What Does the 12-Point EU AI Act Compliance Checklist Cover?

The following checklist covers the 12 core obligations that providers and deployers of high-risk AI systems must meet by the August 2, 2026 deadline, plus GPAI obligations that are already in effect. Each item maps to a specific article in the regulation and includes the evidence you must collect.


1. AI System Inventory and Risk Classification (Art. 6 + Annex III)

What the regulation requires: You must identify every AI system you develop, deploy, or use and classify each one according to the Act’s risk tiers. Providers who determine an Annex III system is not high-risk must document that assessment and make it available to national competent authorities on request.

Evidence you need:

  • Complete inventory of all AI systems with purpose, scope, and deployment context
  • Risk classification decision for each system with documented rationale
  • Registration in the EU database for high-risk AI systems (Article 49)

Common gaps:

  • Organizations overlook third-party AI embedded in SaaS tools (e.g., an HR platform with AI-powered screening counts as deploying a high-risk system under domain 4)
  • No centralized registry — teams use AI tools independently with no visibility at the organizational level
  • Failure to document the “not high-risk” exemption reasoning for Annex III systems
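A machine-readable registry makes this evidence auditable. Below is a minimal sketch of one inventory entry as a Python dataclass; the schema and field names are our own illustration (the Act prescribes what goes into the EU database, not your internal format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    system_name: str
    vendor: str                      # "internal" or the third-party supplier
    purpose: str
    deployment_context: str
    risk_tier: str                   # prohibited / high-risk / limited / minimal
    classification_rationale: str    # required if claiming the Art. 6(3) exemption
    eu_database_id: str | None       # Art. 49 registration (high-risk only)
    last_reviewed: date = field(default_factory=date.today)

# Third-party AI embedded in SaaS tools belongs in the same registry:
entry = InventoryEntry(
    system_name="HR screening module",
    vendor="ExampleHR SaaS",         # hypothetical vendor
    purpose="Rank inbound job applications",
    deployment_context="EU-wide recruitment",
    risk_tier="high-risk",
    classification_rationale="Annex III domain 4 (employment)",
    eu_database_id=None,             # gap: register before Aug 2, 2026
)
```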

2. Risk Management System (Art. 9)

What the regulation requires: A continuous, iterative risk management system must be established, implemented, documented, and maintained for each high-risk AI system throughout its entire lifecycle. A risk management system refers to a documented, ongoing process that identifies, analyzes, evaluates, and mitigates AI-related risks across the full system lifecycle — not a one-time assessment.

Evidence you need:

  • Documented risk identification methodology
  • Risk analysis covering health, safety, and fundamental rights impacts
  • Mitigation measures for each identified risk, with residual risk assessment
  • Testing protocols that validate mitigation effectiveness
  • Records showing the system is reviewed and updated regularly

Common gaps:

  • Treating risk assessment as a one-time exercise rather than a living process
  • Missing residual risk acceptability determinations — Article 9 explicitly requires that residual risk be “judged to be acceptable”
  • No consideration of risks to vulnerable groups or persons under 18
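One way to make the "judged to be acceptable" requirement auditable is an explicit sign-off field in the risk register. A minimal sketch, with an assumed 1-5 scoring scale and an acceptability threshold that your own Article 9 methodology would need to define:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    affected_group: str               # include vulnerable groups and minors
    mitigation: str
    residual_severity: int            # 1 (low) .. 5 (critical), assumed scale
    residual_likelihood: int          # 1 .. 5
    residual_accepted_by: str | None  # named sign-off, or None if pending

def residual_score(r: Risk) -> int:
    return r.residual_severity * r.residual_likelihood

def open_items(register: list[Risk], threshold: int = 6) -> list[Risk]:
    """Risks that exceed the acceptability threshold or lack sign-off."""
    return [r for r in register
            if residual_score(r) > threshold or r.residual_accepted_by is None]

register = [Risk("False rejection of qualified candidates", "job applicants",
                 "human review of all rejections", 2, 2, None)]
print(len(open_items(register)))  # -> 1: residual sign-off still missing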

3. Training Data Governance (Art. 10)

What the regulation requires: High-risk AI systems trained on data must use training, validation, and testing datasets that meet specific quality criteria under Article 10. Data governance in the context of the EU AI Act refers to the documented practices, policies, and quality controls applied to training, validation, and testing datasets to ensure they are relevant, representative, and free of errors. Datasets must have appropriate statistical properties for the intended population.

Evidence you need:

  • Data governance and management practices documentation
  • Data collection process records, including annotation and labeling methodologies
  • Bias detection procedures and results
  • Data provenance records showing the origin and processing history of datasets
  • Evidence of representativeness analysis for the target population

Common gaps:

  • No documentation of data collection and preparation processes
  • Bias detection limited to protected characteristics explicitly listed, missing intersectional analysis
  • Training data not evaluated for geographic or contextual representativeness — Article 10 explicitly requires consideration of the “geographical, contextual, behavioural or functional setting” of intended use
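A crude representativeness screen can surface the third gap early, before a full statistical analysis. This sketch assumes simple group labels and population shares you supply; it is an illustration, not a substitute for the Article 10 analysis:

```python
from collections import Counter

def representativeness_gaps(dataset_groups: list[str],
                            population_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of the training data deviates from the
    target population by more than `tolerance` (absolute difference).
    A crude first screen, not a full statistical analysis."""
    n = len(dataset_groups)
    counts = Counter(dataset_groups)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / n
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical region labels in a credit-scoring training set:
print(representativeness_gaps(
    ["north"] * 800 + ["south"] * 200,
    {"north": 0.6, "south": 0.4},
))  # -> {'north': 0.2, 'south': -0.2}: both regions mis-weighted
```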

4. Technical Documentation (Art. 11 + Annex IV)

What the regulation requires: Before placing a high-risk AI system on the market, providers must prepare technical documentation per Article 11 covering the elements specified in Annex IV. This documentation must demonstrate compliance with all applicable requirements and enable competent authorities to assess conformity.

Evidence you need:

There are 9 mandatory documentation sections under Annex IV:

  1. General description (intended purpose, provider identity, system versions, hardware requirements)
  2. Development and design details (design choices, architecture, training methodology)
  3. Monitoring and control mechanisms
  4. Performance metrics (accuracy, robustness, cybersecurity measures)
  5. Risk management documentation (cross-reference to Art. 9 system)
  6. Lifecycle change records
  7. Applied harmonized standards or common specifications
  8. EU declaration of conformity
  9. Post-market monitoring plan

Common gaps:

  • Documentation created retroactively and missing design-phase decisions
  • No version control — Article 11 requires documentation to be “kept up to date”
  • SMEs unaware of the simplified documentation form the Commission is developing for smaller organizations
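A simple completeness check against the nine Annex IV sections can run in CI, so documentation gaps surface before a release rather than retroactively. The section keys below paraphrase the list above and are our own naming:

```python
# Section keys paraphrase the nine Annex IV items above (our own naming).
ANNEX_IV_SECTIONS = [
    "general_description", "development_and_design", "monitoring_and_control",
    "performance_metrics", "risk_management", "lifecycle_changes",
    "standards_applied", "declaration_of_conformity", "post_market_plan",
]

def documentation_gaps(docs: dict[str, str]) -> list[str]:
    """Return the Annex IV sections that are missing or empty."""
    return [s for s in ANNEX_IV_SECTIONS if not docs.get(s, "").strip()]

draft = {"general_description": "Resume screener v2, provider Example Corp..."}
print(documentation_gaps(draft))   # -> the eight sections still to write
```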

5. Transparency and User Information (Art. 13)

What the regulation requires: High-risk AI systems must be designed for sufficient transparency under Article 13, enabling deployers to interpret outputs and use the system appropriately. Transparency under Article 13 means providing clear, comprehensive instructions for use that enable deployers to understand the system’s capabilities, limitations, and appropriate conditions of use.

Evidence you need:

  • Instructions for use covering system capabilities, limitations, and intended purpose
  • Declared accuracy levels, robustness metrics, and known limitations
  • Description of human oversight measures (cross-reference to Art. 14)
  • Logging capability documentation
  • Information about computational and hardware requirements

Common gaps:

  • Instructions written for technical audiences only, not accessible to deployer organizations with varied technical literacy
  • No disclosure of known failure modes or performance degradation conditions
  • Accuracy metrics reported without confidence intervals or testing conditions
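The third gap is straightforward to close: report accuracy with a confidence interval alongside the test conditions. A minimal sketch using the Wilson score interval, a standard choice for binomial proportions (the 95% level is an assumption):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial accuracy estimate (z=1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical test run: 920 correct outputs out of 1,000 labeled cases.
low, high = wilson_interval(920, 1000)
print(f"accuracy 92.0%, 95% CI [{low:.1%}, {high:.1%}]")
# -> accuracy 92.0%, 95% CI [90.2%, 93.5%]
```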

Organizations building AI applications should also consider how guardrails can enforce transparency requirements at the application layer.


6. Human Oversight Mechanisms (Art. 14)

What the regulation requires: High-risk AI systems must be designed with appropriate human-machine interface tools under Article 14 so that natural persons can effectively oversee them during use. The intensity of oversight must be proportional to the risk level, degree of autonomy, and context of use.

Evidence you need:

  • Documented human oversight procedures for each high-risk system
  • Interface capabilities enabling operators to monitor, interpret, and override AI outputs
  • Safeguards against “automation bias” — the regulation explicitly names the risk of over-reliance on AI outputs
  • Evidence that oversight personnel are trained and have authority to override or disengage the system
  • Records of oversight being exercised (not just designed)

Common gaps:

  • Oversight designed on paper but never exercised — the regulation requires both design-time and use-time oversight
  • No mechanism for the human overseer to actually stop or override the system in real time
  • Automation bias not addressed — Article 14 specifically requires measures to prevent undue reliance on AI outputs, particularly for systems providing recommendations that humans may rubber-stamp

For autonomous AI agents, human oversight becomes especially complex — our guide to AI agent governance covers graduated autonomy controls that map directly to Article 14 requirements.
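In code, use-time oversight often takes the form of a confidence-gated review queue with an append-only log. The sketch below is illustrative — the threshold, routing, and log format are assumptions, and a production system would persist the log rather than print it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Decision:
    ai_recommendation: str
    confidence: float

def gated_decision(d: Decision,
                   human_review: Callable[[Decision], str],
                   threshold: float = 0.90) -> str:
    """Route low-confidence outputs to a human reviewer and log every
    outcome, so oversight is demonstrably exercised, not just designed."""
    if d.confidence >= threshold:
        action, source = d.ai_recommendation, "ai"
    else:
        action, source = human_review(d), "human"
    # The append-only log doubles as Art. 14 "oversight exercised" evidence.
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"action={action!r} source={source} conf={d.confidence:.2f}")
    return action

# Hypothetical reviewer: escalates a borderline rejection instead of approving.
result = gated_decision(
    Decision(ai_recommendation="reject_application", confidence=0.71),
    human_review=lambda d: "escalate_to_panel",
)
```

Forcing an active human choice on low-confidence outputs, rather than a default "approve" button, is one concrete countermeasure to the automation bias Article 14 names.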


7. Quality Management System (Art. 17)

What the regulation requires: Providers of high-risk AI systems must establish a quality management system under Article 17 documented as written policies, procedures, and instructions. A quality management system (QMS) under the EU AI Act is a set of documented policies, procedures, and instructions covering regulatory compliance strategy, design control, development quality assurance, data management, post-market monitoring, and accountability.

Evidence you need:

  • Written QMS policies and procedures covering all Article 17 requirements
  • Defined accountability framework with roles and responsibilities
  • Procedures for managing modifications to the high-risk AI system
  • Data management policies (acquisition, labeling, storage, retention)
  • Supplier and subcontractor management procedures
  • Resource management documentation (computing, personnel, data)
  • Incident logging and management procedures
  • Communication protocols with competent authorities

Common gaps:

  • QMS exists for other regulatory purposes (ISO 9001, SOC 2) but does not cover AI-specific requirements from Article 17
  • No procedures for managing modifications — the regulation requires explicit change management for high-risk systems
  • Accountability framework lists roles but not escalation paths or decision authority

8. GPAI Provider Obligations (Art. 53)

What the regulation requires: Providers of general-purpose AI models must comply with four core obligations under Article 53, which have been in effect since August 2, 2025. A general-purpose AI (GPAI) model is an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.

There are 4 core obligations under Article 53 for GPAI providers: (1) prepare and maintain technical documentation of training and testing processes, (2) provide downstream providers with sufficient information to comply with their own AI Act obligations, (3) adopt a copyright compliance policy respecting EU text-and-data-mining opt-outs, and (4) publish a detailed summary of training data content.

Evidence you need:

  • Technical documentation of model training and evaluation
  • Downstream provider information packages
  • Copyright compliance policy with text-and-data-mining opt-out procedures
  • Published training data content summary using the AI Office template
  • If relying on codes of practice: documentation of adherence

Common gaps:

  • Organizations using GPAI models as deployers — not providers — assume Article 53 does not affect them. It does: you need sufficient documentation from your GPAI provider to fulfill your own downstream obligations
  • Training data summary not published or too vague to meet the “sufficiently detailed” standard
  • Open-source model providers unaware that the documentation exemption does not apply to models with systemic risk classification

Key takeaway: Even if you only deploy GPAI models (not provide them), Article 53 affects you indirectly. You need your upstream providers to give you adequate documentation, or your own compliance obligations under Articles 9, 11, and 13 become impossible to fulfill.

Note: GPAI models placed on the market before August 2, 2025 have until August 2, 2027 to comply.


9. Post-Market Monitoring (Art. 72)

What the regulation requires: Providers must establish a post-market monitoring system under Article 72 that actively and systematically collects, documents, and analyzes data on AI system performance throughout its lifetime. The monitoring plan must be part of the technical documentation (Annex IV).

Evidence you need:

  • Documented post-market monitoring plan
  • Data collection mechanisms for ongoing performance measurement
  • Defined performance thresholds and degradation triggers
  • Analysis records showing monitoring is active (not just designed)
  • Integration with the risk management system (Art. 9) for feeding findings back into risk updates

Common gaps:

  • Monitoring plan exists on paper but no infrastructure to collect real-world performance data
  • No defined thresholds — teams monitor but have no criteria for when performance degradation triggers action
  • Monitoring data not fed back into risk management updates, creating a compliance gap between Articles 9 and 72

For organizations deploying LLMs, hallucination detection is a critical component of post-market monitoring — verifying output accuracy in production is exactly what Article 72 demands.
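Defined thresholds are what turn monitoring into evidence. Here is a minimal sketch of a rolling-accuracy monitor with an explicit degradation trigger; the window size and maximum allowed drop are illustrative assumptions your monitoring plan would set:

```python
from collections import deque

class DegradationMonitor:
    """Rolling accuracy monitor with an explicit action trigger."""

    def __init__(self, baseline: float, window: int = 500, max_drop: float = 0.03):
        self.baseline = baseline              # accuracy declared at release
        self.results: deque[bool] = deque(maxlen=window)
        self.max_drop = max_drop              # tolerated absolute drop

    def record(self, correct: bool) -> None:
        self.results.append(correct)
        if len(self.results) == self.results.maxlen and self.degraded():
            self.trigger()

    def degraded(self) -> bool:
        observed = sum(self.results) / len(self.results)
        return observed < self.baseline - self.max_drop

    def trigger(self) -> None:
        # In production: open an incident, notify the risk owner, and feed
        # the finding back into the Art. 9 risk management system.
        print("ALERT: rolling accuracy below threshold; open a risk review")
```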


10. Serious Incident Reporting (Art. 73)

What the regulation requires: Providers must report serious incidents to Member State market surveillance authorities under Article 73. A serious incident is defined as an incident that directly or indirectly leads to death, serious health harm, disruption of critical infrastructure, fundamental rights violations, or serious property or environmental damage.

Evidence you need:

  • Incident classification procedures aligned with Article 73 definitions
  • Reporting workflow capable of meeting mandatory timelines:
    • 15 days after establishing a causal link (or reasonable likelihood) for standard serious incidents
    • 10 days for incidents involving the death of a person
    • 2 days for widespread infringements or critical-infrastructure incidents under Article 3(49)(b)
    • Immediate notification in all cases, followed by a complete report
  • Records of all incident assessments, including near-misses evaluated but not reported
  • Contact information for relevant national market surveillance authorities

Common gaps:

  • No incident classification framework — teams cannot distinguish “serious incidents” from general bugs or performance issues
  • Reporting timelines not operationalized — the 2-day window for widespread incidents is extremely tight
  • No pre-established contact with market surveillance authorities in relevant Member States
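The reporting clocks are simple to operationalize once incidents are classified. A minimal sketch computing due dates from the Article 73 windows summarized above (the type labels are our own):

```python
from datetime import date, timedelta

# Article 73 reporting windows as summarized in the list above.
DEADLINES = {
    "serious_incident": timedelta(days=15),
    "death_of_a_person": timedelta(days=10),
    "widespread_or_critical_infrastructure": timedelta(days=2),
}

def report_due(causal_link_established: date, incident_type: str) -> date:
    """Latest date the report may reach the market surveillance authority,
    counted from when the causal link (or reasonable likelihood) was
    established."""
    return causal_link_established + DEADLINES[incident_type]

print(report_due(date(2026, 8, 10), "widespread_or_critical_infrastructure"))
# -> 2026-08-12: two days leaves no room for an ad-hoc process
```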

11. Conformity Assessment (Art. 43)

What the regulation requires: Before placing a high-risk AI system on the EU market, providers must complete a conformity assessment under Article 43. A conformity assessment is the process by which a provider demonstrates that a high-risk AI system meets all applicable requirements before placing it on the market. For most Annex III systems (categories 2-8), providers can self-assess using the internal control procedure in Annex VI. Systems involving remote biometric identification require third-party assessment by a notified body.

Evidence you need:

  • Completed conformity assessment (self-assessment or notified body assessment, depending on system type)
  • EU Declaration of Conformity
  • CE marking affixed to the system or its documentation
  • Registration in the EU database (Article 49)
  • Evidence that a new conformity assessment is triggered by substantial modifications

Common gaps:

  • Assuming all Annex III systems require expensive third-party assessment — most can self-assess
  • No trigger mechanism for re-assessment when systems are substantially modified
  • CE marking applied without completing the full conformity procedure
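A re-assessment trigger can be as simple as diffing the current system against a frozen snapshot of what was assessed. A sketch under the assumption that three fields capture "substantial" change — in practice your QMS procedure defines which changes count:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ConformityBaseline:
    """Snapshot of the system as it was assessed. Any change to these
    fields is treated here as potentially substantial."""
    model_version: str
    intended_purpose: str
    training_data_version: str

def needs_reassessment(assessed: ConformityBaseline,
                       current: ConformityBaseline) -> list[str]:
    """Fields that changed since the last conformity assessment."""
    return [k for k, v in asdict(assessed).items()
            if asdict(current)[k] != v]

changed = needs_reassessment(
    ConformityBaseline("2.1", "resume screening", "2026-01"),
    ConformityBaseline("3.0", "resume screening", "2026-06"),
)
if changed:
    print(f"re-run Art. 43 conformity assessment; changed: {changed}")
```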

12. AI Literacy and Training (Art. 4)

What the regulation requires: Article 4 requires that providers and deployers ensure sufficient AI literacy among staff and other persons dealing with AI systems on their behalf. This obligation applies to all AI systems — not just high-risk — and has been in effect since February 2, 2025.

AI literacy is defined in Article 3(56) as “skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”

Evidence you need:

  • AI literacy training program tailored to staff roles, technical background, and the AI systems they interact with
  • Training records (the Commission has clarified that internal records are sufficient — no certification required)
  • Periodic refresher schedule
  • Coverage of AI risks, limitations, and the organization’s governance policies

Common gaps:

  • No training program exists, or training is generic and not tailored to specific AI systems in use
  • Training limited to technical teams while non-technical staff (HR, legal, procurement) who deploy or interact with AI systems are excluded
  • No records kept — while no formal certification is required, organizations need evidence that training occurred

Bottom line: AI literacy is the one EU AI Act obligation that applies to every organization using any AI system, regardless of risk classification. It is also the easiest to start immediately — you do not need to wait for regulatory guidance to begin training your teams.

Note: While supervision and enforcement of AI literacy requirements formally begin on August 2, 2026, the obligation itself has applied since February 2, 2025. Organizations that wait until 2026 to start training risk being out of compliance retroactively.


EU AI Act Compliance Checklist Summary

The 12-point EU AI Act compliance checklist spans three categories of obligation: documentation and classification (items 1, 4, 8, 11, 12), technical safeguards (items 2, 3, 5, 6), and operational processes (items 7, 9, 10). Use this table as a quick-reference tracker for your compliance program.

| # | Requirement | Article | Applies From | Category | Status |
|---|---|---|---|---|---|
| 1 | AI system inventory and risk classification | Art. 6 + Annex III | Aug 2, 2026 | Documentation | |
| 2 | Risk management system | Art. 9 | Aug 2, 2026 | Technical | |
| 3 | Training data governance | Art. 10 | Aug 2, 2026 | Technical | |
| 4 | Technical documentation | Art. 11 + Annex IV | Aug 2, 2026 | Documentation | |
| 5 | Transparency and user information | Art. 13 | Aug 2, 2026 | Technical | |
| 6 | Human oversight mechanisms | Art. 14 | Aug 2, 2026 | Technical | |
| 7 | Quality management system | Art. 17 | Aug 2, 2026 | Operational | |
| 8 | GPAI provider obligations | Art. 53 | Aug 2, 2025 (in effect) | Documentation | |
| 9 | Post-market monitoring | Art. 72 | Aug 2, 2026 | Operational | |
| 10 | Serious incident reporting | Art. 73 | Aug 2, 2026 | Operational | |
| 11 | Conformity assessment | Art. 43 | Aug 2, 2026 | Documentation | |
| 12 | AI literacy and training | Art. 4 | Feb 2, 2025 (in effect) | Documentation | |

How Does the EU AI Act Compare to Other AI Regulations?

The EU AI Act does not exist in isolation. Organizations operating globally must navigate overlapping AI governance frameworks across jurisdictions. There are 3 major frameworks that intersect with the EU AI Act: the Colorado AI Act, the NIST AI RMF, and ISO 42001 (the international standard for AI management systems). TruthVouch’s Compliance Autopilot covers all three frameworks.

EU AI Act vs. Colorado AI Act (SB 24-205)

Colorado’s AI Act — the first comprehensive state-level AI regulation in the United States — shares significant overlap with the EU AI Act. Originally scheduled for February 2026, the enforcement date was pushed to June 30, 2026, just weeks before the EU deadline.

| Dimension | EU AI Act | Colorado AI Act (SB 24-205) |
|---|---|---|
| Scope | All AI systems affecting EU market | High-risk AI systems making "consequential decisions" |
| Risk focus | Health, safety, fundamental rights | Algorithmic discrimination against protected classes |
| Provider obligations | Technical documentation, QMS, conformity assessment | Reasonable care, bias testing disclosure, public statements |
| Deployer obligations | Risk management, human oversight, monitoring | Risk management policy, annual impact assessment, consumer disclosure |
| Penalties | Up to EUR 35M / 7% turnover | Deceptive trade practice (AG enforcement) |
| Affirmative defense | Compliance with harmonized standards | Discover-and-cure + recognized framework adherence |
| Effective date | Aug 2, 2026 (high-risk) | June 30, 2026 |

Overlap opportunity: An organization that builds its risk management system to satisfy EU AI Act Articles 9 and 17 will have covered most of what Colorado requires — the key addition is Colorado’s specific focus on algorithmic discrimination and its annual impact assessment requirement.

How Does NIST AI RMF Map to the EU AI Act?

The NIST AI RMF, released in January 2023, is a voluntary framework organized around four functions: Govern, Map, Measure, and Manage. While not legally binding, it serves as a recognized standard that can support EU AI Act compliance.

The Colorado AI Act explicitly provides an affirmative defense for organizations adhering to recognized risk management frameworks — making NIST AI RMF adoption a practical compliance strategy that spans both jurisdictions.

| NIST AI RMF Function | EU AI Act Mapping | Key Overlap |
|---|---|---|
| Govern | Art. 17 (QMS), Art. 4 (AI literacy) | Organizational governance structure and culture |
| Map | Art. 6 + Annex III (risk classification), Art. 9 (risk identification) | Context-setting and risk identification |
| Measure | Art. 9 (risk analysis), Art. 72 (post-market monitoring) | Quantitative and qualitative risk assessment |
| Manage | Art. 9 (risk mitigation), Art. 14 (human oversight), Art. 73 (incident reporting) | Risk treatment and response actions |

How Does TruthVouch Support EU AI Act Compliance?

TruthVouch’s Compliance Autopilot provides automated support across seven of the twelve checklist items above — specifically Articles 9 (risk management via compliance scanning and gap analysis), 11 (technical documentation via auto-generated model cards), 14 (human oversight assignment and exercise logging), 17 (quality management through obligation tracking and compliance scanning), 53 (GPAI obligations via regulatory database covering 50+ regulations including the EU AI Act), 72 (post-market monitoring through automated evidence collection connectors), and 73 (serious incident management through obligation tracking with deadline enforcement and evidence mapping).

Additional capabilities relevant to EU AI Act compliance include:

  • Risk Classification Wizard — interactive walkthrough of the Annex III classification logic to determine your AI system’s risk tier
  • Conformity Assessment Workflow — Art. 43 conformity assessment management
  • AI Literacy Training — training program management with completion tracking and reminders
  • Bias Audit — statistical computation covering three legal frameworks including the EU AI Act
  • Regulatory Q&A — RAG-based question answering against the full regulation text
  • GRC Integration — bidirectional sync with ServiceNow, Jira, and other GRC tools via OSCAL export
  • Board Reports — auto-generated board-ready AI risk reports

Organizations that need transparency labeling for AI-generated content should also consider AI content certification with C2PA, which directly addresses the Article 50 transparency obligations taking effect in August 2026.

These capabilities map directly to the compliance evidence requirements outlined in this checklist. For organizations that are just getting started, our AI governance product overview explains the full platform, and the first compliance assessment guide walks you through initial setup step by step.

Key takeaway: No tool automates EU AI Act compliance end-to-end. The regulation requires organizational processes, human judgment, and documented decision-making. What platforms like TruthVouch do is reduce the manual burden — automating evidence collection, flagging gaps, tracking deadlines, and generating documentation artifacts so your compliance team can focus on the decisions that require human expertise.

Assess your EU AI Act readiness in 5 minutes — our free assessment covers article-by-article gap analysis across eight key areas of the regulation and produces a personalized compliance roadmap. If you want to explore the broader TruthVouch platform first, start with the AI Advisor for maturity assessment or the Hallucination Shield for production-grade output verification.


Frequently Asked Questions

When does the EU AI Act take effect?

The EU AI Act entered into force on August 1, 2024, but obligations apply in phases. Prohibited practices were banned as of February 2, 2025. GPAI provider obligations took effect August 2, 2025. The biggest wave — high-risk AI system rules, transparency obligations, and regulatory sandbox requirements — takes effect on August 2, 2026.

What are the penalties for EU AI Act non-compliance?

The EU AI Act establishes three penalty tiers under Article 99: EUR 35 million or 7% of global turnover for prohibited practice violations, EUR 15 million or 3% for high-risk system non-compliance, and EUR 7.5 million or 1% for providing misleading information to authorities. For SMEs and startups, fines are calculated as whichever amount is lower.

How do I know if my AI system is high-risk?

Your AI system is classified as high-risk if it falls under one of the eight domains listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice) or if it is a safety component of a product covered by Annex I EU harmonization legislation. Annex III systems may claim exemption if they document that no significant risk of harm exists.

Does the EU AI Act apply to companies outside Europe?

Yes. Article 2 establishes extraterritorial scope modeled on GDPR. The regulation applies to any provider placing an AI system on the EU market and to any deployer located within the EU — regardless of where the provider is headquartered. If your AI system’s output is used by persons in the EU, you are likely subject to the Act.

What is the difference between the EU AI Act and the Colorado AI Act?

The EU AI Act is a comprehensive, risk-tiered regulation covering all AI systems with obligations scaled by risk level, focused on health, safety, and fundamental rights. The Colorado AI Act (SB 24-205) is narrower, focusing specifically on algorithmic discrimination in “consequential decisions.” Colorado takes effect June 30, 2026 and offers an affirmative defense for organizations using recognized frameworks like NIST AI RMF. Building your compliance program around the EU AI Act will cover most Colorado requirements, with the main addition being Colorado’s annual impact assessment.



Tags:

#EU AI Act #compliance #GPAI #high-risk AI #regulatory

Ready to build trust into your AI?

See how TruthVouch helps organizations govern AI, detect hallucinations, and build customer trust.

Not sure where to start? Take our free AI Maturity Assessment

Get your personalized report in 5 minutes — no credit card required