Overview
If you ship or deploy AI systems in the EU—whether provider, deployer, or importer—your duties now follow a phased schedule. The law, Regulation (EU) 2024/1689, entered into force on 1 August 2024. That start date set the clock for the phased bans, transparency duties, GPAI obligations, and high‑risk requirements set out in the official EUR‑Lex text of Regulation (EU) 2024/1689.
Since August 2025, GPAI obligations have applied to general‑purpose models. Most high‑risk Annex III duties apply from August 2026, per the timeline described by the European Commission’s AI Act policy explainer.
This guide combines EU AI Act news and a practical playbook. You will find what’s new this month, the deadlines that drive workstreams, how to classify your system, and the exact evidence to assemble for audits and CE marking. Where relevant, we cite official sources, including the EUR‑Lex text of Regulation (EU) 2024/1689, the European Commission’s AI Act policy explainer, and the EU AI Office.
The takeaway is simple. Set quarterly plans now to meet GPAI obligations already in force. De‑risk high‑risk conformity assessments that will need mature documentation, testing, and governance ahead of the 2026 and 2027 deadlines.
What changed this month
As of January 2026, expect intensified guidance, standards alignment, and national enforcement build‑out under the AI Act. The AI Office continues to coordinate codes of practice and support implementation. Member States are appointing competent authorities and aligning processes.
Your most urgent actions are clear. Monitor official guidance and align documentation with emerging standards. Scope your conformity assessment route if you offer or integrate high‑risk AI. Track updates from authoritative sources and adjust your roadmap so controls, disclosures, and evidence stay current.
Key documents and guidance
The AI Act is evolving through delegated acts, guidance, and standards work. These clarify practical expectations. Start here and check frequently:
- Regulation (EU) 2024/1689 on EUR‑Lex for the official legal text and phased application dates.
- European Commission AI Act page for governance structure, milestones, and FAQs.
- EU AI Office for implementation updates, codes of practice, and contacts for national authorities.
- NIST AI RMF 1.0 for risk practices you can reuse for EU evidence creation.
- ISO/IEC 42001 for an AI management system standard likely to integrate well with AI Act controls.
These references help you translate legal requirements into operational checklists. Assign an owner to track updates monthly and brief product, legal, and security leads.
Upcoming milestones and deadlines
With the law in force, near‑term and mid‑term dates drive resourcing and sequencing for both providers and deployers. Organize your program around these waypoints and their rationale:
- Since February 2025: bans on unacceptable‑risk practices apply (six months post‑entry into force). Providers and deployers must confirm no use cases fall into prohibited categories under the Act.
- Since August 2025: GPAI obligations apply (12 months post‑entry into force). They include training‑data summary transparency and safety practices for general‑purpose models, per the Commission’s AI Act explainer.
- August 2026: most remaining obligations apply (24 months post‑entry into force), including Annex III high‑risk requirements, conformity assessment, and CE marking. Finalize high‑risk documentation and supplier contracts well before this date.
- August 2027: the longer runway (36 months) ends for high‑risk AI embedded in products covered by Annex I, and GPAI models placed on the market before August 2025 must be brought into compliance. Begin notified‑body engagement well in advance to avoid bottlenecks.
Treat these dates as gates for documentation, testing, and supplier attestations. Build a quarter‑by‑quarter plan that back‑schedules from the above.
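As a concrete illustration, here is a minimal back‑scheduling sketch in Python. The regulatory gates reflect the dates above; the task names and lead times are hypothetical placeholders to adapt to your own program.

```python
from datetime import date, timedelta

# Regulatory gates (Annex III high-risk applies from August 2026;
# Annex I product-embedded systems follow in August 2027).
GATES = {
    "annex_iii_high_risk": date(2026, 8, 2),
    "annex_i_embedded": date(2027, 8, 2),
}

# Hypothetical internal lead times, in days before each gate.
LEAD_TIMES = {
    "freeze requirements": 270,
    "complete validation testing": 180,
    "assemble technical file": 120,
    "submit to notified body": 90,
}

def back_schedule(gate: date) -> dict:
    """Derive internal milestone dates by counting back from a gate."""
    return {task: gate - timedelta(days=days) for task, days in LEAD_TIMES.items()}

for name, gate in GATES.items():
    print(f"{name} (deadline {gate}):")
    for task, due in sorted(back_schedule(gate).items(), key=lambda kv: kv[1]):
        print(f"  {due}  {task}")
```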
EU AI Act scope and risk classification in practice
In 2026, scope determines whether you need CE marking, transparency notices, or minimal controls. It also defines who in your chain is responsible. The AI Act applies to providers, deployers, importers, distributors, and authorized representatives placing systems on the EU market.
Start by mapping each AI‑enabled feature to the risk taxonomy and your role. A provider of HR screening tools likely qualifies as high‑risk under Annex III. A generative assistant for internal knowledge search may face transparency and accuracy management but not Annex III obligations. Use the official text on EUR‑Lex to verify criteria. Then document your rationale and mitigations to defend in audits. Create a system inventory with role and risk assignments for each use case.
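To make the inventory concrete, here is a minimal sketch in Python. The field names, categories, and example systems are illustrative assumptions, not terms prescribed by the Act; adapt them to your own taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    # Illustrative schema for one inventory entry (not mandated by the Act).
    name: str
    intended_purpose: str
    role: str                               # e.g. "provider", "deployer", "importer"
    risk_tier: str                          # e.g. "prohibited", "high", "limited", "minimal"
    annex_iii_category: str | None = None   # set when risk_tier == "high"
    classification_rationale: str = ""
    mitigations: list = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="candidate-screening",
        intended_purpose="Rank applicants for interview shortlists",
        role="provider",
        risk_tier="high",
        annex_iii_category="employment",
        classification_rationale="Materially influences access to employment",
        mitigations=["human-in-the-loop review", "bias testing per release"],
    ),
    AISystemRecord(
        name="internal-kb-assistant",
        intended_purpose="Answer employee questions from internal docs",
        role="deployer",
        risk_tier="limited",
        classification_rationale="No Annex III use; transparency notice shown",
    ),
]
```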
Borderline examples and edge cases
Borderline cases can swing your obligations dramatically. Resolve them early with clear controls. Consider:
- HR chatbots that also rank candidates: a “helper” bot (limited‑risk) becomes high‑risk when it materially influences access to employment. Mitigate by restricting automated ranking or adding human‑in‑the‑loop checkpoints.
- Creditworthiness triage vs underwriting: a triage tool that flags files for human review may be limited‑risk. An automated underwriting decision engine is typically high‑risk. Mitigate by documenting explainability, bias controls, and override procedures.
- Driver monitoring vs automated driving: a vision model that alerts a driver (limited‑risk) differs from steering control (likely high‑risk within existing CE frameworks). Mitigate by hard‑coding safety boundaries and logging.
- Generative assistants in healthcare: drafting notes for clinicians involves transparency duties with strong accuracy checks. Diagnostic suggestions integrated into workflows are high‑risk. Mitigate with evidence‑based validation and human oversight.
In each case, define the decision boundary and log human interventions. Document why your classification holds under the Act. This becomes audit evidence and protects against reclassification during market surveillance.
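One way to make those interventions auditable is an append‑only, event‑level log. The sketch below assumes a simple hypothetical JSON schema; the field names are illustrative, not mandated by the Act.

```python
import json
from datetime import datetime, timezone

def log_intervention(system: str, case_id: str, model_output: str,
                     action: str, reviewer: str, rationale: str) -> str:
    """Serialize one human-intervention event as a JSON line for an append-only log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "case_id": case_id,
        "model_output": model_output,
        "action": action,          # e.g. "approved", "overridden", "escalated"
        "reviewer": reviewer,
        "rationale": rationale,
    }
    return json.dumps(event)

# Example: a recruiter overrides an automated ranking.
print(log_intervention(
    system="candidate-screening",
    case_id="REQ-1042",
    model_output="rank=12",
    action="overridden",
    reviewer="recruiter-7",
    rationale="Relevant experience not captured by parser",
))
```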
Key timelines and phased obligations
A phased schedule gives you time to build controls. It also compresses vendor and notified‑body bandwidth as key dates approach. Bans on prohibited uses have applied since February 2025. GPAI obligations have applied since August 2025. Most high‑risk requirements apply from August 2026, with a longer runway to August 2027 for systems embedded in Annex I products.
Providers should already be testing and assembling technical documentation. Deployers should update governance and procurement to reflect current duties.
Translate the timeline into program increments. Use Q1–Q2 2026 to consolidate policy, inventories, and the GPAI transparency that came due in 2025, and to drive Annex III conformity work toward the August 2026 deadline. Use Q3–Q4 2026 for supplier attestations, internal audits, and post‑market monitoring. Use 2026–2027 for conformity workups on high‑risk AI embedded in Annex I products. Confirm your obligations against the European Commission AI Act explainer. Slot in audit‑ready milestones. Align internal funding with this multi‑year plan.
Dates that drive workstreams
Anchor your workstreams to specific dates and outputs. Avoid last‑minute scrambles:
- February 2025: prohibited‑use attestation complete. Deprecate or redesign any non‑compliant features.
- August 2025: GPAI training‑data summary published. Model cards and safety evaluations available. Deployers update user transparency.
- August 2026: supplier contracts and procurement templates fully AI‑Act‑aligned. Internal red‑teaming and post‑market monitoring live. Annex III conformity assessments and CE marking complete before the application date.
- August 2027: conformity work finalized for high‑risk AI embedded in Annex I products. Notified‑body reviews complete. Incident reporting and periodic updates in place.
Work backward 6–9 months from each date. Lock requirements, run tests, and assemble your technical file. Give notified bodies time to review without blocking release schedules.
High-risk AI: conformity assessment and CE marking
If your system falls under Annex III (e.g., employment, credit, education, biometric identification), you need a conformity assessment before placing it on the EU market. Providers must build a technical file and implement risk management. They must ensure data governance, establish human oversight, and affix the CE marking upon successful assessment in accordance with the Act.
Choose your assessment route. Use self‑assessment only where permitted and standards are fully applied, or use a notified body. Scope the system and intended purpose. Assemble evidence iteratively. Early engagement reduces rework and supports smoother market entry. Start now by drafting your intended‑use statement and hazard analysis. These drive the rest of your documentation and testing plan.
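As a starting point, here is a skeletal intended‑use statement and hazard analysis expressed as plain Python data. The structure and entries are illustrative assumptions; align the final documents with the Act’s technical‑documentation requirements and any applicable standards.

```python
# Illustrative intended-use statement for a hypothetical credit-triage tool.
intended_use = {
    "system": "credit-triage",
    "intended_purpose": "Flag loan applications for human review",
    "users": ["credit analysts"],
    "operating_context": "EU retail lending, batch and interactive use",
    "out_of_scope": ["automated approval or denial of credit"],
}

# Starter hazard analysis: (hazard, potential harm, initial mitigation).
hazard_analysis = [
    ("biased triage scores", "unequal access to credit", "bias testing + thresholds"),
    ("distribution shift", "silent accuracy loss", "drift monitoring + retraining gate"),
    ("over-reliance by analysts", "rubber-stamped decisions", "random audit sampling"),
]
```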
Choosing a notified body
Demand for AI‑Act‑qualified notified bodies will outstrip capacity early. Selection and preparation matter. Prioritize bodies with sector experience aligned to your use case, established quality systems, and clear expectations for Annex III evidence.
- Match scope and sector: pick a body with demonstrated experience in your application domain and adjacent CE regimes.
- Check accreditation and scope: confirm authorization once Member State designations are published. Maintain a shortlist to mitigate bottlenecks.
- Plan throughput: assume multi‑month review cycles. Batch your evidence to reduce back‑and‑forth.
- Submit a strong dossier: include traceable requirements, validation protocols and results, data governance controls, and human oversight procedures.
Book an intake call as soon as your intended use and system boundaries are stable. Bring a draft technical file and test plan. This shortens the review and clarifies expectations.
GPAI obligations, systemic risk, and copyright transparency
Since August 2025, GPAI providers must meet transparency and safety duties under the AI Act. Expect to publish a training‑data summary that respects trade secrets. Document model capabilities and limitations. Take steps to mitigate systemic risks where applicable, consistent with the framework described by the EU AI Office.
The AI Office can designate certain GPAI models as posing systemic risk. Designation is based on capabilities and impact. It triggers enhanced testing, monitoring, and incident reporting.
For providers, the implication is clear. Adopt model cards, safety evaluations, and red‑team testing now. For deployers, build user transparency and misuse safeguards into your integration. Use the EU AI Office for updates on codes of practice that translate duties into concrete checklists. Inventory your GPAI models and keep an updated transparency package aligned to 2025 obligations.
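A lightweight way to keep that package current is a tracker that flags missing artifacts per model. The artifact names below are illustrative assumptions; map them to the actual disclosures required by the Act and any applicable code of practice.

```python
# Hypothetical transparency-package checklist for GPAI models.
REQUIRED_ARTIFACTS = [
    "training_data_summary",
    "model_card",
    "capabilities_and_limitations",
    "safety_evaluation_report",
]

# Current state per model: artifact name -> version/date (illustrative).
models = {
    "gpai-base-7b": {"training_data_summary": "v3 (2025-07)", "model_card": "v5"},
    "gpai-code-13b": {"training_data_summary": "v1 (2025-06)"},
}

for model, artifacts in models.items():
    missing = [a for a in REQUIRED_ARTIFACTS if a not in artifacts]
    status = "complete" if not missing else f"missing: {', '.join(missing)}"
    print(f"{model}: {status}")
```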
Open-source and research scenarios
Open‑source and research pathways ease some burdens. They do not remove them entirely. Where GPAI models are made available under open‑source licenses without commercial placement, the Act provides relief on certain obligations. Systemic‑risk designations and basic transparency can still apply.
If you steward an open‑source model, document your release scope, safety mitigations, and known limitations. If you integrate open‑source GPAI into a product, you inherit provider‑level duties for that integration. For research sandboxes, coordinate with national authorities early to clarify boundaries before market placement. Define what “make available” means in your context. Publish a lean but complete set of model disclosures.
Technical documentation: evidence, audit trails, and examples
Your technical file is the backbone of conformity and market surveillance. Providers of high‑risk systems need a complete, traceable evidence pack. It should cover intended purpose, risk management, data governance, performance and robustness testing, human oversight, cybersecurity, and post‑market monitoring.
Treat your file as a living artifact that mirrors product reality. Link requirements to tests. Record design changes. Log deviations with rationale. Use established formats from quality management and software assurance to keep it auditable. Start with a skeleton table of contents. Fill it continuously as features ship. Do not wait until the end of development.
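One way to start is a version‑controlled skeleton like the sketch below. The section names paraphrase common expectations rather than quoting the Act; check them against the documentation requirements for your system.

```python
# Starter technical-file skeleton, tracked in version control and filled
# continuously as features ship. Section names are illustrative.
TECH_FILE_TOC = {
    "1 General description": ["intended purpose", "system architecture", "versions"],
    "2 Risk management": ["hazard analysis", "mitigations", "residual risk"],
    "3 Data governance": ["dataset lineage", "representativeness", "bias testing"],
    "4 Verification & validation": ["test plans", "results", "robustness"],
    "5 Human oversight": ["intervention points", "training", "override authority"],
    "6 Cybersecurity": ["threat model", "hardening", "incident response"],
    "7 Post-market monitoring": ["monitoring plan", "incident reporting", "updates"],
}

for section, items in TECH_FILE_TOC.items():
    print(section)
    for item in items:
        print(f"  - {item}")
```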
Data governance, logging, and human oversight
Auditors will look for credible controls in three areas that directly reduce risk. Providers and deployers should demonstrate:
- Data governance: documented dataset lineage and representativeness checks. Bias testing and remediation. Access controls protecting training and evaluation data.
- Logging and monitoring: event‑level logs for inference, interventions, and errors. Automatic alerts for drift, performance degradation, and misuse. Retention schedules aligned with obligations.
- Human oversight: clear intervention points and escalation paths. User training and documented override authority. Oversight designed to be effective and timely.
Describe these controls in your technical file and back them with evidence. Include sample logs, bias test reports, training materials, and runbooks. Implement continuous monitoring that feeds incident response and post‑market reporting.
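To illustrate the monitoring piece, here is a minimal drift‑alert sketch. The baseline, threshold, and accuracy metric are placeholders; use metrics and tolerances validated for your own system.

```python
from statistics import mean

BASELINE_ACCURACY = 0.92      # from validation testing (illustrative)
ALERT_THRESHOLD = 0.05        # maximum tolerated drop before alerting

def check_drift(recent_outcomes: list) -> bool:
    """Return True if rolling accuracy falls too far below the validated baseline."""
    live_accuracy = mean(recent_outcomes)
    drop = BASELINE_ACCURACY - live_accuracy
    if drop > ALERT_THRESHOLD:
        print(f"ALERT: accuracy {live_accuracy:.2f} is {drop:.2f} below baseline")
        return True
    return False

# Example window: the last 20 human-verified predictions, True = correct.
window = [True] * 16 + [False] * 4   # 0.80 accuracy -> triggers an alert
check_drift(window)
```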
Extraterritorial scope, importers, and geoblocking trade-offs
Non‑EU providers are in scope if they place systems on the EU market or their outputs are used in the EU. If you sell via an importer or distributor, the Act assigns obligations across roles. You must coordinate to avoid gaps. Simply geoblocking EU traffic to avoid obligations may create legal and commercial risks if EU customers or intermediaries still access your system.
Weigh geoblocking carefully. You could lose EU opportunities and still face circumvention issues through resellers. A safer path is often to define EU‑specific deployment conditions, contract with an EU‑based importer, and meet the minimal obligations for your risk tier. Map how your products, models, and outputs can reach EU users—directly or indirectly. Assign role‑based responsibilities.
Interplay with GDPR, Data Act, NIS2, DSA/DMA, CRA, and liability
The AI Act doesn’t replace other regimes. It stacks with them. If you process personal data, GDPR duties on lawful basis, transparency, and data subject rights still apply. If you operate in a sector covered by NIS2, its duties on risk management and incident reporting also apply. Aligning controls across regimes reduces duplication and audit fatigue.
Map AI Act obligations to your existing privacy, security, and product safety controls. DPIAs can feed AI Act risk analysis. Your SOC’s incident management can cover AI incidents. Supplier due diligence can absorb AI‑specific attestations. Use the primary texts—GDPR on EUR‑Lex, NIS2 on EUR‑Lex, and the Data Act on EUR‑Lex—to align terminology and reporting triggers. Create a cross‑regulatory control matrix that your audit and product teams maintain together.
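A simple way to maintain that matrix is structured data your audit and product teams can query. The mappings below are illustrative assumptions to be validated by counsel, not an official crosswalk.

```python
# One control row, many regimes: reuse a single control as evidence
# across the AI Act, GDPR, and NIS2 where the mapping holds.
CONTROL_MATRIX = [
    {
        "control": "incident response runbook",
        "ai_act": "serious-incident reporting",
        "gdpr": "personal data breach notification",
        "nis2": "significant incident reporting",
        "owner": "security",
    },
    {
        "control": "impact assessment template",
        "ai_act": "risk management / fundamental rights impact assessment where required",
        "gdpr": "DPIA (Art. 35)",
        "nis2": None,
        "owner": "privacy",
    },
]

# Surface controls that satisfy more than one regime (reuse candidates).
for row in CONTROL_MATRIX:
    regimes = [r for r in ("ai_act", "gdpr", "nis2") if row.get(r)]
    print(f"{row['control']}: covers {', '.join(regimes)} (owner: {row['owner']})")
```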
Multi-framework alignment (NIST AI RMF, ISO/IEC 42001)
If you work globally, reuse frameworks to avoid parallel systems. NIST AI RMF’s Govern–Map–Measure–Manage practices map cleanly to AI Act risk management, testing, and monitoring. ISO/IEC 42001 provides an AI management system structure that supports evidence creation and continual improvement.
Start by mapping AI Act articles to RMF functions and 42001 clauses. Assign owners and artifacts for each. The result is a single source of truth for controls serving EU, US, and ISO expectations. Pick one backbone (RMF or 42001) and layer AI Act‑specific evidence on top.
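For illustration, a first‑pass mapping might look like the sketch below. The pairings are assumptions to validate against the actual texts, not an official crosswalk.

```python
# Illustrative crosswalk from AI Act obligation areas to NIST AI RMF
# functions and ISO/IEC 42001 clause themes.
FRAMEWORK_MAP = {
    "risk management system": {"rmf": "Govern, Map", "iso42001": "planning"},
    "data governance": {"rmf": "Map, Measure", "iso42001": "support/operation"},
    "testing and validation": {"rmf": "Measure", "iso42001": "performance evaluation"},
    "post-market monitoring": {"rmf": "Manage", "iso42001": "improvement"},
}

for obligation, refs in FRAMEWORK_MAP.items():
    print(f"{obligation}: RMF [{refs['rmf']}] / 42001 [{refs['iso42001']}]")
```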
Sector playbooks: HR, healthcare, finance, mobility, and public sector
Sector context matters because similar models carry different risk depending on use. Providers and deployers should use sector‑specific controls and documentation patterns. This helps meet Annex III and transparency obligations efficiently.
- HR: automated screening and ranking are typically high‑risk. Document explainability for candidates, fairness metrics, and human‑in‑the‑loop decisions. Keep rejection rationales and override logs.
- Healthcare: clinical decision support leans high‑risk. Require clinical validation and robustness testing on edge cases. Provide clear clinician override pathways. Maintain post‑market surveillance for safety signals.
- Finance: creditworthiness and fraud detection can be high‑risk. Show bias mitigation and adverse action notices. Stress test against distribution shifts, and back your testing with scenario analysis.
- Mobility: driver monitoring may be limited‑risk with strong transparency. Automated decision components trend high‑risk. Log interventions and safety boundaries. Integrate cybersecurity hardening.
- Public sector: transparency and accountability pressures are higher. Publish clear notices. Maintain appeal avenues. Ensure human oversight is real and documented.
Use these patterns to draft intended use, risk controls, and evidence lists tailored to your application. Embed sector‑specific KPIs into your monitoring dashboards.
Biometrics and public sector use
Biometric identification and categorization are among the most sensitive applications under the Act. Providers must meet strict data governance, accuracy, and bias controls. Deployers must justify necessity, proportionality, and safeguards—especially in public spaces.
For public bodies, add heightened transparency. Provide clear notices to affected persons, auditability, and independent oversight. When in doubt, default to constrained use, stronger human review, and tighter logging. Run a focused risk and rights impact assessment before any biometric deployment. Plan for periodic external review.
Enforcement and penalties: investigations, fines, remediation
Enforcement will escalate as the AI Office and Member State authorities ramp up. Fines scale with breach type and global turnover. The Act sets upper limits: up to 35 million EUR or 7% of worldwide annual turnover for certain prohibited practices, up to 15 million EUR or 3% for most other obligations, and up to 7.5 million EUR or 1% for supplying incorrect information to authorities, as specified in Regulation (EU) 2024/1689 on EUR‑Lex. Providers, deployers, importers, and distributors can all be penalized based on their role.
Investigations will focus first on prohibited uses, systemic‑risk GPAI, and high‑impact deployments lacking documentation. Remediation can influence outcomes and reduce penalties. Rapid feature shutdowns, corrective updates, and strengthened oversight help. Create an incident and cooperation plan that aligns product, legal, and comms. Be ready to respond to supervisory requests.
Member State authorities and national variations
Each Member State designates competent authorities to supervise the AI Act. The AI Office coordinates cross‑border matters and systemic‑risk designations. National variations may appear in resourcing, sector focus, and guidance cadence.
Identify your primary authority based on your establishment or where your system is deployed. Keep a directory of contacts. Use the European Commission AI Act page and the EU AI Office to track designations and guidance. Assign a regulatory liaison who engages proactively and subscribes to national updates.
Procurement and vendor due diligence for AI systems
Buyers, especially deployers, carry obligations. Procurement must shift from generic security and privacy checklists to AI‑specific due diligence. Vendors should be ready with technical documentation extracts, transparency artifacts, and warranties aligned to the Act.
Use a concise questionnaire to reduce back‑and‑forth. Lock requirements into contract clauses and SLAs. After award, verify evidence during onboarding and at renewal. Publish standard terms and a vendor pack so suppliers can self‑serve.
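A minimal sketch of such a questionnaire follows, with illustrative questions and a toy scoring helper; neither is a legally vetted checklist.

```python
# Illustrative vendor due-diligence questions (adapt and have counsel review).
VENDOR_QUESTIONS = [
    ("classification", "What is the system's AI Act risk tier and rationale?"),
    ("documentation", "Can you share technical-file extracts under NDA?"),
    ("transparency", "Do you publish a model card and training-data summary?"),
    ("oversight", "What human-oversight hooks does the product expose?"),
    ("monitoring", "How are incidents and performance drift reported to us?"),
    ("warranty", "Will you warrant AI Act compliance and flow down updates?"),
]

def score_response(answers: dict) -> float:
    """Fraction of questions answered satisfactorily (toy scoring)."""
    return sum(answers.get(key, False) for key, _ in VENDOR_QUESTIONS) / len(VENDOR_QUESTIONS)

# Example: a vendor satisfies three of six questions.
print(score_response({"classification": True, "documentation": True, "oversight": True}))
```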
Public-sector requirements
Public bodies need stronger transparency, auditability, and accountability from vendors. In RFPs, request:
- Intended‑use statement and Annex III classification rationale.
- Model cards, performance metrics, and bias testing results relevant to the population served.
- Human oversight design, escalation and appeals processes, and user‑facing transparency materials.
- Post‑market monitoring plan, incident reporting commitments, and service‑level remedies for safety issues.
Include warranties of compliance and rights to audit. Require updates as standards and guidance evolve. Publish template attestations so bidders align early and reduce procurement cycle time.
SME and startup pathways: sandboxes, grants, and phased obligations
Smaller providers can meet obligations without stalling growth. Use regulatory sandboxes, open standards, and phased—yet disciplined—documentation. Sandboxes operated by national authorities offer supervised testing and compliance guidance before market placement. Grants and technical assistance can offset costs.
Plan lean but complete documentation. Reuse open frameworks (NIST RMF, ISO/IEC 42001). Focus on the riskiest features first. If you integrate third‑party GPAI, push suppliers for disclosures and secure rights to share evidence with authorities. Contact your national authority about sandbox windows. Schedule internal “evidence sprints” that produce audit‑ready artifacts each quarter.
Building the team: roles, training, and certifications
Compliance is a cross‑functional effort that must be embedded in product development. Providers should define an AI compliance owner and engineering leads for safety and reliability. Add a data governance steward and a product counsel aligned to EU obligations. Deployers need a business owner, risk manager, and technical lead who can implement oversight and monitoring.
Invest in upskilling. Train engineers on risk and testing. Train product managers on intended‑use and change control. Train legal teams on documentation and incident reporting. Pair policy literacy with hands‑on exercises. Run a red‑team on an internal model and convert results into controls and evidence. Publish a RACI for AI Act obligations. Enroll key staff in targeted training aligned to your sector.
KPIs and compliance ROI
Measuring what matters keeps your program on track and proves value to the business. Consider KPIs across build, monitor, and assure:
- Build: percentage of AI systems with documented intended use and risk classification. Coverage of test plans and executed validations.
- Monitor: time‑to‑detect drift or safety issues. Rate of effective human interventions. Incident response time and closure.
- Assure: technical documentation completeness score. Audit findings per release. Supplier evidence coverage and renewal.
- Outcomes: reduction in harmful errors. Fairness metrics improvement. On‑time market entry with no major rework from assessments.
Set quarterly targets and tie them to release gates, not just policy checklists. Add these KPIs to your product and risk dashboards so leadership can steer resources where they deliver the most compliance impact.
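As one way to wire those KPIs into release gates, consider the sketch below. The metric names mirror the build/monitor/assure framing above; the targets are placeholders to calibrate per quarter.

```python
# Quarterly KPI targets tied to a release gate (values are placeholders).
KPI_TARGETS = {
    "pct_systems_classified": 1.00,   # build
    "test_plan_coverage": 0.90,       # build
    "median_hours_to_detect": 24,     # monitor (lower is better)
    "tech_file_completeness": 0.95,   # assure
}

LOWER_IS_BETTER = {"median_hours_to_detect"}

def gate_release(actuals: dict) -> bool:
    """Block release if any KPI misses its quarterly target."""
    ok = True
    for kpi, target in KPI_TARGETS.items():
        value = actuals.get(kpi)
        passed = value is not None and (
            value <= target if kpi in LOWER_IS_BETTER else value >= target
        )
        print(f"{kpi}: {value} (target {target}) -> {'pass' if passed else 'FAIL'}")
        ok = ok and passed
    return ok

gate_release({
    "pct_systems_classified": 1.0,
    "test_plan_coverage": 0.93,
    "median_hours_to_detect": 30,   # misses the 24-hour target -> gate fails
    "tech_file_completeness": 0.97,
})
```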