EU AI Act Compliance: A Practical Guide for Enterprise Teams
The EU AI Act is a risk-based regulation that classifies AI systems into four tiers — prohibited, high-risk, limited risk, and minimal risk — and imposes compliance obligations proportional to risk level. High-risk AI systems require mandatory technical documentation, human oversight mechanisms, conformity assessments, and registration in the EU database before deployment. The Act entered into force in August 2024; obligations began phasing in from February 2025, with full high-risk system requirements applying from August 2026.
The EU AI Act is now a compliance reality for enterprises operating in or selling to European markets. Unlike many technology regulations that are primarily data-protection focused, the AI Act regulates AI systems themselves — their development, deployment, and use — creating obligations for a broader range of enterprise teams than GDPR touched. CISOs, CTOs, legal counsel, and DPOs all have roles in compliance, and the Act's requirements span technical documentation, governance processes, and ongoing monitoring obligations that must be built into operations, not added as a compliance layer.
What Does the EU AI Act Require?
The AI Act establishes a risk-based framework with four classification tiers. Understanding these tiers is the foundation of compliance, because the obligations that apply to your AI systems depend entirely on their classification.
Prohibited AI practices. A small category of AI applications is banned entirely: social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions), AI that exploits vulnerabilities to manipulate behaviour, and biometric categorisation that infers sensitive characteristics. These prohibitions applied from February 2025. Enterprises should have already reviewed their AI portfolio for prohibited applications.
High-risk AI systems. This category carries the Act's most significant obligations. High-risk systems include AI used in employment (hiring, performance evaluation), credit scoring, access to education, critical infrastructure management, biometric identification, and AI used in the administration of justice. The list, set out in Annex III of the regulation, is exhaustive, but applying it requires case-by-case assessment because enterprise AI systems may fall within multiple categories.
Limited risk AI. Systems with specific interaction risks — primarily chatbots and AI-generated content — face transparency requirements: users must be informed they are interacting with an AI, and AI-generated or manipulated content must be disclosed as such. These requirements are relatively lightweight but must be incorporated into user interface design.
Minimal risk AI. Spam filters, AI-powered search, and similar systems carry no mandatory obligations under the Act, though voluntary codes of conduct may apply.
Which AI Systems Are Classified as High-Risk?
For most regulated enterprises, the highest-stakes classification question concerns enterprise AI systems that support consequential decisions. The Act's high-risk classification covers AI that:
- Evaluates individuals for employment decisions (screening CVs, scoring candidates, monitoring employee performance)
- Influences access to credit or financial services
- Manages or monitors critical infrastructure (energy, water, transport networks)
- Processes biometric data for identity verification
- Supports decisions about access to education or vocational training
- Assists law enforcement or judicial decision-making
Crucially, a knowledge retrieval system that surfaces information for human review without directly making decisions is less likely to be classified as high-risk than one whose outputs directly determine an outcome. The distinction between AI that informs decisions and AI that makes decisions matters significantly for classification. Enterprise teams should work through this distinction carefully for each AI system in their portfolio.
Step 1: Audit Your Existing AI Systems
Compliance begins with a complete inventory of AI systems in use across the organisation. This inventory is harder to compile than it appears because AI has been deployed across enterprise functions in ways that are often not centrally tracked. Shadow AI — tools adopted by individual teams without IT or security approval — complicates the picture further.
The audit should cover: AI systems purchased as standalone products or embedded in SaaS platforms (Microsoft Copilot, Salesforce Einstein, etc.); AI models developed internally; and AI components embedded in larger systems (fraud detection models embedded in transaction processing, for example). For each system, document: the vendor, the primary function, the data inputs, the outputs, and how outputs are used in decision-making processes.
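To make the inventory concrete, the sketch below shows one way to structure an inventory record in Python. The field names and the `DecisionRole` categories are illustrative assumptions for this article, not terminology defined by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class DecisionRole(Enum):
    """How the system's outputs feed decisions (illustrative categories)."""
    INFORMS_HUMAN = "informs_human"        # a human reviews before acting
    DETERMINES_OUTCOME = "determines"      # output directly drives the outcome
    NO_DECISION_IMPACT = "none"            # e.g. search or summarisation only


@dataclass
class AISystemRecord:
    """One entry in the enterprise AI inventory (the Step 1 fields)."""
    name: str
    vendor: str                            # or "internal" for in-house models
    primary_function: str
    data_inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    decision_role: DecisionRole = DecisionRole.INFORMS_HUMAN
    embedded_in: str | None = None         # parent system, for embedded components


# Example: an AI component embedded in a larger transaction-processing system
fraud_model = AISystemRecord(
    name="fraud-detector-v3",
    vendor="internal",
    primary_function="transaction fraud scoring",
    data_inputs=["transaction history", "device fingerprint"],
    outputs=["fraud risk score 0-100"],
    decision_role=DecisionRole.DETERMINES_OUTCOME,
    embedded_in="payments platform",
)
```

Capturing the decision role at inventory time pays off in Step 2, where it is one of the two inputs to risk classification.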
This audit also creates the foundation for GDPR alignment and DORA risk assessments, since the same systems that must be assessed under the AI Act are typically the same systems that create data processing and operational resilience obligations under those frameworks.
Step 2: Classify Risk Level for Each System
Apply the Act's risk classification to each system in your inventory. The classification process should involve legal counsel, the DPO, and the technical team responsible for each system. For borderline cases, document the reasoning — the rationale behind classification decisions becomes part of the technical documentation the Act requires.
Classification is not always straightforward. An internal knowledge assistant that helps HR teams access policy documents might seem minimal risk, but if its outputs influence benefit entitlement decisions, the classification question becomes more complex. The decision framework should consider: does the system's output directly determine an outcome, or does a human make the final determination? If a human is genuinely reviewing and evaluating the AI output before acting, the classification is more likely to be limited or minimal risk.
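That decision framework can be expressed as a screening heuristic, sketched below. The category names paraphrase the Act's high-risk areas, and the function is a triage aid for surfacing borderline cases to legal counsel, not a legal determination.

```python
# Illustrative categories paraphrased from the Act's high-risk areas; the
# exact legal scope requires case-by-case review with counsel.
HIGH_RISK_CATEGORIES = {
    "employment", "credit_scoring", "education_access",
    "critical_infrastructure", "biometric_identification",
    "law_enforcement", "justice_administration",
}


def classify_risk_tier(use_case: str, human_makes_final_decision: bool,
                       is_chatbot_or_genai: bool = False) -> str:
    """Assign a provisional risk tier (screening heuristic, not legal advice)."""
    if use_case in HIGH_RISK_CATEGORIES and not human_makes_final_decision:
        return "high-risk"
    if use_case in HIGH_RISK_CATEGORIES:
        # Borderline: document the reasoning behind the classification decision.
        return "needs-legal-review"
    if is_chatbot_or_genai:
        return "limited-risk"   # transparency obligations apply
    return "minimal-risk"


# The HR knowledge assistant from the example above: a borderline case
print(classify_risk_tier("employment", human_makes_final_decision=True))
# -> needs-legal-review
```

The "needs-legal-review" branch matters as much as the others: routing borderline systems to documented review is exactly the rationale trail the Act's technical documentation expects.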
Step 3: Implement Mandatory Governance Requirements
For high-risk AI systems, the Act mandates specific governance mechanisms:
Risk management system. A documented, ongoing process for identifying, evaluating, and mitigating risks associated with the AI system. This is not a one-time assessment but a continuous programme with defined review cycles. The risk management system must cover risks to fundamental rights, not merely operational risks.
Human oversight mechanisms. High-risk AI must include technical and procedural means to enable humans to monitor, understand, and intervene in AI outputs. For enterprise knowledge systems, this means ensuring that AI-generated outputs are clearly presented as AI-generated, that source citations enable verification, and that processes exist to flag and investigate outputs that may be incorrect. Glass Box AI architectures that surface source citations for every output are naturally aligned with this requirement.
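As one illustration of what those mechanisms can look like in a knowledge system, the sketch below packages an AI output with source citations and a flag-for-review path. The class and field names are hypothetical, not any specific product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class CitedAnswer:
    """An AI output packaged for human oversight: labelled, cited, flaggable."""
    answer_text: str
    source_citations: list[str]      # documents a reviewer can verify against
    generated_by_ai: bool = True     # surfaced in the UI per transparency rules
    flagged: bool = False
    flag_reason: str | None = None

    def flag_for_review(self, reason: str) -> None:
        """Route a suspect output into the investigation process."""
        self.flagged = True
        self.flag_reason = reason
        # A real system would also write to an organisation-controlled audit log:
        print(f"[{datetime.now(timezone.utc).isoformat()}] flagged: {reason}")


answer = CitedAnswer(
    answer_text="Employees accrue 25 days of annual leave.",
    source_citations=["hr-policy-2025.pdf#section-4.2"],
)
answer.flag_for_review("conflicts with the 2026 policy update")
```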
Accuracy and robustness standards. High-risk systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. The Act does not specify numerical thresholds — these must be defined and documented by the operator based on the risk profile of the application. Automated quality scoring and output monitoring can support this requirement when configured to detect and flag significant accuracy degradation.
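A minimal sketch of such monitoring, assuming a rolling window of quality scores compared against an operator-defined threshold; both values below are placeholders that would be set and documented per system.

```python
from collections import deque


class AccuracyMonitor:
    """Rolling accuracy check that flags degradation below a documented threshold."""

    def __init__(self, threshold: float = 0.90, window: int = 500):
        self.threshold = threshold          # operator-defined, documented per system
        self.scores = deque(maxlen=window)  # rolling window of quality scores

    def record(self, quality_score: float) -> None:
        """Ingest one automated quality score from production traffic."""
        self.scores.append(quality_score)

    def degraded(self) -> bool:
        """True when rolling mean accuracy falls below the documented threshold."""
        if len(self.scores) < self.scores.maxlen:
            return False                    # wait for a full window before alerting
        return sum(self.scores) / len(self.scores) < self.threshold


monitor = AccuracyMonitor(threshold=0.90, window=500)
```

Whatever form the monitor takes, the threshold itself belongs in the technical documentation (Step 4), since the Act expects the operator to justify the chosen accuracy level.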
Step 4: Build the Technical Documentation
High-risk AI systems require technical documentation that must be produced before deployment and updated throughout the system's lifecycle. The documentation requirements are detailed and specific (a machine-readable skeleton follows the list):
- General description of the AI system and its intended purpose
- System architecture description, including components and data flows
- Training, validation, and testing data specifications
- Pre-determined changes the system can make autonomously
- Risk management documentation
- Measures for human oversight implementation
- Performance metrics and accuracy specifications
- Cybersecurity measures
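One way to keep this package maintainable is to hold it in a machine-readable skeleton whose keys mirror the list above. The structure below, a plain Python dict serialisable to JSON or YAML, is an illustrative convention, not a format the Act prescribes.

```python
import json

# Skeleton mirroring the documentation items listed above; every value is a
# placeholder the responsible team would fill in per high-risk system.
technical_documentation = {
    "system": {
        "general_description": "...",
        "intended_purpose": "...",
    },
    "architecture": {
        "components": ["..."],
        "data_flows": ["..."],
    },
    "data": {
        "training_spec": "...",
        "validation_spec": "...",
        "testing_spec": "...",
    },
    "autonomous_changes": ["pre-determined changes the system can make"],
    "risk_management": {"plan_ref": "...", "review_cycle": "quarterly"},
    "human_oversight": {"measures": ["..."]},
    "performance": {"metrics": {"accuracy": None}, "thresholds": {}},
    "cybersecurity": {"measures": ["..."]},
}

print(json.dumps(technical_documentation, indent=2))
```

Keeping the skeleton under version control also gives you the lifecycle update trail the Act expects when a system changes materially.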
For private AI systems deployed on-premise with citation-backed retrieval architectures, the documentation task is significantly simpler than for cloud AI deployments. The dependency chain is shorter, the data flows are more contained, and the auditability mechanisms (source citations, query logs under organisational control) are directly documentable. This is a concrete compliance advantage of sovereign AI deployment — as covered in moving fast in regulated industries, architecture decisions made for operational reasons often create compliance advantages.
Step 5: Register High-Risk Systems in the EU Database
High-risk AI systems must be registered in the EU's AI database before deployment. The registration requirement applies to providers, meaning those who develop an AI system and place it on the market or put it into service under their own name, rather than to every enterprise that uses an AI system. However, enterprises that deploy AI systems for their own internal use — not as a product sold to others — must determine whether they are acting as "providers" under the Act's definition.
In most cases, enterprises using commercially available AI platforms are deployers (the Act's term for users), not providers, and registration obligations fall on the vendor. However, enterprises that develop AI systems internally, fine-tune base models for specific applications, or deploy AI systems for use by third parties (including clients or partners) are likely providers and face registration obligations.
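The screening logic in the two paragraphs above reduces to a short triage test. The three criteria below are taken from this section, and a positive result signals the need for legal analysis rather than settling the question.

```python
def likely_provider(develops_internally: bool,
                    fine_tunes_base_models: bool,
                    deploys_for_third_parties: bool) -> bool:
    """Screening test for provider obligations (triage only, not legal advice)."""
    return (develops_internally
            or fine_tunes_base_models
            or deploys_for_third_parties)


# An enterprise that only uses a commercially purchased platform:
print(likely_provider(False, False, False))  # False -> deployer obligations apply
```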
Step 6: Ongoing Monitoring and Incident Reporting
Compliance is not satisfied at deployment. The Act requires ongoing monitoring of high-risk AI systems with documented post-market monitoring plans. This includes:
- Active collection of data on system performance in production
- Periodic review of the risk management system
- Reporting of serious incidents to market surveillance authorities
- Technical documentation updates when the system changes materially
Serious incidents — including AI system failures that cause death, serious harm, or property damage, and violations of fundamental rights — must be reported to national authorities within specific timeframes. Enterprises should incorporate AI incident response into their broader incident management processes, with clear escalation paths and reporting timelines.
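As a sketch of how reporting deadlines might be tracked inside an incident management process: the day counts below are illustrative placeholders (the Act sets timeframes that vary with incident severity), so verify the applicable deadlines with counsel before relying on them.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative deadlines only; the Act's reporting timeframes vary by incident
# severity, so confirm the current values before encoding them in tooling.
REPORTING_DEADLINES = {
    "death_or_serious_harm": timedelta(days=10),
    "critical_infrastructure_disruption": timedelta(days=2),
    "other_serious_incident": timedelta(days=15),
}


@dataclass
class SeriousIncident:
    description: str
    category: str                 # key into REPORTING_DEADLINES
    became_aware_at: datetime     # the clock runs from awareness, not occurrence

    def report_due_by(self) -> datetime:
        """Deadline for notifying the market surveillance authority."""
        return self.became_aware_at + REPORTING_DEADLINES[self.category]


incident = SeriousIncident(
    description="Model outage caused incorrect eligibility determinations",
    category="other_serious_incident",
    became_aware_at=datetime.now(timezone.utc),
)
print(incident.report_due_by())
```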
How the EU AI Act Intersects with GDPR, DORA, and NIS2
The AI Act does not replace GDPR, DORA, or NIS2. It operates alongside them, creating a compliance landscape where enterprise AI must satisfy multiple overlapping frameworks.
GDPR. AI systems that process personal data must satisfy GDPR obligations simultaneously. The AI Act's documentation requirements overlap with but do not replace GDPR's Data Protection Impact Assessment requirements for high-risk processing. Where AI systems process personal data for automated decision-making that significantly affects individuals, GDPR's Article 22 requirements also apply. Map both frameworks against each AI system to identify overlapping and additive obligations.
DORA. For financial entities, the AI Act's requirements for high-risk systems interact with DORA's ICT risk management obligations. AI systems used in trading, credit assessment, or customer service in financial institutions likely face requirements under both frameworks. DORA's third-party risk management requirements add scrutiny to AI vendor relationships that the AI Act's conformity requirements also cover.
NIS2. Critical infrastructure operators face NIS2 cybersecurity obligations that encompass AI systems used in critical operations. The AI Act's cybersecurity requirements for high-risk systems align with but do not replace NIS2's requirements. Operators in both frameworks should document how each AI system's security measures satisfy both sets of requirements.
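A lightweight way to operationalise the mapping across these frameworks is a per-system triage function. The trigger conditions below paraphrase this section and are deliberately coarse, not an exhaustive legal mapping.

```python
def applicable_frameworks(processes_personal_data: bool,
                          is_financial_entity: bool,
                          is_critical_infrastructure: bool,
                          is_high_risk_ai: bool) -> list[str]:
    """Rough per-system map of overlapping EU frameworks (triage aid only)."""
    frameworks = (["AI Act (high-risk obligations)"] if is_high_risk_ai
                  else ["AI Act (limited/minimal tier)"])
    if processes_personal_data:
        frameworks.append("GDPR (incl. DPIA and Art. 22 where applicable)")
    if is_financial_entity:
        frameworks.append("DORA (ICT and third-party risk)")
    if is_critical_infrastructure:
        frameworks.append("NIS2 (cybersecurity)")
    return frameworks


# A credit-scoring model at a bank touches three frameworks at once:
print(applicable_frameworks(True, True, False, True))
```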
Compliance Readiness Checklist
- AI inventory complete: all AI systems across all business functions documented
- Risk classification applied: each system classified under the Act's four-tier framework
- High-risk systems identified: list of systems requiring full compliance programme
- Prohibited applications confirmed absent: no prohibited AI practices in use
- Technical documentation prepared: complete documentation package for each high-risk system
- Risk management system in place: ongoing risk identification and mitigation process
- Human oversight mechanisms implemented: technical and procedural controls
- Conformity assessment completed: self-assessment or third-party assessment, as applicable, for each high-risk system
- Registration completed: high-risk systems registered in EU AI database where required
- Post-market monitoring active: ongoing performance data collection and review
Frequently Asked Questions
When does the EU AI Act apply to enterprises?
The EU AI Act applies in phases. The Act entered into force in August 2024, and prohibited practice bans took effect in February 2025. Requirements for general-purpose AI (GPAI) models apply from August 2025, and high-risk AI system obligations for most categories apply from August 2026; work on the GPAI code of practice continued through 2025. Enterprises should be in active compliance preparation now, as the documentation and governance requirements for high-risk systems require significant lead time.
What makes an AI system 'high risk' under the EU AI Act?
High-risk classification applies to AI systems used in specific areas: biometric identification, critical infrastructure management, education access decisions, employment decisions, access to essential services (credit, insurance), law enforcement, migration management, and judicial decisions. The classification considers both the category of use case and whether the AI output directly determines an outcome versus informing a human decision.
Does the EU AI Act apply to AI systems used internally?
Yes. The AI Act's obligations apply regardless of whether AI systems are commercial products or internal tools. Enterprises that develop AI systems for their own use and deploy them for consequential purposes face provider obligations if those systems meet the high-risk criteria. However, the provider/deployer distinction means that enterprises using commercially purchased AI platforms primarily face the obligations of deployers rather than providers — which is significantly less burdensome but still requires governance and monitoring.
How does the EU AI Act affect AI vendors selling to European companies?
AI vendors selling high-risk AI systems to European enterprises face provider obligations: technical documentation, conformity assessment, CE marking (where required), and EU database registration. European enterprise buyers should require vendors to demonstrate compliance and provide documentation of their conformity assessments. Vendor compliance becomes a procurement requirement, adding EU AI Act status to the existing checklist of security and privacy certifications.
To see how Scabera's Glass Box AI architecture simplifies EU AI Act compliance documentation for enterprise knowledge retrieval, book a demo.