What is the EU AI Act? Employer compliance guide for hiring and HR
Author: James Kelly
Last Updated: 13 December 2025
Read Time: 8 min
The EU AI Act
The world’s first comprehensive AI law is here, and it affects more companies than you might think.
The EU AI Act entered into force on August 1, 2024, and it’s already changing how businesses worldwide use AI.
Here’s what matters:
- It applies globally: Not just EU companies. If your AI outputs are used in the EU, you’re affected
- It uses a risk-based approach: AI is sorted into four tiers (prohibited, high-risk, limited risk, and minimal risk)
- First deadlines have passed: Prohibited AI practices are banned as of February 2, 2025
- Major deadlines ahead: High-risk AI requirements kick in August 2, 2026
- Penalties are serious: Up to €35M or 7% of global annual turnover, whichever is higher
- Compliance takes time: Start now if you haven’t already
Think of the EU AI Act as GDPR for AI. And like GDPR, it’s becoming the global standard whether you’re in the EU or not.
What is the EU AI Act?
On August 1, 2024, history was made. The European Union’s Artificial Intelligence Act officially entered into force, making it the world’s first comprehensive legal framework specifically designed to regulate AI systems.
But this isn’t just another EU regulation that only affects European companies. This is a game-changer that’s reshaping how organisations worldwide develop, deploy, and use artificial intelligence.
Who does the EU AI Act apply to?
The EU AI Act has extraterritorial reach. It applies to two main groups:
01 | AI providers
Those who develop, import, or put AI systems on the market:
- AI developers and software companies
- Companies selling AI-powered products or services to EU customers
- Tech platforms with EU users, even without EU offices
- AI model providers whose systems might end up in EU markets
02 | AI deployers
Organisations using AI systems within the EU:
- Any company operating in the EU that uses AI tools in its business
- HR teams using AI for recruitment, screening, or performance management
- Companies using AI for credit scoring, fraud detection, or risk assessment
- Healthcare providers using diagnostic or treatment recommendation AI
- Educational institutions using AI for admissions or assessment
The key principle: If you build AI for the EU market OR use AI within the EU, this law applies to you, regardless of where your company is headquartered.
The four risk categories: How AI is classified
The EU AI Act doesn’t treat all AI the same way. Instead, it takes a risk-based approach, categorising systems into four tiers based on their potential to cause harm. A short code sketch after the tier descriptions below pulls the four categories together.
Unacceptable risk: Banned outright
Some AI applications are considered so dangerous to fundamental rights that they’re completely prohibited.
As of February 2, 2025, these practices are now illegal in the EU:
- Social scoring systems: Government or corporate systems that rank people based on behaviour or characteristics
- Subliminal manipulation: AI that influences behaviour without people’s awareness or consent
- Exploiting vulnerabilities: Systems targeting people based on age, disability, or economic situation
- Biometric categorisation: Using biometrics to infer race, political opinions, religious beliefs, or sexual orientation
- Untargeted scraping: Building facial recognition databases from the internet or CCTV without consent
- Real-time biometric surveillance: Live facial recognition in public spaces (narrow law enforcement exceptions exist)
- Emotion recognition in workplaces or schools: Using AI to judge how workers or students “feel”
- Predictive policing: Profiling individuals to predict criminal behaviour without evidence
The penalty: Up to €35 million or 7% of global annual turnover, whichever is higher.
High-risk: Allowed but heavily regulated
These systems can potentially cause significant harm if they fail or are misused. They’re legal but come with extensive compliance requirements.
Common high-risk AI systems include:
- Critical infrastructure: Managing electricity, water, gas, or transportation systems
- Employment & HR: Recruitment screening, performance evaluation, promotion decisions, work allocation
- Education: Automated exam scoring, assessment of learning progress, admission decisions
- Essential services: Credit scoring, insurance risk assessment, eligibility for public assistance
- Law enforcement: Evidence analysis, case prioritisation, polygraph tools
- Border control & migration: Visa processing, asylum evaluation, risk assessment
- Healthcare: Diagnostic systems, treatment recommendations, triage and patient prioritisation
- Biometric systems: Identification and categorisation with proper safeguards
Example: An AI recruitment tool that screens CVs and ranks candidates is high-risk. It requires robust documentation, bias testing, human oversight, and ongoing monitoring.
Limited risk: Transparency required
Users must know they’re interacting with AI (unless it’s obvious).
Covered systems:
- Chatbots and conversational AI
- Deepfakes and AI-generated content
- Emotion recognition (outside prohibited contexts)
- Content recommendation systems
Requirements:
- Clear disclosure of AI interaction (one possible approach is sketched below)
- Marking AI-generated content as such
- Machine-readable watermarks for synthetic media
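To make the disclosure duty concrete, here is a minimal Python sketch of how a deployer might surface it in a chatbot and label generated content. The wording, function names, and markup are illustrative assumptions; the Act mandates the disclosure itself, not any particular implementation.

```python
# Illustrative only: the Act requires disclosure, not any
# particular wording, markup, or API.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prefix the first reply in a session with an AI disclosure."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

def label_generated_content(html: str) -> str:
    """Attach a visible label to AI-generated content before publishing."""
    return f'<p class="ai-label">This content was generated by AI.</p>\n{html}'

print(wrap_chatbot_reply("Hi! How can I help?", first_turn=True))
```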
Minimal risk: AI literacy required
Most AI falls into this category. These systems face no specific obligations beyond making sure that employees understand AI basics.
Examples:
- Spam filters
- AI-powered video games
- Inventory management systems
- AI-assisted design tools
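Pulling the four tiers together, here is a small Python sketch that encodes them and maps a few common business use cases to a tier. The mapping is illustrative only: real classification depends on the specific system and how it is used, and is not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers the EU AI Act defines."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: allowed but heavily regulated"
    LIMITED = "limited risk: transparency required"
    MINIMAL = "minimal risk: AI literacy only"

# Illustrative mapping of common business use cases to tiers.
# Real classification depends on the specific system and context.
USE_CASE_TIERS = {
    "cv_screening": RiskTier.HIGH,
    "performance_evaluation": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
}

print(USE_CASE_TIERS["cv_screening"].value)  # high risk: allowed but heavily regulated
```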
What companies need to do to comply
If your AI system falls under the high-risk category, the EU AI Act sets out strict compliance steps before deployment. Think of it as a due diligence checklist for trustworthy AI; a simple checklist sketch follows the list below.
Key obligations for high-risk AI:
- Risk management system: Identify, assess, and mitigate risks throughout the AI lifecycle.
- Data governance: Use high-quality, representative, and bias-tested datasets.
- Technical documentation: Maintain detailed records of how the system works, its purpose, and its limitations.
- Transparency and user information: Clearly describe system capabilities and limitations to users.
- Human oversight: Ensure trained personnel can intervene or override AI decisions.
- Accuracy, robustness, and cybersecurity: Regular testing and monitoring to prove reliability.
- Registration in the EU database: All high-risk systems must be listed publicly.
Most of these requirements mirror existing compliance processes under GDPR, product-safety, or medical-device regimes, but they now explicitly extend to AI.
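As a starting point, the obligations above can be tracked as a simple internal checklist. The Python sketch below is one way to do that; the field names paraphrase the obligations and are not official terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class HighRiskCompliance:
    """Tracks the core obligations for one high-risk AI system.

    Field names paraphrase the obligations above; they are not
    official terminology from the Act.
    """
    system_name: str
    risk_management_in_place: bool = False
    data_governance_documented: bool = False
    technical_documentation_maintained: bool = False
    users_informed_of_limitations: bool = False
    human_oversight_assigned: bool = False
    accuracy_and_security_tested: bool = False
    registered_in_eu_database: bool = False

    def gaps(self) -> list[str]:
        """Return the obligations not yet satisfied."""
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

checklist = HighRiskCompliance(system_name="CV screening tool")
print(checklist.gaps())  # everything is still outstanding
```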
EU AI Act compliance timeline
- 1 August 2024: AI Act enters into force
- 2 February 2025: Prohibited AI practices banned
- 2 August 2025: Obligations for general-purpose AI (foundation models) begin
- 2 August 2026: High-risk AI requirements apply
- 2027 onward: Continuous monitoring and enforcement
IMPORTANT: Don’t wait for 2026. Compliance documentation, bias testing, and human-oversight design all take time.
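If it helps to see the phase-in programmatically, this short Python sketch flags which milestones already apply. The dates come straight from the timeline above; everything else is illustrative.

```python
from datetime import date

# Phase-in dates from the timeline above.
MILESTONES = {
    date(2024, 8, 1): "AI Act enters into force",
    date(2025, 2, 2): "Prohibited AI practices banned",
    date(2025, 8, 2): "General-purpose AI obligations begin",
    date(2026, 8, 2): "High-risk AI requirements apply",
}

today = date.today()
for deadline, rule in sorted(MILESTONES.items()):
    if deadline <= today:
        status = "already applies"
    else:
        status = f"{(deadline - today).days} days away"
    print(f"{deadline}: {rule} ({status})")
```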
Penalties for non-compliance with the EU AI Act
The EU AI Act’s penalties are deliberately severe (similar to GDPR) to ensure companies take it seriously:
- Using prohibited AI systems: up to €35 million or 7% of global annual turnover
- Breaching high-risk obligations: up to €15 million or 3% of global annual turnover
- Providing false or misleading information: up to €7.5 million or 1% of global annual turnover
Regulators can also suspend or recall non-compliant systems from the EU market.
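The fine structure is simple arithmetic: the applicable cap is whichever is higher, the fixed amount or the percentage of global annual turnover. A quick Python illustration:

```python
def max_fine(global_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """The cap is whichever is higher: the fixed amount or the
    percentage of global annual turnover."""
    return max(fixed_cap_eur, global_turnover_eur * pct)

# A company with €1bn global turnover using a prohibited system:
# 7% of turnover (€70m) exceeds the €35m floor, so €70m applies.
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```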
What this means for employers and HR teams
If you use AI for anything touching decisions about people, from hiring to performance evaluation, you’re directly affected. Recruitment platforms, CV screeners, and employee analytics tools used in those decisions generally fall under the high-risk AI category.
Employers must:
- Audit existing HR tools to confirm whether AI is involved (a simple register like the sketch below can help).
- Ask vendors for AI Act compliance documentation (technical files, bias tests, human-oversight procedures).
- Train HR teams to understand AI limitations and avoid over-reliance on automated outputs.
- Establish accountability: Designate who is responsible for AI compliance in your organisation.
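One practical way to start the audit is an internal register of every tool that might involve AI. Here is a minimal Python sketch; the fields mirror the steps above, but the structure itself is a suggestion, not something the Act prescribes, and the example entries are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in an internal register of AI-assisted tools.

    Hypothetical structure: the fields mirror the audit steps above,
    but the Act does not prescribe this (or any) format.
    """
    tool: str
    vendor: str
    uses_ai: bool
    risk_tier: str                  # e.g. "high", "limited", "minimal"
    compliance_docs_received: bool  # technical file, bias tests, oversight procedures
    oversight_owner: str            # who can review or override outputs

register = [
    AIToolRecord("CV screener", "ExampleVendor", True, "high", False, "Head of Talent"),
    AIToolRecord("Spam filter", "ExampleVendor", True, "minimal", True, "IT"),
]
missing = [r.tool for r in register if r.uses_ai and not r.compliance_docs_received]
print(missing)  # tools whose vendors still owe documentation
```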
The best rule of thumb is to use AI to help people, not to replace or judge them. The systems that will stand the test of time are those built and used responsibly.
The bottom line
The EU AI Act isn’t about slowing innovation. It’s about earning trust. Companies that build compliant, transparent systems will move faster in the long run, because customers, regulators, and employees will trust them.
For employers, it’s also a wake-up call. If you’re using AI in HR or decision-making, now is the time to audit, document, and redesign those systems to meet the new standard.
Boundless helps companies stay ahead of complex employment and regulatory changes. Whether it’s navigating global compliance, worker protections, or new AI obligations in HR, we make sure everything is done properly, without shortcuts, without surprises.
Get in touch with our team today for more information
FAQs
Does the EU AI Act only apply to companies based in the EU?
No. The EU AI Act applies to any company whose AI systems are used or have an effect within the European Union. That means even organisations based in the UK, the United States, or Asia must comply if their AI tools or outputs reach EU users. For example, a US software provider offering AI-driven HR technology to clients in Germany or France falls under the Act. The legislation sets a global compliance benchmark, similar to how the GDPR reshaped privacy worldwide.
What if we only use third-party AI tools rather than building our own?
You are still responsible under the Act. When you use an AI system for recruitment, employee evaluation, or other decision-making processes, you become what the law defines as a “deployer.” That means you must ensure the systems you use comply with the EU AI Act’s transparency and fairness requirements. Businesses should review how AI outputs are applied, understand any limitations or risks, and train staff to exercise human oversight. If the AI system leads to biased or unlawful outcomes, both the vendor and the deploying company may share liability.
How is the EU AI Act different from GDPR?
While GDPR regulates how personal data is collected, processed, and stored, the EU AI Act focuses on how algorithms and models operate. GDPR asks whether your data use is lawful; the AI Act asks whether your system behaves safely, transparently, and fairly. Many companies will need to comply with both. For instance, an AI hiring platform that processes candidate data must ensure data protection under GDPR while also meeting the AI Act’s requirements for bias testing, explainability, and human oversight.
When do companies need to comply?
The first major deadline has already passed. Prohibited AI systems, such as social scoring and emotion recognition in workplaces, were banned in February 2025. From August 2025, rules for general-purpose AI models take effect, and by August 2, 2026, full obligations for high-risk systems will apply. Companies that use or build AI should not wait until the last minute. Compliance requires time to document systems, test for bias, establish oversight procedures, and train staff on new governance responsibilities.
Who will enforce the EU AI Act?
Each EU member state will designate a national authority to supervise AI compliance, such as CNIL in France or the Federal Data Protection Authority in Germany. A new European AI Office will coordinate these regulators, issue guidance, and investigate breaches across borders. In the UK, although the EU AI Act doesn’t apply directly, regulators are developing similar guidance under the government’s “pro-innovation” framework. Any UK-based company selling AI-powered products or services to EU customers must still comply with EU law. In practice, that means aligning with both regulatory bodies is the safest strategy.