
EU AI Act and Contractor Hiring Systems: 2026 Compliance Guide

Reviewed by Omnivoo Compliance Team on May 15, 2026



Key takeaways

  • Regulation (EU) 2024/1689 (the AI Act) applies from 2 August 2026 for most high-risk AI obligations
  • Annex III item 4 classifies AI used in recruitment, evaluation, task allocation, and worker monitoring as high-risk
  • Maximum penalties are 35 million euros or 7% of global turnover for prohibited practices, 15 million euros or 3% for high-risk violations
  • The Act applies to providers and deployers placing AI systems on the EU market or whose outputs are used in the EU
  • Compliance includes risk management, technical documentation, bias testing, human oversight, and transparency

TL;DR

The EU AI Act (Regulation (EU) 2024/1689) applies from 2 August 2026 for most high-risk obligations. Any AI system used to recruit, screen, evaluate, allocate work to, or monitor performance of EU workers, including contractors, is high-risk under Annex III item 4. US companies are deployers if they use such systems for EU positions or workers, or providers if they sell such systems to EU customers. Penalties reach 35 million euros or 7% of global turnover for prohibited practices and 15 million euros or 3% for high-risk violations under Article 99. Compliance requires risk management, technical documentation, human oversight, transparency to workers, and registration in the EU database.

What the AI Act does

Regulation (EU) 2024/1689 is the world’s first comprehensive horizontal AI regulation. It was adopted on 13 June 2024 and published in the Official Journal on 12 July 2024. It entered into force on 1 August 2024 and applies in phases. The Act classifies AI systems by risk and applies obligations proportionate to that risk.

The four risk levels:

| Risk level | Examples | Obligations |
| --- | --- | --- |
| Unacceptable | Social scoring, exploitation of vulnerability | Prohibited |
| High | Recruitment AI, credit scoring, biometric ID | Conformity, documentation, oversight, registration |
| Limited | Chatbots, deepfakes | Transparency to users |
| Minimal | Spam filters, AI in video games | No specific obligations |

Most contractor and employment AI sits at the high-risk tier under Annex III item 4.

Phased application schedule

Article 113 sets the phased application schedule. As of May 2026:

| Date | Provision |
| --- | --- |
| 1 August 2024 | Entry into force |
| 2 February 2025 | Prohibited AI practices apply (Article 5), AI literacy obligations |
| 2 August 2025 | General-purpose AI model obligations, governance bodies, penalties |
| 2 August 2026 | Most high-risk AI obligations under Annex III |
| 2 August 2027 | High-risk AI embedded in regulated products under Annex I |

The critical date for contractor and hiring AI is 2 August 2026. That is when Annex III systems must be in conformity with the Act, technical documentation must be prepared, risk management must be in place, and registration must be complete.

Annex III item 4: employment and worker management

Annex III item 4 defines high-risk AI in employment, workers’ management, and access to self-employment in two sub-categories.

Item 4(a): AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.

Item 4(b): AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.

Note the language. The Act says “work-related relationships,” not “employment relationships.” Independent contractor engagements are within scope, as are freelance work, gig work, and consultancy.

Common contractor hiring AI tools that fall under Annex III item 4:

  • AI resume screening, candidate ranking, or shortlisting
  • AI-powered job advertising platforms targeting specific candidates
  • AI interview scoring, video interview analysis
  • AI skill assessments and coding test grading
  • AI background check decisions
  • AI-driven contractor scoring on marketplaces
  • AI work allocation systems on platforms
  • AI performance monitoring tools tracking productivity or behaviour
  • AI tools that suggest disciplinary action, termination, or non-renewal

When does the Act apply to a US company?

Article 2 gives the Act broad extraterritorial reach. The Act applies to:

  1. Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where the provider is established
  2. Deployers of AI systems located in the EU
  3. Providers and deployers established outside the EU when the output produced by the AI system is used in the EU
  4. Importers and distributors of AI systems
  5. Product manufacturers placing AI systems on the EU market under their own name

The third axis is the broadest. A US company that uses AI to evaluate candidates for an EU role, or to manage EU contractors, falls within scope as a deployer even with no EU establishment.

A US-based AI vendor selling a recruiting tool to EU customers falls within scope as a provider. The provider must conduct conformity assessment, register the system, and prepare technical documentation. The customer (deployer) has its own obligations.
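The Article 2 scope logic above can be reduced to a small decision function. The boolean flags are our own simplification and leave out importers, distributors, and product manufacturers, so treat this as a first-pass screen, not legal advice.

```python
# Simplified sketch of the Article 2 scope test for providers and
# deployers. Flags and role names are our own simplification; the
# Regulation's full scope rules cover more actors.

def act_applies(role: str, established_in_eu: bool,
                placed_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """First-pass scope check for a provider or deployer."""
    if role == "provider":
        # In scope when the system is placed on the EU market, or its
        # output is used in the EU, regardless of establishment.
        return placed_on_eu_market or output_used_in_eu
    if role == "deployer":
        # In scope when located in the EU, or when the system's
        # output is used in the EU.
        return established_in_eu or output_used_in_eu
    return False

# US company using AI to screen candidates for an EU role:
print(act_applies("deployer", established_in_eu=False,
                  placed_on_eu_market=False, output_used_in_eu=True))  # True
```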

Provider obligations for high-risk systems

A provider of a high-risk AI system under Annex III must:

Risk management system. Establish, implement, document, and maintain a risk management system across the AI system lifecycle.

Data governance. Use training, validation, and test data sets that are relevant, representative, free of errors, and appropriate. Implement data governance practices including bias examination.

Technical documentation. Prepare detailed technical documentation before placing the system on the market and keep it up to date.

Record keeping. Design the system to automatically log events.

Transparency. Provide instructions for use to deployers in a clear, comprehensive, and accessible form.

Human oversight. Design the system so deployers can effectively oversee its operation.

Accuracy, robustness, cybersecurity. Design the system to be accurate, robust, and cybersecure relative to its intended purpose.

Conformity assessment. Conduct conformity assessment before placing the system on the market. For most Annex III systems, this is an internal control procedure.

EU declaration of conformity. Issue an EU declaration of conformity and CE marking.

Registration. Register the system in the EU database of high-risk AI systems before placing it on the market.

Post-market monitoring. Establish a post-market monitoring system and report serious incidents.

Deployer obligations for high-risk systems

A deployer of an Annex III high-risk AI system must:

Use as instructed. Use the system in accordance with the instructions for use provided by the provider.

Human oversight. Assign human oversight to natural persons with the necessary competence, training, and authority. Human oversight must enable intervention in the system’s operation.

Monitor and log. Monitor the operation of the system. Keep automatically generated logs for at least six months unless other law requires longer.

Input data quality. Ensure that input data is relevant and sufficiently representative for the intended purpose.

Worker information. Before putting a high-risk AI system into service in the workplace, deployers who are employers must inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This applies to contractors as well as employees.

Data protection impact assessment. If required under Article 35 of the GDPR, conduct a DPIA before deployment.

Fundamental rights impact assessment. Some deployers (public bodies and certain private deployers in regulated sectors) must conduct a fundamental rights impact assessment.
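The six-month log-retention floor above is one deployer obligation that translates directly into operational logic. A minimal sketch, assuming "six months" means six calendar months and that other applicable law can only lengthen, never shorten, the period:

```python
# Minimal sketch of the deployer log-retention rule: keep
# automatically generated logs at least six months, longer if other
# law requires it. Calendar-month arithmetic is our own assumption.

from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def earliest_deletion_date(log_created: date,
                           other_law_min_months: int = 0) -> date:
    """Earliest date a deployer may delete a log entry."""
    months = max(6, other_law_min_months)
    return add_months(log_created, months)

print(earliest_deletion_date(date(2026, 8, 2)))  # 2027-02-02
```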

Penalties under Article 99

Article 99 sets a tiered penalty structure. Penalties apply from 2 August 2025.

| Violation category | Maximum penalty |
| --- | --- |
| Prohibited AI practices (Article 5) | 35 million euros or 7% of total worldwide annual turnover, whichever is higher |
| Most provider, deployer, importer, distributor, notified body obligation breaches | 15 million euros or 3% of total worldwide annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information to authorities | 7.5 million euros or 1% of total worldwide annual turnover, whichever is higher |

SMEs and startups receive reduced caps at the lower of the percentage or fixed amount. Penalties consider the nature and gravity of the infringement, prior infringements, the size of the operator, intentionality, and cooperation with authorities.

For comparison, GDPR fines max out at 20 million euros or 4% of global turnover. The AI Act is the steeper regime.
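The "whichever is higher" structure makes the exposure easy to compute for a non-SME. A sketch of the Article 99 caps from the table above (the category keys are our own labels):

```python
# Article 99 maximum fine for a non-SME: the higher of the fixed cap
# and the turnover percentage. Category keys are our own labels for
# the three tiers in the table above.

def max_penalty_eur(category: str, worldwide_turnover_eur: float) -> float:
    caps = {
        "prohibited": (35_000_000, 0.07),
        "high_risk": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }
    fixed, pct = caps[category]
    return max(fixed, pct * worldwide_turnover_eur)

# A company with 1 billion euros of turnover:
print(max_penalty_eur("prohibited", 1_000_000_000))  # 70000000.0
```

Note that for SMEs and startups the rule inverts to the lower of the two amounts, so this function overstates their exposure.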

Practical compliance for US companies

If you are a US company with AI in any part of your contractor hiring or management workflow that touches the EU, your compliance plan needs to address both deployer obligations and any provider obligations if you built the AI in-house.

Step 1: Inventory. List every AI system in use across recruiting, contractor management, performance, work allocation. For each, document the provider, the purpose, the data inputs, and the decision points.

Step 2: Classify. For each system, determine whether Annex III applies. Annex III item 4 covers recruitment and worker management AI. Item 5 covers access to essential private and public services, including credit scoring. Item 6 covers law enforcement. Item 7 covers migration, asylum, and border control. Other items may apply.

Step 3: Determine your role. Are you a provider (built the system) or deployer (using a third-party system) for each AI tool? You may be both for different tools.

Step 4: Build conformity. For each high-risk system, ensure provider obligations are met. If you are using a third-party provider, ask for the EU declaration of conformity. If you are the provider, conduct conformity assessment and register the system in the EU database.

Step 5: Inform workers. Prepare written information notices to EU workers and contractors about the AI systems used to manage them.

Step 6: Implement human oversight. Document the human oversight role for each high-risk AI system. Train the assigned overseers.

Step 7: DPIA and FRIA. Conduct data protection and (where applicable) fundamental rights impact assessments before deployment.

For a US founder running a contractor business, the cleanest path is often to keep contracting infrastructure separate from AI-driven evaluation tools. Omnivoo Contract Management handles contracts and payments without AI scoring, work allocation, or performance monitoring. Companies that use Omnivoo for contracting can run their AI evaluation separately, and only the AI evaluation falls within Annex III. The contract and payment workflow does not.

AI Act overlap with other regulations

The AI Act overlaps with several other EU regulations. A single AI system in contractor management may trigger obligations under all of them.

| Regulation | What it covers |
| --- | --- |
| AI Act | AI system risk management, conformity, oversight |
| GDPR | Personal data processing, including in AI inputs and outputs |
| Platform Work Directive (2024/2831) | Algorithmic management of platform workers |
| Equal Treatment Framework Directive (2000/78) | Non-discrimination in employment |
| DSA (2022/2065) | Algorithmic content moderation on online platforms |

A US contractor platform using AI to allocate work to EU contractors faces obligations under the AI Act (provider/deployer), the GDPR (data processing), and the Platform Work Directive (algorithmic management). These regimes are layered. Compliance with one does not exempt you from the others.
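The layering can be sketched as a simple union of regimes. The boolean flags below are our own crude stand-ins for each regime's real scope test, which is far more nuanced:

```python
# Sketch of regulatory layering: one system can trigger several
# regimes at once, and none exempts the others. Flags are our own
# simplification of each regime's scope test.

def applicable_regimes(is_annex_iii: bool, processes_personal_data: bool,
                       is_platform_work: bool) -> list[str]:
    regimes = []
    if is_annex_iii:
        regimes.append("AI Act")
    if processes_personal_data:
        regimes.append("GDPR")
    if is_platform_work:
        regimes.append("Platform Work Directive")
    return regimes

# AI allocating work to EU contractors on a platform:
print(applicable_regimes(True, True, True))
# ['AI Act', 'GDPR', 'Platform Work Directive']
```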

What to do before 2 August 2026

The high-risk obligations apply from 2 August 2026. Your action items:

  1. Complete the AI inventory and classification well before 2 August 2026
  2. Obtain EU declarations of conformity from third-party providers, or build conformity yourself if you are the provider
  3. Register high-risk systems you provide in the EU database before placing them on the market
  4. Inform EU workers and contractors about AI used to manage them, before 2 August 2026
  5. Implement human oversight roles with training and authority
  6. Conduct and document DPIAs for each high-risk system

How Omnivoo Contract Management helps

The AI Act creates real costs when AI sits in the critical path of contractor hiring or management. The cleanest mitigation is to limit AI to the workflows where it adds clear value, and keep the contracting and payment infrastructure separate.

Omnivoo Contract Management is a contract drafting, signing, and payment platform. It does not use AI to evaluate contractors, allocate work, score performance, or make termination decisions. That structural separation means our product does not fall under Annex III item 4. Companies that use Omnivoo for contracting can run their hiring AI on separate tools, and only those tools fall within the AI Act’s high-risk regime.

Pricing is a flat $49 per finalized contract. Transaction fees are passed through at cost. No per-seat licensing. No platform fees. See pricing.

Talk to our team about contractor workflows that keep AI evaluation and contract infrastructure cleanly separated. Get in touch.

Frequently asked questions

Does the AI Act apply to a US company using AI in hiring?
Yes if the AI system is placed on the EU market or its outputs are used in the EU. A US company screening candidates for EU positions with an AI tool falls within scope as a deployer. A US AI vendor selling a recruitment tool to EU customers is a provider. The Act applies extraterritorially on both axes.
Which contractor hiring tools are high-risk?
Annex III item 4 covers two categories: (a) AI used for recruitment or selection of natural persons, in particular targeted job advertisements, analysis or filtering of job applications, and candidate evaluation, and (b) AI used to make decisions affecting terms of work-related relationships, promotion or termination, task allocation based on behaviour or characteristics, or performance monitoring. Both extend to contractor relationships, not just employment.
What does high-risk classification require?
Providers must implement a risk management system, prepare technical documentation, conduct conformity assessment, register the system in an EU database, ensure data governance, enable human oversight, ensure accuracy and cybersecurity, and provide instructions for use. Deployers must use the system as instructed, ensure human oversight, monitor for risks, and inform workers and their representatives before deployment.
When does the AI Act start applying?
The Act entered into force on 1 August 2024. Phased application: prohibited practices from 2 February 2025, GPAI model obligations from 2 August 2025, most high-risk obligations including Annex III systems from 2 August 2026, and embedded high-risk AI in regulated products from 2 August 2027.
What are the penalties?
Article 99 sets a tiered penalty structure. Prohibited practices: up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher. Most operator obligation breaches: up to 15 million euros or 3% of turnover. Providing misleading information to authorities: up to 7.5 million euros or 1% of turnover. SMEs receive reduced caps.
Is using ChatGPT to screen CVs high-risk under the Act?
Using a general-purpose AI system specifically to screen CVs for an EU position makes the user a deployer of a high-risk AI system under Annex III item 4(a). The deployer obligations apply regardless of the underlying model. Workers must be informed before deployment, human oversight must be in place, and documentation must be maintained.
Does the Act apply to US persons providing services in the US?
Not on the basis of provider or deployer status alone. The Act applies when the output of the AI system is used in the EU. If a US recruiter uses AI to screen US candidates for US positions, and no output reaches the EU, the Act does not apply. If the same recruiter screens EU candidates for any position, or any candidates for an EU position, the Act applies.
