The Equal Employment Opportunity Commission has issued its long-awaited final guidance on the use of artificial intelligence in hiring, establishing the first comprehensive federal framework for algorithmic employment decisions. The rule, which takes effect July 1, 2026, carries significant implications for both employers and job seekers.
The three core requirements are disclosure, validation, and recourse. Employers must now inform candidates when AI tools are used at any stage of the hiring process — from resume screening to video interview analysis to predictive assessments. They must also conduct and document adverse impact analyses showing that their AI tools do not disproportionately screen out candidates based on protected characteristics.
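The article does not say which methodology the final guidance prescribes for these analyses, but the EEOC's long-standing heuristic under the Uniform Guidelines on Employee Selection Procedures is the "four-fifths rule": a selection rate for any protected group below 80% of the rate for the most-selected group is treated as evidence of adverse impact. A minimal sketch of that check, with illustrative group labels and counts (not drawn from the rule text):

```python
# Sketch of a four-fifths-rule adverse impact check. The group names
# and outcome counts are hypothetical examples, not from the guidance.

def selection_rate(selected: int, applied: int) -> float:
    """Fraction of applicants the screening tool passed through."""
    return selected / applied

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative screening outcomes: (selected, applied) per group.
outcomes = {
    "group_a": (48, 100),  # 48% pass rate
    "group_b": (30, 100),  # 30% pass rate
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    status = "OK" if ratio >= 0.8 else "below four-fifths threshold"
    print(f"{group}: ratio {ratio:.2f} ({status})")
```

Here group_b's ratio is 0.30/0.48 ≈ 0.63, well under the 0.8 threshold, so a tool producing these outcomes would need further validation or adjustment. Note that the final guidance may require statistical tests beyond this simple ratio.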
Most significantly for job seekers, the rule establishes a right to human review. Candidates who are rejected through an automated process can request that a human reviewer examine their application within 30 days. Employers who fail to provide this review face per-violation penalties starting at $10,000.
The practical impact is already being felt. Major applicant tracking system (ATS) providers including Workday, Greenhouse, and Lever are rushing to add disclosure language and human-review workflow capabilities to their platforms. Several AI screening startups — particularly those using video interview analysis — have either pivoted their products or shut down entirely, unable to demonstrate the required adverse impact compliance.
For job seekers, the guidance provides new ammunition. If you suspect an AI tool incorrectly filtered your application, you now have a legal mechanism to request human review. Career advisors recommend keeping records of all application submissions and noting any AI-related disclosures in the process, as this documentation could support future complaints.