Artificial intelligence (“AI”) is no longer theoretical in employment decisions—it is actively shaping the hiring process in 2026. Recent estimates suggest that 99% of Fortune 500 companies now use AI to filter job applicants and roughly 40% of companies expect to use AI to conduct screening interviews of job candidates. That level of AI adoption has outpaced the law. As a result, the use of AI has quickly become one of the most significant employment law issues, particularly in hiring.
While many states, including Kentucky, Indiana, Ohio, and Tennessee, have not enacted AI-specific employment statutes, recent case developments make clear that existing law is more than sufficient to regulate AI-driven hiring decisions.
AI “Discrimination in Hiring” Litigation is Moving Past the Pleading Stage
One of the most closely watched cases is the ongoing AI hiring bias litigation against HR vendor Workday, Mobley v. Workday. It is a first of its kind, addressing a defining question in employment law: whether artificial intelligence-based hiring programs can be discriminatory.
The plaintiff in Mobley alleges, in part, that Workday's AI hiring tools incorporated data points, such as interruptions in periods of employment and medical-related leave, that are commonly correlated with treatment and recovery periods for serious health conditions. The plaintiff claims that use of Workday's AI tool resulted in violations of the Age Discrimination in Employment Act, the Americans with Disabilities Act, and Title VII of the Civil Rights Act of 1964.
At the beginning of this year, the federal court in Mobley allowed the federal claims to move forward, rejecting Workday's arguments and denying its motion to dismiss.
As Mobley illustrates, AI-driven hiring tools can be particularly susceptible to disparate impact claims because they may rely on facially neutral inputs, such as employment history, geographic data, or education, that nonetheless disproportionately exclude protected groups. Put another way, though AI hiring procedures may not look discriminatory, the data inputs may produce a discriminatory outcome or disparate impact. That outcome can create liability for employers or recruiters even without any intent to discriminate.
The “Black Box” Problem in AI Hiring
AI hiring litigation in 2026 is not limited to employment discrimination claims. A separate lawsuit in California against AI hiring platform Eightfold AI Inc. reframes the issue as a transparency and consumer protection violation, alleging the company's AI tools generated secret candidate reports without disclosure.
The case was filed on January 20, 2026, by two job applicants against Eightfold, a hiring platform used by companies like Microsoft and PayPal. The complaint alleges that Eightfold violated the Fair Credit Reporting Act ("FCRA") and California's Investigative Consumer Reporting Agencies Act by secretly generating AI-driven applicant "likelihood of success" scores on a 0-5 scale, conduct that, according to the plaintiffs, amounts to illegal, undisclosed consumer reports.
It is unclear whether plaintiffs will succeed on their claims. But the case represents a shift in focus away from discrimination statutes and toward “lack of transparency” claims.
At bottom, the takeaway is straightforward: though AI-specific legal frameworks may not yet exist, applicants can and will use familiar doctrines to assert claims against companies that use AI to screen, interview, and select job candidates.
How Employers and Recruiters Should Adapt
As AI continues to embed itself in hiring processes, employers should recognize that these tools are not merely operational enhancements; they are decision-making mechanisms subject to legal scrutiny. The increasing volume of litigation in 2026, and its progression past the pleading stage, demonstrates that courts are prepared to evaluate AI-driven hiring outcomes under established employment law frameworks, regardless of the technology involved.
Accordingly, employers should approach AI in hiring with the same level of diligence applied to any other employment practice. This includes understanding how such tools function, ensuring decisions remain job-related and consistent with business necessity, and maintaining sufficient oversight to explain and defend outcomes if challenged.
The use of AI in hiring is no longer a question of innovation alone; it is a matter of risk management and legal compliance.
