
As we previously reported, the Equal Employment Opportunity Commission (“EEOC”) has had on its radar the potential harms that can result from the use of artificial intelligence (“AI”) technology in the workplace. While some jurisdictions have already enacted requirements and restrictions on the use of AI decision-making tools in employee selection,(1) on May 18, 2023, the EEOC updated its guidance on the use of AI for employment-related decisions, issuing a technical assistance document titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (the “Updated Guidance”). The Updated Guidance comes nearly a year after the EEOC published guidance explaining how employers’ use of algorithmic decision-making tools may violate the Americans with Disabilities Act (“ADA”). The Updated Guidance, by contrast, focuses on how the use of AI may implicate Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, and national origin. In particular, the EEOC focuses on the disparate impact that AI can have on “selection procedures” for hiring, firing, and promotion.

Background to Title VII

As a brief background, Title VII was enacted to protect applicants and employees from discrimination based on race, color, religion, sex, and national origin; it is also the statute that created the EEOC. In its nearly 60 years, Title VII has been interpreted to include protections against sexual harassment and against discrimination based on pregnancy, sexual orientation, and gender identity. It prohibits discriminatory actions by employers in making employment-related decisions, including, for example, with respect to recruiting, hiring, monitoring, promoting, transferring, and terminating employees. There are two main categories of discrimination under Title VII: (1) disparate treatment, which refers to an employer’s intentional discriminatory decisions, and (2) disparate impact, which refers to unintentional discrimination that results from an employer’s facially neutral standards and practices. As noted above, the Updated Guidance focuses on the latter.

Updated EEOC guidance on using AI decision-making tools

The Updated Guidance provides important information to help employers understand how the use of AI in “selection procedures” could expose them to liability under Title VII, as well as some practical tips for limiting liability.

First, as a starting point, it is important for employers to understand whether they are using AI decision-making tools in their “selection procedures” as that term is used under Title VII. The EEOC clarifies that a “selection procedure” is “any ‘measure, combination of measures, or procedure’ that is used as a basis for an employment decision.” In other words, the EEOC considers a selection procedure to encompass any decision made by the employer that affects a worker’s position in the company, from application through termination.

Examples of AI-powered decision-making tools that employers can use in selection procedures include:

  • resume scanners that prioritize applications using certain keywords;
  • monitoring software that ranks employees based on their keystrokes or other factors;
  • “virtual assistants” or “chatbots” that question job seekers about their qualifications and reject those who do not meet predefined requirements;
  • video interview software that evaluates candidates based on their facial expressions and speech patterns; and
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive abilities, or perceived “cultural fit” based on their performance on a game or a more traditional test.

Second, the EEOC explains how employers can and should evaluate their AI-driven selection procedures for adverse impact. If an AI-based tool causes members of a given group to be selected at a “substantially” lower “selection rate” than individuals of another group, an employer’s use of that tool could violate Title VII. A “selection rate” is the proportion of applicants or candidates who are actually hired, promoted, fired, or otherwise selected. It is calculated by taking the number of applicants or candidates from a given pool who were selected and dividing it by the total number of applicants or candidates in that pool as a whole. As a general rule, a given group’s selection rate is “substantially” lower if it is less than 80%, or four-fifths, of the most favored group’s selection rate. The EEOC refers to this as the “four-fifths rule.” The EEOC warned, however, that compliance with the “four-fifths rule” does not guarantee that a selection procedure is compliant: “courts have agreed that use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance.”(2)
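The selection-rate arithmetic described above can be illustrated with a short script. The group labels and counts below are purely hypothetical, chosen only to show the calculation:

```python
# Illustrative four-fifths rule check on hypothetical applicant data.

def selection_rate(selected, total):
    """Proportion of a group's applicants who were actually selected."""
    return selected / total

# Hypothetical applicant pools: group -> (number selected, total applicants)
pools = {"Group A": (48, 80), "Group B": (12, 40)}

rates = {group: selection_rate(s, n) for group, (s, n) in pools.items()}
highest = max(rates.values())  # selection rate of the most-favored group

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "FLAG" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} ({flag})")
# Group A: rate=0.60, ratio=1.00 (ok)
# Group B: rate=0.30, ratio=0.50 (FLAG)
```

Here Group B’s rate (30%) is half of Group A’s (60%), well below the four-fifths (0.8) threshold, so a tool producing these numbers would be flagged for further scrutiny under the rule of thumb.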

Third, the EEOC reiterated that, just as an employer can be held liable under the ADA for using AI decision-making tools designed or administered by a third party, the same is true for violations of Title VII. Reliance on a software vendor’s assurances will not absolve the employer of liability if the tool results in a “substantially” lower selection rate for certain groups.

Finally, the Updated Guidance makes clear that employers should also evaluate their use of AI tools against the other steps of the Title VII disparate impact analysis, including “whether a tool is a valid measure of important job-related traits or characteristics.”

Practical tips for employers

  1. require employees to seek approval before using algorithmic decision-making tools so that you can perform due diligence on each tool (we previously explained why employee policies should be updated to address the use of AI tools);
  2. perform periodic audits to determine whether the tools you are using result in a disparate impact and, if they do, whether they are job related and consistent with business necessity;
  3. require the vendors of these tools to disclose what steps they took to assess whether use of the tool could cause a disparate impact, and specifically whether they relied on the four-fifths rule or on a standard such as statistical significance, which courts may also apply;(3)
  4. ensure that your contracts with vendors contain adequate indemnification and cooperation provisions in the event your use of a tool is called into question;
  5. ensure your employees receive adequate training on how to use these tools; and
  6. if you outsource or rely on third parties to carry out selection procedures or to make employment-related decisions on your behalf, require them to disclose their use of AI decision-making tools so that you can properly assess your exposure.
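Because courts may look past the four-fifths rule to a test of statistical significance (see footnote 3), a periodic audit could report both measures side by side. A minimal sketch using a pooled two-proportion z-test, with hypothetical counts chosen to show how a small sample can fail the four-fifths rule without being statistically significant:

```python
import math

def two_proportion_p_value(sel_a, n_a, sel_b, n_b):
    """Two-sided p-value for the difference between two selection rates,
    using a pooled two-proportion z-test."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# Hypothetical audit of a small pool:
sel_a, n_a = 6, 10   # favored group: 60% selection rate
sel_b, n_b = 4, 10   # other group:   40% selection rate

ratio = (sel_b / n_b) / (sel_a / n_a)
p = two_proportion_p_value(sel_a, n_a, sel_b, n_b)
print(f"impact ratio: {ratio:.2f}")  # below 0.8: fails the four-fifths rule
print(f"p-value:      {p:.3f}")      # not significant at the 0.05 level
```

In this sample the impact ratio (0.67) fails the four-fifths rule, yet the p-value is far above 0.05, illustrating why the two standards can point in different directions on small pools and why an audit should not rely on either one alone.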

Key takeaways

As AI continues to evolve at a rapid pace, employers need to adapt to ensure they are using the technology in a responsible, compliant, and non-discriminatory manner. While AI can speed up the selection process and even reduce costs, relying on AI without due diligence can be problematic. Employers, not software developers and vendors, are ultimately responsible for ensuring that a tool’s selection rate is not “substantially” lower for any group. Employers should remain vigilant about the selection methods they implement, from the application stage through transfers and separation. They should continue to audit their use of these tools and keep their employee policies and vendor contracts updated to minimize exposure to liability under Title VII and other employment laws. If adjustments or changes are necessary, employers must adapt and work with their vendors to implement the least discriminatory methods available or be able to justify their decisions as job related and consistent with business necessity.

As always, Sheppard Mullin will continue to provide updates and insights on any developing legal trends related to the employment and use of AI technology.


(1) For a review of the New York City Automated Employment Decision Tools Law, click here.

(2) Quoting Isabel v. City of Memphis, 404 F.3d 404, 412 (6th Cir. 2005).

(3) See Jones v. City of Bos., 752 F.3d 38, 50, 52 (1st Cir. 2014) (explaining that the four-fifths rule may be disregarded when a test of statistical significance would indicate an adverse impact, such as when there is a small sample size or where the “disparity is so small as to be almost imperceptible without detailed statistical analysis”).