Since the US Equal Employment Opportunity Commission (EEOC) increased its scrutiny of the use of artificial intelligence (AI) in hiring earlier this year, the settlement of a lawsuit the agency brought against an employer has given us a much better idea of how it plans to apply the law and its policies in practice.
The suit asserted that federal anti-discrimination laws were violated when the iTutor Group and related companies hired thousands of tutors each year to provide online tutoring from their homes or other remote locations. The EEOC alleged that the online job application system requested dates of birth, which the application software then used to automatically reject female applicants age 55 or older and male applicants age 60 or older.
“The settlement serves as a strong reminder of the EEOC’s ongoing emphasis on AI and algorithmic bias, and a reminder to employers that the results of any technology-assisted screening process should comply with existing civil-rights laws,” stress attorneys Rachel See, Annette Tyman and Joseph Vele of the law firm of Seyfarth Shaw.
They also note that although the EEOC’s complaint and proposed consent decree did not expressly reference AI or machine learning, the agency’s press release linked the case to its recent AI and algorithmic fairness initiative as an example of the types of technologies that the EEOC is increasingly interested in pursuing.
“To be clear, automatically rejecting older job applicants, when their birthdates are already known, does not require any sort of artificial intelligence or machine learning,” they point out. “However, it is entirely fair to say that the EEOC’s complaint and positioning on the allegations squarely falls within the broader scope of its greater scrutiny of all sorts of technology in hiring, and not just artificial intelligence.”
The attorneys add, “EEOC’s iTutor settlement provides an important reminder about how employers must continue to scrutinize their use of any technology, including those that align more closely to ‘algorithmic fairness,’ given the broader context and scope of the EEOC’s ongoing efforts in this rapidly developing area and the attendant media coverage.”
In addition to the EEOC, the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ) and the Federal Trade Commission (FTC) have jointly committed to ensuring that AI does not violate individual rights or regulatory requirements in the areas of civil rights, equal employment opportunity, fair competition and consumer protection.
In fact, the EEOC’s scrutiny of applicant tracking systems follows similar settlements reached by other agencies in which employers were accused of using these systems in ways that allegedly violated existing civil-rights laws.
In 2022 and 2023, the DOJ Civil Rights Division’s Immigrant and Employee Rights Section reached settlements with 30 employers, assessing combined civil penalties of over $1.6 million, over the employers’ use of a college recruiting platform operated by the Georgia Institute of Technology.
The first complaint came from a student who was a lawful permanent resident, who observed that an employer’s paid internship posting on the platform was available only to U.S. citizens. DOJ’s subsequent investigation identified dozens more facially discriminatory postings on the website.
The DOJ announcement of the settlement confirmed that the website allowed employers to post job advertisements that deterred qualified students from applying for jobs because of their citizenship status, and in many cases also blocked otherwise eligible students from applying, all in violation of immigration law.
Employers Need Guidance
Last May, the EEOC issued guidance for employers that describes in detail how the agency expects them to act in light of these developments, during a period when both the technology and the law are continuing to evolve. The agency also has made it clear that it is keeping an eagle eye out for any kind of software that incorporates algorithmic decision-making at various stages of the employment process.
Among the practices flagged in the guidance are resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; and “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
But the EEOC doesn’t intend to stop there. It also is taking a close look at video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.
See, Tyman and Vele believe that another clear indication of where the agency is heading came on March 20, when it announced a settlement with a job search website operator. Although that case did not specifically target the use of a process deploying AI, the underlying charge alleged that the website’s customers were posting job ads that were designed to discourage U.S. citizens from applying for certain jobs.
With the goal of preventing discriminatory job postings, the EEOC’s conciliation agreement required the website operator to “scrape” the site for potentially discriminatory keywords such as “OPT,” “H1B” or “Visa” appearing near the words “only” or “must” in new job postings. In other words, the agreement required the operator to implement a simple keyword filter that would identify potentially discriminatory postings.
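A filter of the kind the agreement describes is straightforward to sketch. The keyword lists, proximity window, and function name below are illustrative assumptions for demonstration, not the actual terms of the conciliation agreement:

```python
import re

# Illustrative keyword filter: flag a posting when a visa-related keyword
# appears within a few words of a restrictive trigger word. The specific
# lists and the 5-word window are assumptions, not the agreement's terms.
KEYWORDS = {"opt", "h1b", "visa"}
TRIGGERS = {"only", "must"}
WINDOW = 5  # flag if a trigger occurs within 5 words of a keyword

def flag_posting(text: str) -> bool:
    """Return True if the posting pairs a keyword with a nearby trigger word."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    keyword_positions = [i for i, w in enumerate(words) if w in KEYWORDS]
    trigger_positions = [i for i, w in enumerate(words) if w in TRIGGERS]
    return any(
        abs(k - t) <= WINDOW
        for k in keyword_positions
        for t in trigger_positions
    )

print(flag_posting("H1B candidates only, please apply"))     # → True
print(flag_posting("Remote tutoring role; flexible hours"))  # → False
```

A filter this simple will produce false positives (for example, legitimate statements that a role can sponsor a visa), which is presumably why the agreement frames it as identifying *potentially* discriminatory postings for review rather than automatically rejecting them.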
“Unquestionably, many employers are already using (and others are contemplating using) artificial intelligence as part of their hiring and other HR processes,” the attorneys observe. “The EEOC’s iTutor complaint, combined with its ongoing focus and outreach in this area, means that employers’ use of any technology, and not just technology characterized as artificial intelligence, is receiving increased scrutiny.”
See, Tyman and Vele believe that the iTutor settlement, and the commission’s ongoing emphasis on AI and algorithmic bias, serve as a strong reminder to employers to make sure that the results of any technology-assisted screening process comply with existing civil-rights laws. “This reminder applies to both complicated and simple technology. It applies whether an employer is using cutting-edge artificial intelligence products or if its recruiters are simply setting filters on a spreadsheet.”
They advise employers to take immediate action to make sure that their systems already in place will not trigger this kind of enforcement action. “A robust compliance and risk management program should periodically evaluate how technology, both sophisticated and simple, is being used in the hiring process to ensure compliance and manage other risks.”