On January 10, 2023, the Equal Employment Opportunity Commission (“EEOC”) published a draft of its Strategic Enforcement Plan (“SEP”) in the Federal Register, which outlines the enforcement goals for the Commission for the next four years. While the Agency aims to target a number of new areas – such as underserved workers and pregnancy fairness in the workplace – it is notable that it listed as priority number one the elimination of barriers in recruitment and hiring caused or exacerbated by employers’ use of artificial intelligence.
On the same day, the Department of Justice’s Civil Rights Division published a Q&A in support of the Commission’s guidance on artificial intelligence and the potential for violations of disability discrimination laws in the hiring process. The guidance from both agencies comes on the heels of a technical assistance document issued by the EEOC last June that advised employers on the use of automated decision tools and ADA protections, which we discussed in a previous article. And not to be left out, the National Labor Relations Board’s General Counsel issued Memorandum 23-02 in October, which urged the Board to adopt a new framework to protect workers engaged in protected concerted activity from “intrusive or abusive electronic monitoring and automated management practices,” and which we discussed in more detail here. Suffice it to say, the federal government is zeroing in on employers’ use of AI from all angles, which means that employers should examine carefully how they use such tools throughout the employment cycle.
What the SEP Can Tell Us, and What It Leaves Out
In its draft plan, the EEOC explained why AI and machine learning tools merit greater scrutiny, and provided specific examples of the risks these automated tools pose for discrimination. Highlighted first is the use of such tools during the hiring and recruitment process in a way that may intentionally exclude or adversely impact protected groups. Next, the plan calls out “restrictive application processes or systems” that are difficult for individuals with disabilities to access. Finally, the EEOC highlights screening tools, such as pre-employment tests and background checks aided by AI, that disproportionately impact workers based on their protected status. The Agency says these practices are of “particular concern” in growth industries such as construction and high-tech.
Unfortunately, the SEP does not offer employers guidance on how they should evaluate or alter their automated tools to comply with non-discrimination laws, leaving companies to wait and see how the broadly stated goals will be enforced. One suit brought by the EEOC against an online tutoring company may shed some light on the matter. The EEOC alleges in its complaint, filed in the Eastern District of New York, that iTutorGroup, Inc. discriminated against applicants based on their age when its automated screening tool allegedly rejected all applicants over 60 years old. An example this stark is unlikely to be typical, or perhaps even intentional, but because AI trained through machine learning endeavors to discover and replicate “successful” hiring patterns, such a tool may arrive at a similar outcome whether or not it is explicitly programmed to do so.
What Employers Can Do Now
While the SEP issued in January was just a draft, and the Commission may revise its plan based on public comments, a final SEP will likely be adopted in similar form, given that Commissioners from both sides of the aisle have expressed interest in the use of AI tools. In the meantime, employers can look to state and local action – like New York City’s “Automated Employment Decision Tools” law – for potential defensive measures, such as independent audits of AI tools and pre-use disclosure to applicants and employees. We have discussed the various state efforts to implement safeguards against discrimination in AI on the Hunton Employment & Labor Perspectives blog, and will update this article when the EEOC’s SEP is finalized.