
EEOC issues guidance on use of AI in employment decisions....

Employers can't rely on a vendor's assurances that its AI tool complies with Title VII of the Civil Rights Act of 1964. If the tool results in an adverse discriminatory impact, the employer may be held liable, the U.S. Equal Employment Opportunity Commission (EEOC) clarified in new technical assistance issued May 18. The guidance explains how Title VII applies to automated systems that incorporate artificial intelligence across a range of HR-related uses.

Without proper safeguards, employers might violate Title VII when they use AI to select new employees, monitor performance, and determine pay or promotions, the EEOC said.

"Too many employers are not yet fully realizing that long-standing nondiscrimination law is applicable even in the very new context of AI-driven employment selection tools," said Jim Paretti, an attorney with Littler in Washington, D.C. He described the guidance as a "wake-up call to employers."

Neutral tests or selection procedures, including algorithmic decision-making tools, that have a disparate impact on the basis of race, color, religion, sex or national origin must be job-related and consistent with business necessity; otherwise, they are prohibited, the EEOC said. Even when a procedure meets that standard, employers must still consider less discriminatory alternatives. The agency noted that disparate impact analysis was the focus of the technical assistance.

Employers May Be Surprised by Title VII's Reach

"Employers are not prepared for the Title VII implications of using AI HR tools for two main reasons," said Bradford Newman, an attorney with Baker McKenzie in Palo Alto, Calif.

"First, front-line HR managers and procurement folks who routinely source AI hiring tools do not understand the risks," he said. "Second, AI vendors will not usually disclose their testing methods and will demand companies provide contractual indemnification and bear all risk for alleged adverse impact of the tools."

The EEOC puts the burden of compliance squarely on employers. "If an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor," the agency states in its technical assistance guidance.

The employer may also be held responsible for the actions of its agents, including software vendors, if the employer has given them authority to act on its behalf. "This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf," the EEOC stated in the guidance.

Employers may want to ask the vendor whether steps have been taken to evaluate whether use of a tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII, the agency recommended. If the vendor says a lower selection rate for a group of individuals is expected, the employer should consider whether the tool is job-related and consistent with business necessity and whether there are alternatives.

"The guidance leaves unanswered the key question of how employers should establish that AI-based tools are, in fact, job-related," said Mark Girouard, an attorney with Nilan Johnson Lewis in Minneapolis.

In addition, if the vendor is incorrect about its own assessment and the tool results in disparate impact discrimination or disparate treatment discrimination, the employer could be liable.

Four-Fifths Rule for Selection Rate Explained

The document explains the four-fifths rule, a general rule of thumb for deciding whether the selection rate for one group is substantially different from another's, and notes that it applies to the use of algorithmic decision-making tools.

Employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether the use of the procedure causes a selection rate for individuals in the group that is substantially less than the selection rate for individuals in another group.

Suppose 80 white individuals and 40 Black individuals take a personality test, scored by an algorithm, as part of a job application, and 48 of the white applicants and 12 of the Black applicants advance to the next round of the selection process, the EEOC hypothesized. Based on these results, the selection rate for white individuals is 60 percent (48/80) and the selection rate for Black individuals is 30 percent (12/40).

The ratio of the two rates is thus 30/60, or 50 percent. Because this ratio is lower than four-fifths (80 percent), the selection rate for Black applicants is substantially different from the rate for white applicants, which could be evidence of discrimination.
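The arithmetic behind the example is simple to automate. Below is a minimal sketch in Python of the four-fifths check using the EEOC's hypothetical numbers; the function names are illustrative, not from any official tool, and a real adverse-impact audit would also involve statistical testing and legal review.

```python
# Minimal sketch of the four-fifths rule of thumb, using the EEOC's
# hypothetical numbers. Names are illustrative, not an official API.

def selection_rate(advanced: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced."""
    return advanced / applicants

def passes_four_fifths(rate_a: float, rate_b: float, threshold: float = 4 / 5) -> bool:
    """True if the lower selection rate is at least four-fifths of the higher."""
    low, high = sorted((rate_a, rate_b))
    return low / high >= threshold

white_rate = selection_rate(48, 80)  # 0.60
black_rate = selection_rate(12, 40)  # 0.30

print(f"White selection rate: {white_rate:.0%}")   # 60%
print(f"Black selection rate: {black_rate:.0%}")   # 30%
print(f"Impact ratio: {black_rate / white_rate:.0%}")  # 50%
print(f"Passes 4/5 rule: {passes_four_fifths(white_rate, black_rate)}")  # False
```

On these numbers the impact ratio is 50 percent, well under the 80 percent threshold, which is exactly the "substantially different" result the guidance describes.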

"Courts have agreed that use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance," the agency cautioned.

Employers may want to ask a vendor whether, in determining that a tool might have an adverse impact, it relied on the four-fifths rule or on a standard such as statistical significance, which courts often use, the EEOC added.
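As a purely illustrative companion to the four-fifths sketch above, the same hypothetical numbers can be run through a conventional significance test; the example below uses a two-sided Fisher's exact test from SciPy, which is one standard test of this kind, not a method the EEOC prescribes.

```python
# Hedged illustration: test whether the difference in selection rates in
# the EEOC's hypothetical is statistically significant. Fisher's exact
# test is a standard choice; courts and experts may use other tests.
from scipy.stats import fisher_exact

# Rows: advanced / not advanced; columns: white / Black applicants.
table = [[48, 12],   # advanced:     48 of 80 white, 12 of 40 Black
         [32, 28]]   # not advanced: 32 of 80 white, 28 of 40 Black

_, p_value = fisher_exact(table, alternative="two-sided")
print(f"p-value: {p_value:.4f}")  # well below 0.05 on these numbers
```

Here the p-value falls well below the conventional 0.05 cutoff, so the four-fifths rule and a significance test point the same way; on other data the two standards can disagree, which is why the EEOC suggests asking which one a vendor used.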

"Unfortunately, the current EEOC standards for establishing the validity of a hiring tool—which are now nearly 50 years old—don't lend themselves neatly to analysis of these new and emerging technologies," Girouard said.

Oversight

Employers using AI HR tools should have effective AI oversight, including a chief AI officer, and routinely test for potentially adverse impact, Newman said.

Scott Nelson, an attorney with Hunton Andrews Kurth in Houston, noted that AI and its interplay with the law are evolving rapidly.

"I'm not sure any of us are truly ready for the potential impact AI can, and likely will, have on our daily lives," said Erica Wilson, an attorney with Fisher Phillips in Pittsburgh. "That being said, employers have been put on notice that they cannot simply pick a software program off the shelf, or write one themselves, and assume it works as intended without inadvertent bias. Employers need to pay attention and test their employment-related AI tools early and often to make sure they aren't causing unintended harm."


The EEOC does not welcome our new AI overlords....
 
The head of the U.S. agency charged with enforcing civil rights in the workplace says artificial intelligence-driven “bossware” tools that closely track the whereabouts, keystrokes and productivity of workers can also run afoul of discrimination laws.

Charlotte Burrows, chair of the Equal Employment Opportunity Commission, told The Associated Press that the agency is trying to educate employers and technology providers about their use of these surveillance tools as well as AI tools that streamline the work of evaluating job prospects.

And if employers aren't careful with, say, draconian schedule-monitoring algorithms that penalize breaks for pregnant women or Muslims taking time to pray, or allow faulty software to screen out graduates of women's or historically Black colleges, they can't blame AI when the EEOC comes calling.

“I’m not shy about using our enforcement authority when it’s necessary,” Burrows said. “We want to work with employers, but there’s certainly no exemption to the civil rights laws because you engage in discrimination some high-tech way.”

The federal agency put out its latest set of guidance Thursday on the use of automated systems in employment decisions such as whom to hire or promote. It explains how to interpret a key provision of the Civil Rights Act of 1964, known as Title VII, that bars job discrimination based on race, color, national origin, religion or sex, which includes bias against gay, lesbian and transgender workers.

Burrows said one important example involves widely used resumé screeners and whether they can produce a biased result if they are built on biased data.

“What will happen is that there’s an algorithm that is looking for patterns that reflect patterns that it’s already familiar with,” she said. “It will be trained on data that comes from its existing employees. And if you have a non-diverse set of employees currently, you’re likely to end up with kicking out people inadvertently who don’t look like your current employees.”

Amazon, for instance, abandoned its own resume-scanning tool to recruit top talent after finding it favored men for technical roles — in part because it was comparing job candidates against the company’s own male-dominated tech workforce.

Other agencies, including the Department of Justice, have been sending similar warnings for the past year, with previous sets of guidance about how some AI tools could discriminate against people with disabilities and violate the Americans with Disabilities Act.

In some cases, the EEOC has taken action. In March, the operator of tech job-search website Dice.com settled with the agency to end an investigation over allegations it was allowing job posters to exclude workers of U.S. national origin in favor of immigrants seeking work visas. To settle the case, the parent company, DHI Group, agreed to rewrite its programming to “scrape” for discriminatory language such as “H-1Bs Only,” a reference to a type of work visa.
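The "scraping" remedy described in the settlement amounts to keyword screening of job postings. The sketch below is hypothetical only: the pattern list is invented for illustration (apart from "H-1Bs Only," which appears in the settlement described above), and a real filter would be far broader and maintained with counsel.

```python
import re

# Hypothetical patterns for flagging visa-restrictive language in job
# postings. Only "H-1Bs Only" comes from the settlement; the rest are
# invented examples of the same kind of phrasing.
DISCRIMINATORY_PATTERNS = [
    r"\bh-?1b'?s?\s+only\b",          # e.g., "H-1Bs Only"
    r"\bno\s+(?:citizens|green\s*cards?)\b",
    r"\b(?:opt|cpt)\s+only\b",
]

def flag_posting(posting: str) -> list[str]:
    """Return the patterns a posting matches, if any."""
    text = posting.lower()
    return [p for p in DISCRIMINATORY_PATTERNS if re.search(p, text)]

print(flag_posting("Java developer needed. H-1Bs only."))  # one match
```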

 