Assistant Attorney General for Civil Rights Kristen Clarke speaks at a news conference at the Department of Justice in Washington, Thursday, Aug. 5, 2021. (AP file photo: Andrew Harnik)

Civil rights enforcers warn employers against biased AI

The federal government said Thursday that artificial intelligence technology to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

The U.S. Justice Department and the Equal Employment Opportunity Commission jointly issued guidance to employers to take care before using popular algorithmic tools meant to streamline the work of evaluating employees and job prospects — but which could also potentially run afoul of the Americans with Disabilities Act.

“We are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” Assistant Attorney General Kristen Clarke of the department’s Civil Rights Division told reporters Thursday. “The use of AI is compounding the longstanding discrimination that jobseekers with disabilities face.”

Among the examples given of popular work-related AI tools were resume scanners, employee monitoring software that ranks workers based on keystrokes, game-like online tests to assess job skills and video interviewing software that measures a person’s speech patterns or facial expressions.

Such technology could potentially screen out people with speech impediments, severe arthritis that slows typing or a range of other physical or mental impairments, the officials said.

Tools built to automatically analyze workplace behavior can also overlook on-the-job accommodations — such as a quiet workstation for someone with post-traumatic stress disorder or more frequent breaks for a pregnancy-related disability — that enable employees to modify their work conditions to perform their jobs successfully.

Experts have long warned that AI-based recruitment tools — while often pitched as a way of eliminating human bias — can actually entrench bias if they’re taking cues from industries where racial and gender disparities are already prevalent.

The move to crack down on the harms they can bring to people with disabilities reflects a broader push by President Joe Biden’s administration to foster positive advancements in AI technology while reining in opaque and largely unregulated AI tools that are being used to make important decisions about people’s lives.

“We totally recognize that there’s enormous potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing laws against workplace discrimination. “But we cannot let these tools become a high-tech path to discrimination.”

A scholar who has researched bias in AI hiring tools said holding employers accountable for the tools they use is a “great first step,” but added that more work is needed to rein in the vendors that make these tools. Doing so would likely be a job for another agency, such as the Federal Trade Commission, said Ifeoma Ajunwa, a University of North Carolina law professor and founding director of its AI Decision-Making Research Program.
