Meaghan Pickles is a co-author of this post and is a Summer Associate at Bradley.

Artificial intelligence (AI) is the best way to save time and make fair decisions, right? Not so fast. As AI becomes more common in our day-to-day lives, we have seen it make mistakes and replicate human shortcomings. It came as a surprise to many that AI hiring algorithms can replicate human biases as well. If you are an employer using AI hiring algorithms, you may be at risk of liability under federal law.

The Problem

Some companies with high-volume hiring needs enlisted AI to assist with employment-related decisions. At first, this meant basic functions such as scanning résumés for keywords. As the technology evolved, however, companies began using tools such as computer-scored video interviews and facial recognition technology to screen applicants.

Examples of how AI has been used during the hiring process include:

  • Résumé scanners that prioritize applications using certain keywords;
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors;
  • “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements (a simplified sketch of this kind of rule appears after this list);
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test. 
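
To make the screening mechanics concrete, here is a minimal Python sketch of how a rules-based screen might apply pre-defined requirements. Every field name and cutoff is hypothetical; the point is that a rigid rule (here, an employment-gap filter) can screen out a qualified candidate for reasons unrelated to the job, a risk we return to below.

    # Hypothetical sketch of a rules-based applicant screen. The fields and
    # cutoffs are invented for illustration. Note how a rigid rule, such as
    # the employment-gap filter, can exclude a qualified candidate whose
    # gap stems from, say, a disability-related leave.

    def meets_predefined_requirements(applicant: dict) -> bool:
        """Return True only if the applicant passes every hard-coded screen."""
        return (
            applicant["years_experience"] >= 3
            and applicant["has_required_certification"]
            and applicant["employment_gap_months"] <= 6  # rigid gap filter
        )

    candidate = {
        "years_experience": 5,
        "has_required_certification": True,
        "employment_gap_months": 9,  # e.g., a medical leave
    }

    print(meets_predefined_requirements(candidate))  # False: screened out

In this example, a nine-month gap, perhaps a medical leave, fails the hard-coded six-month cutoff even though the candidate is otherwise qualified.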

While having a computer handle these tasks was helpful, some companies halted the use of AI in hiring decisions when the technology appeared to screen candidates based on protected statuses. For example, AI recruitment software used by Amazon taught itself to prefer men for technical roles. Additionally, researcher Joy Buolamwini found that facial recognition software has failed to accurately recognize women and people of color, which may mean the software does not fairly assess diverse candidates’ performance in a computer-screened video interview. AI hiring tools can also unintentionally screen out applicants with disabilities, even when they could perform the job with a reasonable accommodation. Depending on how it is built, AI software absorbs the collective attitudes and biases embedded in the data it is trained on, and unless it is taught to identify and mitigate bias, it will likely perpetuate it. Employers that use AI, however, can help prevent that outcome.

EEOC Guidance on the Use of AI in Employment-Related Decisions

The EEOC recently issued guidance on how employers’ use of AI can comply with the Americans with Disabilities Act (ADA) and Title VII. Employers using AI to make employment decisions should review the EEOC guidance.

  • Title VII

On May 18, 2023, the EEOC issued guidance to assist employers in “determin[ing] if their tests and selection procedures are lawful for purposes of Title VII disparate impact analysis.” Disparate impact discrimination occurs when a facially neutral policy or practice has the effect of disproportionately excluding persons based on a protected status, unless the procedure is job related and consistent with business necessity. The guidance makes clear that an employer that administers a selection procedure may be responsible under Title VII even if the tool was developed by a third-party software vendor.

If you have a software vendor develop or administer an algorithmic decision-making tool, at a minimum ask the vendor whether it has taken steps to evaluate whether the tool results in a disparate impact based on a characteristic protected by Title VII. If the tool results in a lower selection rate for individuals of a particular protected class, you must consider whether the tool is job related and consistent with business necessity and whether there are alternatives that may have less of an impact.
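
What a “lower selection rate” means can be made concrete with simple arithmetic. Below is a simplified Python sketch, not legal advice, of the selection-rate comparison an employer or vendor might run, using the EEOC’s longstanding “four-fifths” rule of thumb as a rough screen. The groups and numbers are hypothetical, and the EEOC has cautioned that the four-fifths rule is only a rule of thumb: a ratio above 0.8 does not guarantee compliance, nor does one below it prove a violation.

    # Illustrative sketch only (not legal advice): compare selection rates
    # across groups and flag any group whose rate is less than four-fifths
    # (80%) of the highest group's rate. All names and numbers are
    # hypothetical.

    def selection_rate(selected: int, total: int) -> float:
        """Fraction of a group's applicants who advanced past the tool."""
        return selected / total if total else 0.0

    # Hypothetical outcomes from an AI screen: {group: (advanced, applied)}
    outcomes = {
        "group_a": (48, 100),
        "group_b": (30, 100),
    }

    rates = {g: selection_rate(adv, tot) for g, (adv, tot) in outcomes.items()}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest if highest else 0.0
        flag = "review further" if ratio < 0.8 else "ok"  # four-fifths rule
        print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")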

  • ADA

On May 12, 2022, the EEOC issued guidance explaining “how existing ADA requirements may apply to the use of” AI in employment decision making, along with “promising practices” to help employers comply with the ADA when using AI decision-making tools.

Not surprisingly (and consistent with its May 18, 2023, guidance), the EEOC concluded that an employer who administers a pre-employment test may be responsible for ADA discrimination if the test discriminates against individuals with disabilities, even if the test was developed by an outside vendor.

Regardless of who developed an algorithmic decision-making tool, the EEOC advises that employers take additional steps during implementation and deployment to reduce the chances that the tool will discriminate against someone because of a disability (intentionally or unintentionally). Suggested steps include:

  • Clearly indicating that reasonable accommodations, including alternative formats and alternative tests, are available to people with disabilities;
  • Providing clear instructions for requesting reasonable accommodations; and
  • In advance of the assessment, providing all job applicants and employees who are undergoing assessment with as much information about the tool as possible, including which traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics will be measured, and the disabilities, if any, that might potentially lower the assessment results or cause the applicant to be screened out.

State and Municipal Laws

Additionally, states and municipalities are beginning to address the use of discriminatory AI hiring tools.

  • Illinois

Illinois’ Artificial Intelligence Video Interview Act took effect in 2020. The law requires employers that use AI to analyze applicants’ video interviews to take the following actions:

  • Notify each applicant about the use of AI technology;
  • Explain to the applicant how the AI technology works and what characteristics it uses to evaluate applicants; and
  • Obtain the applicant’s consent before the interview.

Employers must destroy the video within 30 days of an applicant’s request and must limit distribution of the videos to those individuals whose expertise is necessary to evaluate the applicant.

If the employer relies solely on AI to make a threshold determination before the candidate proceeds to an in-person interview, that employer must track the race and ethnicity of the applicants who do not proceed to an in-person interview, as well as those applicants ultimately hired.
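
For employers subject to this tracking requirement, the recordkeeping itself is simple to automate. Here is a minimal Python sketch with hypothetical field names; the statute specifies what to collect and report, not how, so treat this only as an illustration:

    # Minimal sketch (hypothetical field names): tally the race/ethnicity
    # of applicants an AI-only screen kept from an in-person interview,
    # and of those ultimately hired, as the Illinois law requires.
    from collections import Counter

    applicants = [
        {"race_ethnicity": "Black or African American", "advanced": False, "hired": False},
        {"race_ethnicity": "White", "advanced": True, "hired": True},
        {"race_ethnicity": "Hispanic or Latino", "advanced": False, "hired": False},
    ]

    screened_out = Counter(a["race_ethnicity"] for a in applicants if not a["advanced"])
    hired = Counter(a["race_ethnicity"] for a in applicants if a["hired"])

    print("Did not advance to in-person interview:", dict(screened_out))
    print("Ultimately hired:", dict(hired))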

The Illinois law does not include explicit civil penalties.

  • Maryland

In 2020, Maryland passed its AI employment law, H.B. 1202, which prohibits employers from using facial recognition technology during an interview to create a facial template without the applicant’s consent. Consent requires a signed waiver that states:

  • The applicant’s name;
  • The date of the interview;
  • That the applicant consents to the use of facial recognition; and
  • Whether the applicant read the consent waiver.

Like the Illinois law, the Maryland law does not include a specific penalty or fine for a violation of the law.

  • New York City

Most recently, New York City enacted Local Law 144 (Int. No. 1894-A), which requires an independent “bias audit” of an AI hiring tool conducted no more than one year before the tool’s use. The law also requires that information about the audit be made publicly available and that the company notify applicants that AI hiring algorithms will be used. The price tag is a $500 to $1,500 penalty for each violation.

Notably, 1894-A defines an audit as “an impartial evaluation by an independent auditor” used to test the technology for any discriminatory impact based on race, ethnicity, or sex. It is still unclear who is qualified to conduct such an audit; so far, law firms are stepping in to offer the service. If you need an audit performed, do not hesitate to call your attorney.
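
The statute leaves the methodology to the auditor, but the city’s implementing rules frame the analysis around “impact ratios” across the covered categories. The Python sketch below shows one such calculation for a tool that scores candidates, using the share of above-median scores in each category; the categories and scores are hypothetical, and an actual audit must follow the rules’ definitions rather than this simplification.

    # Simplified, hypothetical sketch of an impact-ratio calculation for a
    # scoring tool. The "scoring rate" here (share of a category scoring
    # above the overall median) reflects our reading of the city's
    # implementing rules; a real audit must apply the rules as written.
    from statistics import median

    scores = {
        "category_a": [72, 88, 91, 65, 80],
        "category_b": [55, 60, 78, 49, 62],
    }

    all_scores = [s for group in scores.values() for s in group]
    cutoff = median(all_scores)

    # Scoring rate: share of a category scoring above the overall median.
    rates = {
        g: sum(s > cutoff for s in group) / len(group)
        for g, group in scores.items()
    }
    top = max(rates.values())

    for g, r in rates.items():
        print(f"{g}: scoring rate {r:.2f}, impact ratio {r / top:.2f}")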

Takeaways

As with any technology, AI hiring tools will evolve over time, and we remain hopeful that these glitches will be corrected. For now, however, employers using or planning to use AI hiring tools should make sure that use complies with the law. You should:

  • Review EEOC guidance to ensure that your AI hiring tools comply with the ADA and Title VII. Specifically, make sure your algorithms are not discriminating against individuals based on protected characteristics, including disability.
  • Research whether municipal or state laws require AI audits, restrictions on facial recognition services, or restrictions on AI analysis of video interviews. 
  • Ensure third-party vendors of AI technology are aware of and follow federal, state, and local requirements. 

In all of these discussions and assessments, consider involving your favorite employment lawyer (at least if you want those discussions protected by the attorney-client privilege).

*Meaghan Pickles is not a licensed attorney.

Whitney J. Jackson

Whitney Jackson’s practice focuses on commercial litigation, employment, and intellectual property matters.

Whitney earned her J.D. (cum laude) from the University of Mississippi School of Law, where she served as associate articles editor of the Mississippi Law Journal, senator of the Student Bar Association, and vice president of the Black Law Students Association. While in law school, Whitney interned with the legal departments of Fortune 500 companies, where she assisted senior management in researching and analyzing various legal compliance matters. Whitney also interned with the University of Mississippi’s Office of Technology Commercialization, where she assisted potential patent-applicants in prior-art searches and patent development. She earned her Bachelor of Science (magna cum laude) degree in Biochemistry from Alcorn State University.

Anne R. Yuengert

Anne Yuengert works with clients to manage their employees, including conducting workplace investigations of harassment or theft, training employees and supervisors, consulting on reductions in force and severance agreements, drafting employment agreements (including enforceable noncompetes) and handbooks, assessing reasonable accommodations for disabilities, and working through issues surrounding FMLA and USERRA leave. When preventive measures are not enough, she handles EEOC charges, OFCCP and DOL complaints and investigations, and has handled cases before arbitrators, administrative law judges and federal and state court judges. She has tried more than 30 cases to verdict.