Artificial Intelligence in Hiring: A Double-Edged Sword
For years, online job marketplaces have facilitated an influx of job seekers applying for open positions, giving employers more opportunities to find the best candidates for each role. Some businesses have struggled to keep pace with an overwhelming surge in applications, especially for roles that can be performed remotely. In response to the strain that application review places on resources, businesses have begun relying on technology to streamline the hiring process. This includes using artificial intelligence (AI) to cull applications to a more manageable number, and even allowing AI to dictate hiring decisions.
This trend only accelerated during the pandemic, yet many businesses do not know enough about these tools and their potential flaws, raising the risk that they are failing to comply with their existing and evolving legal obligations. The technical assistance document recently published by the U.S. Equal Employment Opportunity Commission (EEOC) and the New York City ordinance on AI in hiring that becomes effective Jan. 1, 2023, are among the early signs that lawmakers are paying increasing attention to the use of these tools in employment. With diversity, equity and inclusion top of mind for many businesses, employees and investors, it is critical for employers to understand the tools they are using. “From a DEI perspective, it’s imperative that we consider the role AI plays in recruiting and retention,” said Armstrong Teasdale’s Vice President, Diversity, Equity and Inclusion Sonji Young. “This is not a concept that is unfamiliar, or going away, and as organizations look to grow, the management of candidates and the pipeline must be carefully and appropriately vetted.” Moreover, employers need to be aware of regulatory and legislative moves in these areas, so they can take steps to prevent AI from introducing bias into hiring decisions.
What is an AI hiring tool?
Generally, AI refers to “the capability of a machine to imitate intelligent human behavior,” such as decision-making. In the employment context, federal and state authorities have been defining the term broadly to encompass machine learning, computer vision, intelligent decision support, and other computational processes used to assist or replace human decision-making in the hiring process.
Scholars have cautioned that machine learning and other algorithmic tools are predicated on a fundamental flaw: AI learns from pre-existing behavior, which itself may be faulty. AI systems trained on biased data from existing workplaces may be perpetuating the same imbalances or creating new ones, and may be doing so in violation of applicable law, by recreating employee populations with insufficient numbers of women, people of color, those with disabilities, and other marginalized groups.
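To make that mechanism concrete, consider the following toy sketch, written in Python with scikit-learn. All data, feature names and numbers here are synthetic and hypothetical; no real hiring system or vendor product is represented. A simple classifier is trained on historical hiring decisions that were biased against one group. The model never sees the protected attribute, yet a correlated proxy feature lets it reproduce the disparity:

    # A minimal, synthetic sketch of bias perpetuation: a model trained on
    # biased historical hiring labels reproduces the disparity through a
    # proxy feature, even though the protected attribute is never an input.
    # All names and numbers are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    group = rng.integers(0, 2, size=n)    # 1 = marginalized group (synthetic)
    qualification = rng.normal(size=n)    # job-related signal, independent of group
    proxy = (group == 1) + rng.normal(scale=0.5, size=n)  # e.g., a zip-code-like feature

    # Historical decisions were biased against group 1 at equal qualification.
    logit = qualification - 1.5 * (group == 1)
    hired_historically = rng.random(n) < 1 / (1 + np.exp(-logit))

    # The model is trained without the group label, but the proxy feature
    # lets it reconstruct -- and perpetuate -- the historical bias.
    X = np.column_stack([qualification, proxy])
    model = LogisticRegression().fit(X, hired_historically)

    # Screen the pool: both groups are equally qualified by construction,
    # yet the learned model selects them at different rates.
    selected = model.predict(X)
    for g in (0, 1):
        print(f"group {g} selection rate: {selected[group == g].mean():.1%}")

The point is not the particular numbers but the structure: scrubbing the protected attribute from the inputs does not prevent bias from flowing through any remaining feature that correlates with group membership.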
As Armstrong Teasdale Chief Human Resources Officer Julie Paul has observed, “AI cannot be thoughtful about considering candidates who might not fall within bright-line criteria, yet may still be qualified. Those candidates will never be seen, because they will be screened out by the platform, and this creates not only missed opportunities, but liability for employers and hiring managers.”
State and local lawmakers and federal regulators are currently wading into this area to remind employers about their existing legal obligations regarding fair hiring, and increasingly to impose new obligations specific to the technologies themselves.
The EEOC’s Technical Assistance Document
Last fall, the EEOC launched the Algorithmic Fairness Initiative to ensure that employers using AI in employment decisions comply with federal civil rights laws that the agency enforces. One result of this initiative is the EEOC’s May 12, 2022, Technical Assistance Document (TAD), which addresses how Americans with Disabilities Act (ADA) requirements may apply to the use of AI in employment matters. The TAD notes that while vendors creating AI tools may vet them for race, ethnicity and gender bias, these techniques may not address employers’ obligations not to discriminate against individuals with disabilities. The EEOC cautions that “[i]f an employer or vendor were to try to reduce disability bias in the way” they do for other protected categories, this “would not mean that the algorithmic decision-making tool could never screen out an individual with a disability” because “[e]ach disability is unique.”
The TAD also provides recommendations on how employers can comply with the ADA, and addresses applicants who believe their rights may have been violated. In a noteworthy move, the EEOC lists—but does not mandate—various “promising practices” that employers could adopt to combat disability bias, such as asking the vendor of the algorithmic decision-making tool about its development, including whether the tool is attentive to applicants with disabilities.
Vetting for bias on the basis of disability promises to be a complex process, and one in which vendors may not be prepared to invest. While it remains unclear what weight, if any, these recommendations will be accorded by the courts or even the EEOC in the future, they provide key insights into the agency’s current views on employers’ expected conduct around AI use.
New Law for New York City Employers
New York City has taken a more proactive approach: starting Jan. 1, 2023, every business with employees in the city will be prohibited from using any computational process that substantially assists or replaces discretionary employment decision-making (what the ordinance calls an automated employment decision tool, or AEDT) to screen employees or candidates for employment or promotion, unless the tool has undergone an independent bias audit no more than one year before its use and the employer has posted the results online. An acceptable independent “bias audit” includes testing of the AEDT to assess its potential disparate impact on persons based on race, ethnicity or sex. The law does not specify who qualifies as an “independent auditor,” but presumably that would not include an in-house expert or the vendor who created the assessment. Notably, the statute imposes penalties of $500 to $1,500 for each day a tool is used in violation of the law.
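To illustrate the kind of arithmetic such a bias audit might involve, the hypothetical Python sketch below computes per-group selection rates and impact ratios from made-up screening outcomes. It is illustrative only: the proposed regulations, not this sketch, define what an acceptable audit must actually measure, and the “four-fifths” figure mentioned in the comments is the EEOC’s long-standing benchmark from its Uniform Guidelines, not a threshold set by the city ordinance.

    # Illustrative sketch of disparate-impact arithmetic: comparing the
    # per-group selection rates produced by a screening tool. The numbers
    # and group names are hypothetical.
    from collections import Counter

    # (group, selected?) outcomes from a hypothetical AEDT screening run.
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    applied = Counter(g for g, _ in outcomes)
    selected = Counter(g for g, s in outcomes if s)

    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())

    for g, r in rates.items():
        # Impact ratio: a group's selection rate relative to the
        # most-selected group. Ratios well below 1.0 (e.g., under the
        # EEOC's historical "four-fifths" benchmark of 0.8) flag
        # potential disparate impact.
        print(f"{g}: selection rate {r:.0%}, impact ratio {r / best:.2f}")

On these hypothetical numbers, group_b’s impact ratio of 0.33 falls well below the four-fifths benchmark, which would flag the tool for closer scrutiny.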
Given the roughly 200,000 businesses operating in New York City, this law is poised to have a significant impact, yet despite recently published proposed regulations, its broad scope leaves many open questions. For example, it remains unclear whether long-standing computer-based analyses derived from traditional testing validation strategies are covered, or whether passive evaluation tools, such as the recommendation engines used by employment firms, fall within the law’s scope.
Looking Ahead
Businesses will no doubt continue to feel pressure to use AI and other tools to process employment applications efficiently, and other states and localities are likely to issue their own laws and regulations. In this complicated and evolving landscape, employers should proceed with caution to avoid potentially violating both existing anti-discrimination obligations and new rules targeted at these tools.
“In any instance, we need to be mindful of the technology we are leveraging,” said Paul. “The increasingly competitive nature of the hiring market is such that many organizations have more candidates than they know what to do with, and being strategic in making hiring decisions is key. But this should not be allowed to result in a failure to consider diversity and fairness in reviewing such candidates.”