AI in the Workplace: Is Your Enterprise Intelligent About Artificial Intelligence?
Artificial intelligence (AI) is everywhere, from chatbots that answer questions, draft essays and write code to virtual assistants and self-driving cars. In the workplace, an estimated 99% of Fortune 500 companies use AI. Employers recognize the benefits of the myriad uses AI offers, including pre-screening and interviewing applicants, onboarding, training, tracking performance and monitoring productivity. The CEO of a major multinational technology corporation recently said his company expects to pause hiring for back-office human resources functions, which he anticipates will be replaced by AI. The use of AI in the workplace carries risks, however, and companies must be mindful of them when using or contemplating AI platforms for employment-related matters.
Recently, the Equal Employment Opportunity Commission, Federal Trade Commission, Consumer Financial Protection Bureau and U.S. Department of Justice announced their joint commitment to ensuring that employers’ use of AI in the workplace conforms with applicable laws, including those prohibiting discrimination and protecting privacy. On May 1, 2023, the White House announced it would probe how companies are using AI to track employees, including the potential detrimental effects on employees’ mental health, their ability to organize and the potential for discrimination. A number of state and local laws regulating the use of AI in recruiting and hiring have already been enacted (e.g., Illinois and New York City) or proposed (e.g., California, Maryland and Washington, D.C.).
Why the Concern?
Predictive AI platforms used to pre-screen applicants or conduct initial interviews to identify qualified, “right fit” candidates are only as good as the data and algorithms behind them. Even facially neutral algorithms can produce unintentional bias, for example, by relying on ZIP codes as a proxy or by matching candidates to an existing workforce or culture that is not itself diverse. AI also often tries to predict outcomes from behavior and past performance, but the correlations it draws may not be tied to relevant job skills. For example, a large e-commerce company attempted to use predictive AI to pre-screen applicants in its hiring process. Because the AI was programmed to vet candidates using patterns from prior resumes, and the industry had historically been dominated by men, the tool unintentionally downgraded female applicants. The company scrapped the AI hiring program.
Routine audits of any AI pre-screening programs may be necessary to ensure they create no unintended disparate treatment or disparate impact; a simplified illustration of one common statistical check appears below. Beyond statistical analysis, employers may want to audit the underlying data and algorithms to ensure the coding and the data collected are free of bias and geared toward the relevant, requisite job skills.
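As an illustration of what such a statistical analysis might involve, the short sketch below applies the EEOC’s “four-fifths” (80%) rule, under which a selection rate for any group that is less than 80% of the rate for the group with the highest rate may be regarded as evidence of adverse impact. The group names, counts and helper functions here are hypothetical; a real audit would be considerably more rigorous and should be conducted with counsel.

```python
# Illustrative adverse-impact check based on the EEOC's four-fifths (80%) rule:
# a group's selection rate below 80% of the highest group's rate may indicate
# disparate impact. All sample data below is hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio relative to the highest selection rate."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

# Hypothetical screening results: group -> (candidates passed, candidates screened)
results = {"Group A": (48, 120), "Group B": (30, 110)}

for group, ratio in four_fifths_check(results).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, Group A passes at a 40% rate and Group B at roughly 27%, yielding an impact ratio of about 0.68 for Group B and triggering a flag for further review. A flagged ratio is a starting point for investigation, not a legal conclusion.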
Additionally, AI that pulls applicant information from a variety of open sources can yield incorrect information about a candidate or run afoul of various state laws. Many states have enacted Ban the Box laws, which prohibit employers from asking about criminal history on applications or running background checks before an offer of employment is made. If an AI pre-screener surfaces information about a candidate’s criminal history before an offer is made, the employer could face liability.
It has never been easier to track the details of employee productivity, including keystrokes and performance data, which can inform promotion and raise decisions, a capability of particular value in the post-COVID-19 remote or hybrid workforce. As noted above, however, tracking is the subject of the White House’s probe. According to the White House Domestic Policy Council, “The constant tracking of performance can push workers to move too fast on the job, posing risks to their safety and mental health.” Separately, the surreptitious use of AI to track employees risks invading their right to privacy.
Takeaways
While the White House has not yet issued guidance on the use of AI in the workplace, companies can take action now. A proactive employer should have a well-defined IT Resources and Communications Systems Policy that clearly informs employees they are being monitored through company-owned computers and systems (which we addressed in a previous advisory, “California Court of Appeals Decision Reminds Employers to Have Clear, Enunciated IT Systems Use Policy”). Further, companies should ensure that AI monitoring of workflow, processes, activity and productivity is applied consistently and not targeted at a specific person or group. Any disciplinary action based on AI-generated findings should be examined to ensure similar treatment for similar behavior among all employees, regardless of race, gender, national origin or other protected characteristics. If an employment-related decision, such as a promotion, raise or termination, is based on AI-collected data, that data should be neutral and free of unintentional bias.
Of course, using AI to pre-screen applicants, make employment decisions and track employees are just a few examples of practices that could increase risk to employers. Armstrong Teasdale anticipates further regulations and legislation addressing the use of AI in the workplace at the federal, state and local levels. In the meantime, companies should analyze their existing AI platforms, and any planned use of AI, to ensure compliance with current laws. If you have questions regarding best practices in using AI platforms, please reach out to your regular Armstrong Teasdale contact or one of the authors listed below.