The following is an edited version of the content presented in the video.
Generative AI could profoundly affect workplace dynamics, potentially bringing significant benefits to companies that are able to manage the risks. Here we focus on employment law as it relates to the use of generative AI at work: what employers need to do to protect confidential information, and how to align HR practices with regulatory guidelines so that incorporating AI into those processes does not introduce bias or discrimination.
Protecting confidential information
Generative AI systems are trained on large datasets, which can include information that users input into the systems. Those inputs usually are not protected as confidential, which means that most systems can use them to generate outputs for other users.
In most cases, employees and contractors who input company information, including confidential or sensitive information, are essentially putting it in the public domain. As a result, that information may lose the confidentiality protections that courts typically recognize. This is true even when employees and contractors have signed non-disclosure agreements or other restrictive covenant agreements, because these agreements generally do not protect information that has been made publicly available.
Moreover, it may not be sufficient for companies to expand the definition of “confidential information” in their agreements. Confidentiality is a frequent area of dispute in restrictive covenant litigation: courts weigh several factors, but information available in the public domain typically is not protected. Whether information loses its confidential status once it is input into a generative AI system is a question courts have yet to resolve, and one that will become increasingly important as use of these systems grows.
To protect confidential information, employers should implement policies about the use of generative AI systems. Some may choose to ban use of these technologies, though this may put them at a disadvantage as generative AI expands in scope and effectiveness. Another option is to develop clear guidelines about the appropriate use of AI, including guidelines about what information can be entered and how to avoid overreliance on AI outputs, which can be fraught with inaccuracies, misleading data, and bias. Guidelines should be reinforced through employee training programs, content monitoring, and technical solutions to restrict use, and policies will need to evolve as AI technologies do.
Avoiding adverse impacts in HR processes
The Equal Employment Opportunity Commission (EEOC) has been grappling with the potential impact of AI on workplace discrimination since 2016. At first, the agency was optimistic about the potential of AI to reduce unconscious bias in decision making, including in hiring and performance assessment. Over time, however, the EEOC grew increasingly concerned about the risks of algorithmic decision making, as did other regulators and state legislators.
In 2021, the EEOC launched an agency-wide initiative to ensure that AI, when used for employment-related decisions, complies with anti-discrimination laws. The first significant step came in May 2022, when the agency published guidance related to the Americans with Disabilities Act. That guidance primarily addresses the need for reasonable accommodations for applicants and employees with medical conditions or disabilities that might affect their ability to take, or perform well on, a test.
The EEOC recently issued additional guidance, under Title VII of the Civil Rights Act, that focuses on whether the use of AI could create an “adverse impact” on a protected group, extending the agency's guidance beyond disability-related concerns. According to the EEOC, employers bear responsibility for the use of AI tools at work even if those tools are designed or administered by an external entity, such as a software vendor. Employers are required to conduct self-audits to ensure these tools are not producing an adverse impact; if they are, employers must consider ways to mitigate it.
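As an illustration of what such a self-audit can look like in practice, the EEOC's guidance discusses the “four-fifths rule,” a general rule of thumb under which a selection rate for one group that is less than 80% of the rate for the most selected group may indicate adverse impact. The sketch below shows a minimal version of that calculation; the function names and the audit data are our own illustrative assumptions, not part of any EEOC tool.

```python
# Minimal sketch of a four-fifths-rule check, a common rule of thumb
# for screening a selection procedure for adverse impact.
# Names and figures here are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    if applicants == 0:
        raise ValueError("group has no applicants")
    return selected / applicants

# Hypothetical audit data: applicants and selections per demographic group.
groups = {
    "Group A": {"applicants": 200, "selected": 60},  # selection rate 0.30
    "Group B": {"applicants": 150, "selected": 30},  # selection rate 0.20
}

rates = {
    name: selection_rate(data["selected"], data["applicants"])
    for name, data in groups.items()
}
highest = max(rates.values())

for name, rate in rates.items():
    # Impact ratio: this group's selection rate relative to the highest rate.
    ratio = rate / highest
    flag = "review for adverse impact" if ratio < 0.80 else "within four-fifths threshold"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In this hypothetical, Group B's impact ratio is 0.67, below the four-fifths threshold, so its results would be flagged for closer review. The EEOC itself cautions that the four-fifths rule is only a rule of thumb: smaller differences in selection rates can still be unlawful, and a flagged result is a prompt for further analysis and mitigation, not a legal conclusion.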
Employers incorporating AI tools into their practices are accountable for the results and must ensure that their use aligns with both the EEOC guidance and emerging state and local laws nationwide. Those using AI tools to assist with employment-related decisions should consult employment counsel to ensure full compliance.
Contacts
- Jennifer Merrigan Fay, Partner (/en/people/f/fay-jennifer-merrigan)
- Drew Schaffer, Associate (/en/people/s/schaffer-drew)