Generative AI is a hot topic of conversation in every industry, including law. Indeed, industry analysts and commentators have argued that generative AI will soon be able to do much of the work that is typically done by junior attorneys.
To better understand the implications of AI for legal professionals, we tested a well-known GenAI model to see if it could draft client alerts, nondisclosure agreements (NDAs), and briefs — tasks that are often done by junior attorneys.
We found that GenAI models can be helpful tools, but their outputs often require significant additional work from a lawyer who can put facts into context, analyze nuances, and finesse language.
Can a GenAI model write a client alert?
We asked the model to draft a client-ready article summarizing the features and tax requirements for qualified small business stock (QSBS). To its credit, the model provided an accurate, clear, and well-structured summary of QSBS.
However, its analysis was too basic to be helpful to a typical client seeking a thorough examination of the topic. Although we refined our prompts in an attempt to improve the analysis, the model did not produce anything we would consider “client-ready.” Requests on other tax topics generated comparable results.
Takeaway: Experience and access to quality analysis make the difference. At first glance, the alert on QSBS was easy to read and factual. It was only due to prior experience that we could identify where the model overemphasized or glossed over issues in its analysis. Moreover, the model was limited in its ability to access important analysis that would inform its outputs. The Internal Revenue Code and IRS regulations are available for free online, but most tax analysis is available only on legal databases that require subscriptions. It remains to be seen whether the model’s analysis of tax law would improve with access to proprietary databases.
Can a GenAI model write a mutual NDA?
We asked the model to draft a single-tier, two-way mutual NDA. At first glance, the results were impressive: the model produced a well-organized and thorough NDA that could readily be adapted to a specific client situation. That may not be very surprising, however. NDA language within commercial agreements is standardized, and we suspect the output was modeled on widely available NDA examples online.
We then asked the model to prepare a two-tiered NDA with separate durations and treatment requirements for different classes of confidential information. The request was based on a hypothetical scenario involving a fictitious tech startup that needed to protect some of its IP while disclosing less sensitive information to potential investors.
Although the model could produce a two-tiered NDA, the result was not as polished as the single-tiered version, and it would have required further development by someone familiar with the client’s specific situation and needs.
Takeaway: Context is critical. GenAI models can generate standard NDA language, but they are not able to prepare solutions that depend on a deeper understanding of more complex client situations and needs.
Can a GenAI model prepare a litigation brief?
We asked the model to draft a legal standards section for a litigation brief. While the generative AI's output was factually accurate[1] and cited relevant legal precedent, the draft lacked the language of advocacy essential to litigation briefs. When we asked the model to make the language more compelling, the writing got punchier but not more persuasive.
The model was also wordy, often generating outputs that were twice as long as what an attorney could produce. A human would have to edit them to meet the word-count limits prescribed for legal standards sections of trial briefs.
Takeaway: To advocate is human. AI models can often generate a section on legal standards that reflects solid understanding of the law, but they are less able to use that understanding to craft language that advocates effectively for clients.
* * *
AI can be used in a range of legal applications, but it still requires the expertise and oversight of a lawyer to ensure outputs are correct, appropriate, and relevant to client needs. We can envision a future in which AI plays a strong supporting role in helping junior lawyers work more efficiently, but for now, their jobs appear to be safe.
[1] Accuracy is a significant concern when using generative AI, as is clear from a recent case in which attorneys faced sanctions for including false information in a legal brief that was drafted with the help of generative AI (see Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 BL 213626 (S.D.N.Y. June 22, 2023)). Any lawyer using AI to draft briefs should thoroughly check all facts and assertions to ensure accuracy.