With technology evolving so quickly, powered by the rapid development of generative artificial intelligence (AI) tools, keeping pace with change becomes all the more critical. For lawyers, the ethical requirement of maintaining technology competence plays a large part in that endeavor.
The duty of technology competence is relatively broad, and the obligations required by this ethical rule can sometimes be unclear, especially when applied to emerging technologies like AI. Rule 1.1 states that a “lawyer should provide competent representation to a client.” The comments to this rule clarify that to “maintain the requisite knowledge and skill, a lawyer should … keep abreast of the benefits and risks associated with technology the lawyer uses to provide services to clients or to store or transmit confidential information.”
With the proliferation of AI, this duty has become all the more relevant, especially as trusted legal software companies begin to incorporate this technology into the platforms that legal professionals use daily in their firms. Lawyers seeking to take advantage of the significant workflow efficiencies that AI offers must ensure that they’re doing so ethically.
That’s easier said than done. In today’s fast-paced environment, what is required to meet that duty? Does it simply require that you understand the concept of AI? Do you have to understand how AI tools work? Is there a continuing obligation to track changes in AI as it advances? If you have no plans to use it, can you ignore it and avoid learning about it?
Some states have addressed these issues. In New York, for example, there are now two sets of ethics guidance available: the New York State Bar’s April 2024 Report and Recommendations from the Taskforce on Artificial Intelligence and, more recently, Formal Opinion 2024-5, issued by the New York City Bar Association.
The New York State Bar’s guidance on AI is overarching and general, particularly regarding technology competence. As the “AI and Generative AI Guidelines” provided in the Report explain, lawyers “have a duty to understand the benefits, risks and ethical implications associated with the Tools, including their use for communication, advertising, research, legal writing and investigation.”
While instructive, the advice is fairly general, and intentionally so. As the committee explained, AI is no different than the technology that preceded it, and thus, “[m]any of the risks posed by AI are more sophisticated versions of problems that already exist and are already addressed by court rules, professional conduct rules and other law and regulations.”
Lawyers seeking more concrete guidance on technology competence when adopting AI need look no further than the New York City Bar’s AI opinion. In it, the Ethics Committee offers significantly more granular insight into those obligations.
First, lawyers must understand that current generative AI tools may include outdated information “that is false, inaccurate, or biased.” The Committee advises that lawyers understand not only what AI is but also how it works.
Before choosing a tool, the Committee recommends several courses of action. First, you must “understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Additionally, you may want to learn about AI by “acquiring skills through a continuing legal education course.” Finally, consider consulting with IT professionals or cybersecurity experts.
The committee emphasized the importance of carefully reviewing all responses for accuracy, explaining that generative AI outputs “may be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias.” The duty of competence requires that lawyers ensure the original input is correct and that they analyze the corresponding response “to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.”
The committee further clarified that you cannot delegate your professional judgment to AI and that you “should take steps to avoid overreliance on Generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing.” This means that lawyers should supplement all AI output “with human-performed research and supplement any Generative AI-generated argument with critical, human-performed analysis and review of authorities.”
If you plan to dive into generative AI, both sets of guidance should provide a solid roadmap to help you navigate your technology competence duties. Understanding how AI tools function, along with their limitations, is essential when using this technology. By staying informed and applying critical judgment to the results, you can ethically leverage AI’s many benefits to provide your clients with the most effective, efficient representation possible.
Nicole Black is a Rochester, New York, attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, LawPay, CASEpeer, and Docketwise, AffiniPay companies. She is the nationally recognized author of “Cloud Computing for Lawyers” (2012) and co-author of “Social Media for Lawyers: The Next Frontier” (2010), both published by the American Bar Association.