
Lawyers Need to Begin Formalizing AI Training

Bob Lucic

As lawyers integrate artificial intelligence (AI) platforms into their practices, we need to begin formalizing training for both new and senior lawyers on the appropriate use of this rapidly developing technology. As AI tools increasingly shape legal research, contract analysis, litigation strategy, and client service, lawyers must be equipped not only to use these technologies competently but also to understand their limitations, risks, and ethical implications. Effective training requires a balanced approach that integrates technical literacy, legal judgment, professional responsibility, and lifelong learning.

The first step in training lawyers on AI is building foundational literacy. Most lawyers do not need to become programmers or data scientists, but they must understand how AI systems work at a conceptual level. Training should explain core ideas such as machine learning, natural language processing, predictive analytics, and generative models in clear, non-technical language. Lawyers should understand how AI tools are trained, what data they rely on, and why outputs are probabilistic rather than authoritative. This foundational knowledge helps lawyers avoid treating AI as a “black box” and reduces the risk of blind reliance on automated outputs.

Once foundational literacy is established, training should focus on practical use cases relevant to legal practice. Lawyers learn best when AI is presented as a tool that supports their daily work rather than as an abstract technology. Training programs should demonstrate how AI can assist with legal research, document review, due diligence, contract drafting, and litigation preparation. Hands-on workshops, simulations, and supervised exercises allow junior lawyers to experiment with AI tools while receiving guidance from more experienced practitioners. This practical exposure builds confidence and encourages thoughtful adoption rather than resistance or overuse.

Equally important is teaching lawyers how to critically evaluate AI outputs. AI systems can produce results that are persuasive yet inaccurate, incomplete, or biased. Training must emphasize verification, cross-checking, and professional judgment. Lawyers must learn to ask key questions: What assumptions underlie this output? What data might be missing? How reliable is this result for the specific legal context? By reinforcing that responsibility always rests with the lawyer—not the tool—training programs preserve the core professional value of independent legal judgment.

Ethics and professional responsibility should be central pillars of AI training. Lawyers must understand how existing ethical duties apply to AI use, including duties of competence, confidentiality, supervision, and candor. Training should address risks such as inadvertent disclosure of client information, unauthorized practice of law through automated systems, and overreliance on unverified outputs. Real-world case studies involving AI-related errors or misconduct can be particularly effective in illustrating these risks. By framing AI ethics as an extension of traditional professional obligations, training helps lawyers integrate technology responsibly into their practice.

Another critical aspect of training is data awareness and bias recognition. AI systems can reflect and amplify biases present in their training data. Lawyers must be taught to recognize how bias can affect research results, predictive analytics, and decision-support tools. Training should encourage skepticism toward claims of neutrality or objectivity and emphasize the lawyer’s role in identifying and mitigating unfair or discriminatory outcomes. This awareness is especially important in areas such as employment law, criminal justice, and regulatory enforcement, where biased outputs can have serious real-world consequences.

Mentorship and institutional culture also play a key role in effective AI training. Young lawyers learn not only from formal instruction but from observing how senior lawyers use technology. Law firms and legal organizations should encourage partners and supervisors to model responsible AI use, discuss their decision-making processes, and openly address uncertainties or limitations. Creating a culture where questions about AI are welcomed and mistakes are treated as learning opportunities fosters healthy experimentation and continuous improvement.

Training should also emphasize adaptability and continuous learning. AI tools evolve rapidly, and what is cutting-edge today may be obsolete tomorrow. Rather than focusing solely on specific platforms, training programs should teach lawyers how to evaluate new tools, stay informed about technological developments, and update their skills over time. Encouraging participation in continuing legal education, interdisciplinary collaboration, and self-directed learning helps ensure long-term competence in an evolving technological landscape.

Finally, training lawyers on AI should reinforce the human dimensions of legal practice. While AI can increase efficiency and enhance analysis, it cannot replace empathy, moral reasoning, strategic judgment, or the ability to build trust with clients. Training should highlight how AI can free lawyers from repetitive tasks and create more time for high-value human work, such as counseling clients, negotiating solutions, and exercising ethical leadership. By framing AI as a complement to, rather than a replacement for, legal expertise, training preserves the profession’s core identity.

In conclusion, training lawyers on artificial intelligence requires a comprehensive, balanced approach that combines technical understanding, practical application, ethical awareness, and professional judgment. By investing in thoughtful, well-structured training programs, legal institutions can prepare both new and senior lawyers to use AI responsibly and effectively, ensuring that technological innovation strengthens rather than undermines the rule of law and the integrity of the legal profession.

Robert R. Lucic is Chair of the New Hampshire Bar Association’s Special Committee on Artificial Intelligence, a shareholder at Sheehan Phinney, and President-Elect of the New Hampshire Bar Association.