
AI: Developing a strategy and approach

Every day we open our news feeds and there is yet another article about Artificial Intelligence (AI).

One article claims that the proliferation of AI will end democracy. Another warns that AI will take over all jobs. Yet another covers an Executive Order intended to create AI safeguards.

How can teams work through all this noise to create their company’s AI strategy? While the concept of artificial intelligence has been around for a long time, the proliferation of technologies like ChatGPT is now putting AI capabilities directly in our hands. Let’s talk about how companies can create a strategy to advance AI.

Education and understanding of technology

Artificial intelligence comprises many capabilities. When we look at AI, we may see terminology such as Machine Learning and Deep Learning; deep learning is a subset of machine learning that relies on layered neural networks. It’s important to learn about these technologies and understand how they can support your business goals. We also hear a lot of buzz around Large Language Models (LLMs). LLMs are a type of artificial intelligence trained on massive amounts of data to understand content and generate output. You might also hear about Natural Language Processing (NLP), which, simply put, allows computer programs to understand human language, both written and spoken. There are even more capabilities to consider.

As these technologies begin to deliver more value for companies, it’s important to note that new skill sets will be needed. Traditional IT and data science roles may blend together, with additional skills required on top. As a first step in understanding these technologies, recognize the multiple capabilities and then begin to determine how they may apply to overall business goals and strategies.

Applying business use cases

Once a baseline education and understanding of AI capabilities exists, companies can begin to develop use cases. Workshops that bring together thought leaders can identify which AI capabilities will help meet business goals. The use of AI can lead to productivity gains: routine tasks are automated, freeing up team members’ time for more critical work and accelerating processes. AI can also benefit R&D and new product development, the “innovation” side of what AI often promises to bring.

Let’s consider a couple of use cases. A talent organization may look to AI to streamline onboarding, using generative AI to create onboarding materials. Engineering and design teams can apply AI and machine learning to failure rate analysis. A contract and risk management team may feed past favorable contracts into a large language model and then use it to understand and redline risks in new contracts. The possibilities appear endless.
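To make the contract-review idea concrete, here is a deliberately simplified sketch of what an automated first-pass review might do. The risk terms, sample clauses and function name below are hypothetical, and the matching is plain keyword lookup purely for illustration; a real redlining workflow would rely on a large language model informed by past favorable contracts, not keywords.

```python
# Toy first-pass contract review: flag clauses containing known risk terms.
# Everything here (terms, notes, sample text) is illustrative only.

RISK_TERMS = {
    "unlimited liability": "Liability should be capped.",
    "auto-renew": "Flag automatic renewal for review.",
    "sole discretion": "One-sided discretion clauses carry risk.",
}

def flag_risky_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Return (clause, note) pairs for clauses containing a known risk term."""
    findings = []
    for clause in contract_text.split("."):
        clause_lower = clause.lower()
        for term, note in RISK_TERMS.items():
            if term in clause_lower:
                findings.append((clause.strip(), note))
    return findings

sample = ("Vendor may terminate at its sole discretion. "
          "This agreement will auto-renew annually.")
for clause, note in flag_risky_clauses(sample):
    print(f"REVIEW: {clause} -- {note}")
```

Even a toy like this shows the shape of the workflow: ingest text, surface the risky passages, and hand a shorter review list to the legal team, which is exactly where an LLM adds value over keyword rules.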

Determining your AI approach

As you start or continue your journey, a common question is whether you should buy AI technology or build your own. The answer will most likely be both, depending on your business goals and the outcomes you desire. AI is being integrated into many popular platforms such as Workday, Salesforce, Google Workspace and Microsoft 365. Systems such as ChatGPT are available for purchase as well. Additionally, you can take a build-your-own approach. This will likely involve a cloud provider, and it is where you may choose to build and tune your own large language model. Modeling and forecasting are popular areas for leveraging your own LLM.

Running multiple small pilots will benefit some companies as they look to enter this space. Pilots that meet the desired outcomes or ROI criteria can then continue. An iterative approach that allows for learning, experimenting, proving out the use case and then scaling will help keep the focus.

Governance and Responsible AI

Developing a cross-functional governance team involving enterprise risk management, legal, line-of-business representation, information technology and others can drive governance. A concept called Responsible AI is critical to the success of AI, not only within a company but across the industry as a whole. Risks include a lack of transparency, inaccuracy, bias, and intellectual property and copyright concerns; these need to be addressed to ensure the technology works as intended and for the right purposes. Governance teams can also help with prioritization, investment, return on investment, organizational change management and championing the technology.

Conclusion

Organizations can benefit from the appropriate use of artificial intelligence by cutting through the noise and developing a detailed plan and approach. Gain buy-in from senior stakeholders on your approach and on how the technology can support your business goals, all while ensuring responsible AI practices are followed.