New year, same artificial intelligence: Setting AI goals in 2026

In 1984, famed director James Cameron “mainstreamed” concepts about artificial intelligence through the fictional technology known as Skynet and the cinematic release of “The Terminator.” With the help of acting icon Arnold Schwarzenegger, Cameron embedded in popular culture the concept of the “rise of the machines” and the duality of a technology that could simultaneously save and destroy humanity.

For the scientific community, these concepts were anything but new: Alan Turing, a British mathematician and technology pioneer, wrote what many consider the first AI paper, “Computing Machinery and Intelligence,” in 1950. Now jump forward to the present day, when federal and state lawmakers introduced more than 1,000 potential AI-related regulations last year alone.

So, despite the pop culture, the long scientific history, and the expanding engagement from regulators, why are so many organizations still struggling to outline their AI objectives? Well, much like our annual (and regularly failed) diet and fitness goals, the answer lies in learning lessons from the past to change habits and performance. To build an AI strategy, companies should draw on the processes that brought them success in cybersecurity, privacy, and other technology initiatives. And in the spirit of the new year, let’s walk through five ways your organization can use those proven principles to set AI governance goals for the coming year.

First, who is on your team of AI “terminators”? Call it a committee, a council, or anything in between: every organization should designate a talented group of leaders to drive this initiative. Whether it is a new set of team members or an existing governance entity, the effort and the people behind it should represent a diverse set of stakeholders that include, or have the support of, management.

Second, this team should quickly define the organization’s AI dictionary. That process should answer questions like: How do we define AI as it relates to our products and services? How do our industry peers and regulators define AI? And how do our third-party stakeholders and vendors define AI in our contracts with them?

Third, the organization should begin to outline the AI use cases that could benefit the business, along with the corresponding harms or risks each use case could create.

Fourth, organizations should begin to develop simple inventories of all AI systems and tools, along with the data sets used as inputs to those technologies.

And fifth, the organization should adopt a set of governance principles to provide a framework for oversight and risk reduction. These guidelines should drive the governance and responsible use of AI, including considerations like accuracy, branding standards, documentation, human partnership, intellectual property, privacy and security, training, and transparency.

In summary, with these governance goals in hand and a strategy now sketched out on the vision board, your organization can confidently prepare for a “rise of the regulators” and any AI-hungry stakeholders in 2026.

Josh Snavely is the co-leader of McAfee & Taft’s Cybersecurity and Privacy Group and devotes his practice to advising clients on business and technology strategy, compliance and risk assessment, crisis management, and incident response.