The National Bureau of Economic Research recently found that 25 percent of American workers use AI tools weekly, with over 10 percent using them daily. The uses are diverse: marketing teams generate social media content, engineers draft design documentation, finance departments analyze data, and HR teams screen resumes.
While organizations are seeing benefits, those benefits must be carefully balanced with risk management. One ever-present concern is ensuring an organization’s confidential information is handled with due care. Inputting sensitive information into AI tools without fully understanding the confidentiality implications may jeopardize obligations surrounding customer data; the confidential status of financial information, strategic plans or proprietary processes; or trade secret and privilege protections.
As lawyers counsel clients across sectors on technology adoption and risk management, they must understand how these tools handle confidential information and what safeguards are necessary to protect their clients’ most valuable assets.
Understanding third-party risks, information sensitivity
Third-party AI tools are, of course, services provided by external vendors. When employees upload documents, input data or use AI tools to polish presentations, they’re transmitting potentially sensitive information to external systems beyond their organization’s direct control. Any time information is shared with a third party, confidentiality concerns should be addressed.
AI tools are in some cases available for on-premises deployment, which can mitigate some of these concerns. But a fundamental challenge with most AI tools is that they operate as cloud-based services: input data travels across the internet (and thus through and past other third parties) to remote servers for processing.
Understanding what happens to that data — how securely it is transmitted and stored, who can access it while it is stored, and how the third party may reuse it beyond the specific use you requested — is important for risk assessment.
Different information may warrant different levels of concern. A marketing department feeding publicly available information to an AI tool to generate social media copy presents lower risk than an engineer inputting confidential details about an unreleased product to generate design documentation. And using a vendor’s confidential information to generate content may damage the business relationship.
Traditional information security frameworks can guide this analysis. Many organizations already classify information into categories — such as public, internal, confidential and restricted — with corresponding handling requirements. These same classifications may be used to govern AI tool usage.
Public information may be suitable for general-purpose AI tools, while restricted information may require specialized solutions with enhanced protections or may be inappropriate for AI processing altogether.
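As one hedged illustration, the sketch below shows how such a classification scheme might drive an automated gate — for example, inside a data-loss-prevention proxy sitting between employees and AI services. The classification levels, tool tiers and mapping here are hypothetical assumptions for the sketch, not drawn from any particular vendor or standard.

```python
from enum import Enum

# Hypothetical classification levels mirroring the categories above.
class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical tool tiers: the most sensitive class each tier may receive.
# A consumer chatbot handles only public data; an enterprise tier with
# contractual confidentiality protections may handle confidential data;
# restricted data stays out of third-party AI tools entirely.
MAX_CLASS_PER_TIER = {
    "consumer": Classification.PUBLIC,
    "enterprise": Classification.CONFIDENTIAL,
}

def may_submit(data_class: Classification, tool_tier: str) -> bool:
    """Return True if policy permits sending this class of data to the tier."""
    ceiling = MAX_CLASS_PER_TIER.get(tool_tier)
    return ceiling is not None and data_class.value <= ceiling.value

# Example: public marketing copy is fine for a consumer tool; an
# unreleased product design is not, and restricted data never goes out.
assert may_submit(Classification.PUBLIC, "consumer")
assert not may_submit(Classification.CONFIDENTIAL, "consumer")
assert may_submit(Classification.CONFIDENTIAL, "enterprise")
assert not may_submit(Classification.RESTRICTED, "enterprise")
```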
Evaluating vendor practices, contractual obligations
Before using any AI tool with sensitive information, organizations must understand the vendor’s data handling practices.
Security considerations include how information is protected in transit to the vendor’s servers and at rest once stored there, which requires attention to encryption standards and security certifications.
Data retention policies vary dramatically between vendors, with some promising immediate deletion after processing and others retaining data indefinitely. Access controls determine who at the vendor can access stored information and what oversight procedures govern employee access.
It may also be important to understand how the AI models are instantiated: whether your organization has its own private instance of a model, or whether your information is pooled with other customers’ usage.
Perhaps most critically, organizations must understand whether vendors will use input data to train or improve their AI models, or for other business purposes, as this may represent a significant risk. Information used in training may be reflected in outputs to other users. The risk of exact reproduction is often vanishingly small but becomes realistic for unique or niche information.
Careful review of terms of service and privacy policies is essential. Many AI tools offer multiple service tiers — free versus paid, or different paid levels — with varying privacy protections. Enterprise versions often provide enhanced confidentiality controls that consumer versions lack.
Organizations should subscribe only to service levels that provide appropriate protections for their intended use and should be certain to review the terms and policies for the specific version they use.
Organizations must also consider whether existing agreements restrict AI tool usage. Non-disclosure agreements increasingly include explicit prohibitions on sharing information with AI systems. Some vendors, clients or partners now include “AI clauses” that specifically forbid inputting their confidential information into third-party AI tools.
Before using AI tools with information received from others, review all applicable agreements to ensure compliance. Violating such restrictions could result in breach of contract claims, damaged relationships, or loss of access to valuable information.
Regulatory compliance, professional obligations
Specific legal doctrines can create additional risks when sharing information with third parties. Trade secret protection requires that “reasonable measures” be taken to preserve information secrecy. Sharing trade secrets with third-party AI vendors may undermine this protection, particularly if the vendor’s terms don’t provide adequate confidentiality safeguards.
However, a strong business justification combined with robust vendor confidentiality protections may support an argument that AI tool usage was “reasonable.” This analysis should be conducted before sharing trade secret information, not after a problem arises.
Similarly, attorney-client privilege can be waived by sharing privileged information with third parties. While some arguments exist for maintaining privilege when AI tools assist with legal work, the safest approach is obtaining explicit client consent before using AI tools with privileged information.
Legal and other professional obligations may govern AI tool usage. The Model Rules of Professional Conduct require lawyers to maintain client confidentiality and demonstrate technological competence. Recent ethics opinions emphasize that lawyers must understand AI tools’ data handling practices before using them with client information.
Other regulated industries face similar requirements. Health care organizations must consider HIPAA implications, financial institutions must address privacy regulations, and public companies must protect material nonpublic information. Other sectors may have additional regulations.
Moving forward responsibly
Given widespread employee AI usage and significant confidentiality risks, every organization should consider implementing clear AI usage policies.
These policies should define what types of information can be used with which AI tools, specify approved vendors and service levels for different information types, require review of vendor terms and contractual restrictions before tool usage, establish approval processes for new AI tool adoption, and provide training on confidentiality risks and proper usage.
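To make those elements concrete, here is one hedged sketch of how a policy’s core content might be captured in machine-readable form for an intake or approval workflow to consume. Every field name, vendor name and value below is an illustrative assumption, not a prescribed standard.

```python
# Illustrative (hypothetical) AI usage policy record covering the elements
# above: approved vendors and service levels per information type, required
# pre-use reviews, an approval workflow for new tools, and training.
AI_USAGE_POLICY = {
    "approved_tools": {
        # information type -> approved (vendor, service level) pairs
        "public": [("ExampleVendor", "consumer"), ("ExampleVendor", "enterprise")],
        "internal": [("ExampleVendor", "enterprise")],
        "confidential": [("ExampleVendor", "enterprise")],
        "restricted": [],  # no third-party AI processing permitted
    },
    "pre_use_review": [
        "vendor terms of service",
        "vendor privacy policy",
        "NDAs and other contractual restrictions",
    ],
    "new_tool_approval": ["security review", "legal review", "executive sign-off"],
    "training": {"required": True, "frequency": "annual"},
}
```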
Without such policies, organizations may face unpredictable risks as employees make individual decisions about AI tool usage with sensitive information.
AI tools offer transformative potential, but their adoption must be thoughtful and informed. Organizations that understand both the technology and their confidentiality obligations can leverage these tools while protecting sensitive information.
The key is implementing systematic approaches that match AI usage to information sensitivity, vendor capabilities, and legal requirements. This means developing clear policies, training employees, and regularly reassessing both tools and risks as the landscape continues evolving.
Organizations that master these practices may benefit from AI while maintaining the confidentiality protections their business success depends upon.
Andrew “A.J.” Tibbetts is a shareholder in the intellectual property and technology practice group in Greenberg Traurig’s Boston office. He is a former software engineer. He can be contacted at [email protected].