For some time now, in response to the California Online Privacy Protection Act (CalOPPA), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and similar statutes and regulations from other jurisdictions, virtually any company with a web presence has provided a public-facing privacy policy on its website explaining what it does with each user’s information, how it complies with the relevant laws, what rights users have to access their information, and so on.
In 2018, those policies gained more prominence as the European Union’s General Data Protection Regulation (GDPR) went into effect and thousands of companies informed their contact lists that their privacy policies had been updated.
Although artificial intelligence is not nearly as well regulated as data privacy, there are some requirements, expectations and norms that are emerging from a combination of expert opinion, pending legislation and the limited black-letter law.
I recommend that clients get ahead of public opinion and legal requirements by making considered public disclosures about how they use AI.
This article provides guidance to attorneys as they prepare public-facing policies disclosing their clients’ AI usage. In the sections below, I discuss the following topics:
1) Disclosure and explanation of any AI that interacts with customers;
2) Explanation of any automated processing, including profiling, which produces legal effects concerning customers or similarly significantly affects them;
3) Disclosure of the categories of data the AI relies on; and
4) Explanation of how the AI relies on those categories to reach its decisions.
Depending on the client’s practices and branding, you may want to address any or all of them in an AI policy.
AI that interacts with customers
Concern regarding unidentified and misleading chatbots and other forms of AI that interact with the public has been widespread since the 2016 presidential election.
Last year, California passed legislation (the “California Bot Bill”) requiring businesses either to refrain from using autonomous online chatbots to incentivize commercial activity or to disclose the bots’ existence to users.
Any customer service AI should self-identify to the customer, but I also advise clients to get out in front of the issue by including in their AI policies a statement addressing customer service chatbots or other autonomous communications technology, as relevant. The statement should reflect the company’s actual use of AI, the legal requirement it hopes to satisfy, and its concerns about its customers’ experience.
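For clients that want a concrete picture of what self-identification can look like in an actual chat flow, the following is a minimal sketch assuming a hypothetical session-based bot; the disclosure wording, class name and function names are illustrative only and are not language prescribed by the California Bot Bill or any other law.

```python
# Minimal sketch of a customer service chatbot that self-identifies at the
# start of every session. The disclosure text and all names here are
# illustrative assumptions, not mandated compliance language.

BOT_DISCLOSURE = (
    "Hi! I'm an automated assistant (a bot), not a human agent. "
    "You can ask to speak with a person at any time."
)


class CustomerServiceBot:
    def __init__(self):
        # Track which sessions have already received the disclosure.
        self.disclosed_sessions = set()

    def reply(self, session_id: str, message: str) -> str:
        """Answer a customer message, prepending the bot disclosure the
        first time this session contacts the bot."""
        answer = self._generate_answer(message)
        if session_id not in self.disclosed_sessions:
            self.disclosed_sessions.add(session_id)
            return f"{BOT_DISCLOSURE}\n\n{answer}"
        return answer

    def _generate_answer(self, message: str) -> str:
        # Placeholder for the client's actual response logic.
        return "Thanks for your question! Let me look into that."


if __name__ == "__main__":
    bot = CustomerServiceBot()
    print(bot.reply("session-1", "Where is my order?"))        # includes disclosure
    print(bot.reply("session-1", "Can I change the address?"))  # no repeat
```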
This advice is intended to anticipate coming regulation and to demonstrate transparency to customers on a topic that can be unpopular and controversial.
Automated processing
The GDPR’s language addressing “automated processing” suggests that the decisions made by AI should be stated clearly for customers and other members of the public, or at least as clearly as the organization can manage without compromising trade secrets, patents and confidential business practices.
Article 22 of the GDPR states that each “data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,” although there are carve-outs for certain situations.
If the automated processing relies on special categories of personal data — which include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, data concerning health, and data concerning a natural person’s sex life or sexual orientation — the controller is obligated to use “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.”
Given that non-compliance with these requirements carries potential administrative fines of up to €20 million or 4 percent of the total worldwide annual turnover of the preceding financial year, whichever is higher, organizations are incentivized to affirmatively show that they are using AI and autonomous technology in a way that complies with this requirement.
An AI policy can be an appropriate platform to do that.
Categories of data relied on
Article 15 of the GDPR grants data subjects the right to obtain from controllers the categories of personal data being processed, a requirement that applies to all processing of personal data, not just automated decision-making.
American laws like CalOPPA and the California Consumer Privacy Act of 2018 (CCPA) similarly address disclosing the types of data organizations have collected.
The central issue in including this information in a client’s AI policy is transparency. Most companies that collect personal data or personally identifiable information on the internet maintain a privacy policy that discloses the categories of data or information collected, consistent with the GDPR and CalOPPA. Why restate that information, or make a new disclosure, in an AI policy?
The biggest reason is to stay ahead of the trend of required and preferred disclosures. The GDPR and the CCPA are the most current and widely applicable privacy laws your clients are likely to encounter in the near future. They not only require the disclosure of the categories of data an organization relies on; upon an individual’s request, they also require the organization to release a copy of the personal data and personal information it has processed about that person and to disclose the purposes for which it is using that information.
PIPEDA contains similar requirements. The legal trend is toward requiring greater disclosure of the data you rely on: more transparency, not less.
As that trend continues, your clients’ AI practices will come under scrutiny. I recommend that clients get ahead of public opinion and legal requirements by making considered disclosures in their AI policies about the data their AI uses.
First, the disclosure should address Article 22(4) of the GDPR, which prohibits companies from using special categories of personal data (discussed above) in automated decision-making unless the data subject explicitly consents or another narrow exception applies.
An AI policy should either affirm that the organization’s AI does not use special categories of personal data or explain which special categories the organization’s AI uses, how it uses them, and how it obtains consent from data subjects.
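To make this first point concrete in engineering terms, here is a minimal sketch of one way a client’s technical team might enforce a policy affirmation that its AI does not use special categories of personal data in automated decision-making; the field names and the decide() function are hypothetical, and this is a sketch, not a compliance standard.

```python
# Illustrative sketch only: filtering GDPR Article 9 "special categories"
# out of a record before it reaches an automated decision model. All field
# names and the model logic are hypothetical assumptions.

SPECIAL_CATEGORY_FIELDS = {
    "racial_or_ethnic_origin",
    "political_opinions",
    "religious_or_philosophical_beliefs",
    "trade_union_membership",
    "genetic_data",
    "biometric_data",
    "health_data",
    "sex_life_or_sexual_orientation",
}


def strip_special_categories(record: dict) -> dict:
    """Remove special-category fields before the record reaches the model."""
    return {k: v for k, v in record.items() if k not in SPECIAL_CATEGORY_FIELDS}


def decide(record: dict) -> bool:
    # Hypothetical stand-in for the client's actual decision model.
    cleaned = strip_special_categories(record)
    return cleaned.get("account_age_days", 0) > 30


applicant = {
    "account_age_days": 90,
    "health_data": "...",  # never reaches the model
}
print(decide(applicant))  # True; decision based only on permitted fields
```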
Second, disclosing the categories of data an organization’s AI processes is an opportunity to demonstrate to consumers that the client is aware of privacy law and foresighted enough to recognize the trend toward more disclosure in AI.
In doing so, the organization should explain its understanding of the current requirements in the GDPR, CalOPPA and the CCPA and how those laws inform its AI’s handling of personal data. A properly drafted AI policy can provide useful information to consumers without disclosing sensitive information or other trade secrets.
In short, in addition to demonstrating compliance with the GDPR, a client’s AI policy is an opportunity for the client to market itself as a leader in AI thought and policy without disclosing any of the intellectual property that has made it a leader in AI thought and policy.
How AI reaches its decisions
Under Article 13(2)(f) of the GDPR, when an organization collects information from a data subject, it must disclose the existence of automated decision-making, including profiling, and provide “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”
The right to “meaningful information” about an AI’s decision-making is frequently referred to as the “right to an explanation.”
In truth, it is not clear what kind of “meaningful information” about an AI’s decision would satisfy the GDPR. For example, does Amazon’s Alexa have to be able to explain why it picked a song for you when asked? Do the “Why am I seeing this ad?” boxes that Google, Facebook and Amazon use qualify?
Based on the wording of Article 13(2)(f), the meaningful information could take another form altogether, as that article suggests that the information could be conveyed at the time the company collects the personal data. This contrasts with the idea that the AI needs to explain its decisions in real time in response to a question.
You can explain your interpretation of Article 13(2)(f) in a client’s AI policy. In doing so, you should state that the organization is aware of the requirement to provide meaningful information about the logic involved in automated decision-making; disclose the categories of data the AI relies on and general information about how it relies on them to make decisions; and note that, beyond general explanations like these, decision-specific, real-time explanations are not currently possible. This provides information that consumers will appreciate and demonstrates compliance with the GDPR.
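As a concrete illustration of the kind of general, advance disclosure described above, the following minimal sketch renders a plain-language summary of the categories of data a hypothetical scoring model relies on and their relative influence; the model, its weights and the wording are assumptions for illustration, not the GDPR’s standard for “meaningful information.”

```python
# Illustrative sketch only: generating a general, advance disclosure from a
# simple scoring model. The model, weights and wording are hypothetical.

# Hypothetical linear scoring model: relative weight per category of data.
MODEL_WEIGHTS = {
    "payment_history": 0.5,
    "length_of_customer_relationship": 0.3,
    "purchase_frequency": 0.2,
}


def describe_logic(weights: dict) -> str:
    """Render a plain-language summary of the categories of data the model
    relies on and their relative influence, suitable for an AI policy."""
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    lines = ["Our automated decisions consider the following categories of data:"]
    for name, weight in ranked:
        lines.append(f"  - {name.replace('_', ' ')} (relative weight {weight:.0%})")
    return "\n".join(lines)


print(describe_logic(MODEL_WEIGHTS))
```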
Conclusion
Depending on the industry and business practices of the client, some of these considerations may not be relevant, or other topics may need to be addressed. You should consult with the client’s CTO to determine how best to draft its AI policy.
Although the legal requirements governing AI usage are fairly minimal now, the trend toward more disclosure is clear, and you can put your clients in a stronger position going forward by preparing a policy now.
John F. Weaver is an attorney with McLane Middleton, based in Woburn, Massachusetts. He can be contacted at [email protected].