By Mike Hintze
In recent weeks, ChatGPT has been the subject of much discussion. A wide range of issues and concerns have been raised, and a number of those relate to privacy and data protection. Here are a few of my thoughts on what privacy and data protection professionals should consider when reviewing uses of ChatGPT (and similar generative AI services).
I think it's useful to frame the ChatGPT issues in terms of both the input (what data you might enter into the service) and the output (what text the service generates).
Data Input
On the input side, the privacy issue is pretty simple. Be extremely cautious about inputting any personal data, and, in particular, any sensitive personal data, into ChatGPT. It may be safest to just adopt a bright-line rule against it. And while not a privacy issue, the same caution should be applied to any other confidential or proprietary information.
Why the reluctance? Going through the process of signing up to use ChatGPT and reviewing the applicable data terms (if you can find them) does not inspire confidence.
When you try to sign up for ChatGPT, you start on this page. There is no Terms of Use or Privacy link anywhere on the page – just “Log in” and “Sign up” buttons. When you click Sign up and go through the process, there is likewise no Terms or Privacy link presented anywhere in the flow. Once the account is created and you sign in, there are still no such links in the footer or anywhere else I could find. I had to search the FAQs to find an article that had the links.
Once you find them, you learn that ChatGPT is subject to the OpenAI Terms of Use and Privacy Policy. OpenAI is the organization that provides the ChatGPT service. (Note that there are links to these Terms and Privacy Policy in the footer of the OpenAI website, but, again, they appear nowhere in the ChatGPT user interface.) Upon reading them, the key takeaway is that they are pretty lightweight with respect to data use and protection. As one indicator that insufficient attention is being paid to privacy, the Privacy Policy addresses older CCPA requirements but does not seem to have been updated to address the CPRA amendments to the CCPA. And for business users, there is little in the Terms that indicates OpenAI has a mature and reliable approach to acting as a data processor.
OpenAI does supposedly have a “Data Processing Addendum.” But OpenAI does not make it readily available. The one short paragraph in the Terms of Use that addresses processing personal data states:
“If you are governed by the GDPR or CCPA and will be using OpenAI for the processing of “personal data” as defined in the GDPR or “Personal Information,” please contact support@openai.com to execute our Data Processing Addendum.”
I sent an email to that address seeking to review and execute the DPA, and over two weeks later I still have not received the DPA or even an acknowledgement of the request. The fact that the mystery DPA apparently covers only personal data subject to the GDPR and CCPA may make it insufficient for organizations seeking strong (and enforceable) assurances of data protection for all the personal data they wish to input into ChatGPT.
If the DPA is insufficient with respect to its CCPA “service provider” provisions, the input of any personal data into ChatGPT could be considered a “sale” of personal information under the CCPA. Given that the DPA purportedly covers only data subject to the CCPA and GDPR, there is possibly an even greater risk of the data transfer being considered a “sale” under other U.S. state privacy laws.
Finally, before proceeding with inputting any personal data into ChatGPT, the DPA should be carefully scrutinized for the assurances it makes regarding data use, sharing with third parties, data security, use of subprocessors, etc.
Getting back to the Terms, there is a provision stating that content input into an OpenAI service may be used by OpenAI to improve its services. It goes on to say that organizations can opt-out from having content used for product improvement by contacting support@openai.com. If using ChatGPT to process personal data, organizations should exercise that opt-out option. However, given my experience of not getting a response when emailing that support address, and the other indicators noted above, I have a low level of confidence that the opt-out request would be effective and that OpenAI will otherwise use and protect the personal data appropriately.
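For organizations that do proceed, one practical complement to the bright-line rule suggested above is a screening step that blocks prompts containing obvious personal data before they are ever sent to the service. The following is a minimal, purely illustrative sketch: submit_prompt is a hypothetical wrapper (not part of any OpenAI API), and the handful of regular expressions shown here catch only the most obvious identifiers, so this is a backstop to policy and training, not a substitute for them.

```python
import re

# Illustrative patterns for obvious personal data. A real deployment would
# need a much broader (and jurisdiction-aware) set of checks.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\(?\b\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def submit_prompt(text: str) -> None:
    """Hypothetical pre-submission gate enforcing a bright-line rule."""
    hits = screen_prompt(text)
    if hits:
        # Block rather than redact: a hard stop forces a human to decide
        # whether the prompt should be rewritten.
        raise ValueError(f"Prompt blocked; possible personal data detected: {hits}")
    # ...otherwise, hand the text off to the generative AI service here.

if __name__ == "__main__":
    print(screen_prompt("Draft a reply to jane.doe@example.com"))  # ['email']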
Data Output
On the output side, my biggest privacy-related concern is FTC Act Section 5 liability. Right now, the largest use case for ChatGPT seems to be as a quick and inexpensive way to generate content. Marketers will love this. As will other teams within organizations that are tasked with producing lots of content (customer support, product documentation, etc.). But if the output contains any inaccuracies (which it often does) and the organization publishes that content, it could give rise to a deception claim under Section 5. Incorrect or misleading claims about data practices have been the impetus for many FTC investigations and enforcement actions over the years.
Thus, you should verify that every material claim in the output is accurate before publishing it. This will require training those who use ChatGPT to generate content.
Guarding against errors may be particularly challenging because ChatGPT is very good at producing what sounds like intelligent prose (perfect grammar, clear writing style, etc.). As a result, people may tend to conflate those attributes with reliability. It's a normal human reaction / bias; we tend to view well-written prose as more trustworthy. But in fact, the output of ChatGPT should be viewed as no more reliable than the first results of a Google or Bing search. If anything, it may be more problematic: at least with a traditional web search, you can click through and investigate the sources to help determine reliability. With ChatGPT, you don't know what sources the AI drew from, so it may be more difficult to verify accuracy.
You should also carefully review the output of ChatGPT to be sure it does not contain any other problematic information, including personal information. There has been at least one report of ChatGPT providing personal information in its output.
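Simple pattern matching will not catch names and other free-text identifiers in generated output, so human review remains essential. As one illustrative aid to that review (not a recommendation of any particular tool), an off-the-shelf named-entity recognizer can flag passages mentioning people, organizations, or places for a human to check before publication. The sketch below assumes spaCy and its small English model have been installed; treat it as a starting point, not a complete scan.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def flag_personal_references(generated_text: str) -> list[str]:
    """Flag entities in AI-generated text that may identify a person.

    Returns spans labeled PERSON (plus ORG and GPE, which often warrant
    review too) so a human can check them before the content is published.
    """
    doc = nlp(generated_text)
    return [f"{ent.label_}: {ent.text}" for ent in doc.ents
            if ent.label_ in {"PERSON", "ORG", "GPE"}]

if __name__ == "__main__":
    sample = "According to Jane Doe of Acme Corp in Seattle, the product launched in 2021."
    for hit in flag_personal_references(sample):
        print(hit)  # e.g., PERSON: Jane Doe / ORG: Acme Corp / GPE: Seattle
```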
Beyond accuracy and privacy concerns, as with many applications of AI, there are of course concerns about biased output. If the data from which the AI learns and draws reflects historical biases, the output may be similarly biased. Organizations should be aware of this, and where the output may be used in contexts where bias could lead to serious discrimination harms and/or legal claims (employment, housing, education, etc.), it really needs to be top of mind.
* * *
In sum, in these early days, use ChatGPT and other generative AI tools with great caution. The technology is genuinely amazing and holds enormous promise for any number of use cases. But these technologies also raise important privacy and data protection concerns that must be addressed to avoid legal, reputational, and other risks.
Finally, it is worth noting that the above issues are related to ChatGPT, and by extension to text-to-text generative AI generally. With other forms of generative AI, such as those producing image, voice, or video output, there are a number of other privacy, security, and AI ethics risks that need to be considered.
Mike Hintze is a partner at Hintze Law PLLC and a recognized leader in privacy and data protection law, policy, and strategy. With more than 25 years of experience in privacy and data protection, Mike emphasizes pragmatic and actionable advice that enables his clients to meet their objectives while complying with the law and managing risk.
Hintze Law PLLC is a Chambers-ranked, boutique privacy firm that provides counseling exclusively on global data protection. Its attorneys and privacy analysts support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy and data security.