
Hey Claude: Are Communications with an AI Platform Privileged?

The legal boundaries of generative artificial intelligence (“AI”) communications have been the subject of much discussion. Now, the United States District Court for the Southern District of New York has weighed in on generative AI legal risks, including the issue of whether communications with an AI provider can be protected by the attorney-client privilege and the work product doctrine. In United States v. Bradley Hepner, the District Court judge refused to recognize that either the attorney-client privilege or the work product doctrine protected the defendant’s AI communications from government inspection.

A Landmark Ruling on AI and Legal Privilege

AI platforms are themselves “third parties,” and as many have recognized, if an attorney inputs information into an AI program, an argument can be made that once-privileged information loses its protection. Further, as the US v. Hepner court recognized, users of popular publicly available AI platforms may have consented to those platforms’ retention, use, and sharing of user inputs and the platforms’ outputs. For this reason, the court found that the defendant’s communications with the AI platform Claude (operated by Anthropic) were not confidential and, therefore, not privileged. The court also pointed to the obvious facts that (a) an AI platform is not a licensed attorney, and (b) because Claude is not an attorney, the defendant was not communicating for the purpose of obtaining legal advice; on these grounds as well, the AI communications could not be protected by the attorney-client privilege.

The court also found that the AI communications did not merit protection under the work-product doctrine because they were not “prepared by or at the behest of counsel” and did not “reflect defense counsel’s strategy…” The court, however, seemingly left the door open to the possibility that AI communications could, under some circumstances, be considered attorney work product.

What This Means for Businesses and Individuals

The court’s memorandum serves as a stark warning that any information communicated to public AI platforms can potentially be discovered and used in court proceedings. The decision, however, also raises the question: could privilege ever attach to AI communications? The answer is potentially “yes.” Under the court’s reasoning in US v. Hepner, if a client communicates with AI at the behest of counsel, and the information provided reflects counsel’s legal strategy at the time it was used, such communications could be protected under the work-product doctrine.

Final Takeaway: AI Is a Tool—Not Your Lawyer

Certainly, there will be more decisions to come that will further define the legal boundaries of AI communications and AI attorney-client privilege. In the meantime, the US v. Hepner decision underscores the legal risks of using generative AI in sensitive contexts. While AI platforms like Claude can be powerful tools for research and analysis, they are not substitutes for licensed legal professionals, and communications with them are generally not afforded the protection of privilege. The ruling also highlights the need for users to carefully consider the privacy policies of AI platforms, as sharing information with third-party services can waive confidentiality and privilege protections. If you need legal advice, think twice before sharing sensitive or confidential information with an AI platform. Instead, skilled attorneys at firms like Levin Ginsburg should be your first step.