Over the course of 24 days, California’s Pacific Palisades burned to the ground after strong winds fanned an underground fire into a catastrophe. Almost a year later, a 29-year-old Florida man has been charged in connection with starting the fire. ChatGPT prompts assisted in this indictment, which is somewhat unsurprising given AI’s pervasive use in contemporary society. However, the assistance did not come in the form of an AI-generated response instructing authorities to arrest the man. Rather, it was the suspect’s own prompts that contributed to his arrest.
Indeed, authorities uncovered search queries such as “Are you at fault if a fire is lift [sic] because of your cigarettes?” as well as prompts asking the AI tool to produce a “dystopian painting” of a crowd fleeing a burning forest. Notably, the painting prompt included details that may be relevant to the context of the fire itself. Pertinent verbiage included: “In the middle [of the painting], hundreds of thousands of people in poverty are trying to get past a gigantic gate with a big dollar sign on it…On the other side of the gate and the entire wall is a conglomerate of the richest people…chilling, watching the world burn down…” While one might read into this some symbolism commenting on poverty in Los Angeles compared with the affluence of the Pacific Palisades, where properties were formerly valued at $3.4 million on average, the more concrete takeaway here is that ChatGPT prompts can be admitted as evidence.
So, how is that the case? Aren’t the things you do on your private devices or in the privacy of your home protected by the Constitution? Not always. By no means does this article endeavor to explain Constitutional principles in any depth beyond a surface-level review (we can leave that to the Justices), but the explanation can be fairly simple. The most relevant Constitutional provision here is the Fourth Amendment, which, in short, prohibits Government intrusion into a person’s reasonable expectation of privacy. These are two separate requirements, and each should be understood. First, the intrusion must be committed by the Government; private entities, such as OpenAI, are not subject to the Amendment’s restrictions in the same way (notably, OpenAI has stated that chat logs will be turned over to law enforcement if it is legally compelled to do so, making such entries non-private). Second, the intrusion must violate a reasonable expectation of privacy. Yes, this standard is ostensibly vague, but for the purposes of ChatGPT logs, it doesn’t always need to be. In essence, courts generally assess whether the expectation of privacy is one that society is prepared to recognize as reasonable. This is not a single-factor question but one that takes into account the “totality of the circumstances.” For example: Was the information inherently private? Was it disclosed to a third party? Who was the third party? Was the disclosure voluntary? The answers to these questions determine whether the expectation of privacy is heightened or diminished.
In this regard, we can connect the dots. The search queries or prompts, while they may have been made on a private device or in the sanctity of the suspect’s residence, were nonetheless cast out into the very public and expansive universe of the internet. Under the assessment above, the prompts were both voluntary and disclosed to a third party, ChatGPT (or, more precisely, OpenAI), thus diminishing the suspect’s expectation of privacy. Most important is to notice that the searches were entered into a third-party internet AI tool, which is entirely different from asking questions of an attorney, doctor, or fiduciary (for more on this, see Ismail Amin’s article “Client Beware: the Utilization of Artificial Intelligence Platforms and the Potential Waivers of Attorney-Client Privilege”). Whereas the former can be obtained by issuing a warrant or subpoena, largely due to the diminished expectation of privacy, an intrusion into private communications with the latter (doctors, attorneys, etc.) is much harder to accomplish, because such conversations carry inherent privacy and confidentiality protections. This is not to say that anything an individual enters online is definitively public, especially considering the novelty of AI and the fact that OpenAI has explicitly stated that its data would be turned over to law enforcement if legally compelled. Nonetheless, it is an important indication that AI tools such as ChatGPT can be used against you just as much as they can be used for you.