In the early hours of 28 August, a quiet car park on a college campus in Missouri became the scene of a violent vandalism rampage. In the space of 45 minutes, 17 cars were left with shattered windows, broken mirrors, ripped-off wipers and dented chassis, causing tens of thousands of dollars' worth of damage.
After a month-long investigation, police had gathered evidence that included shoe prints, witness statements and security camera footage. But it was an alleged confession to ChatGPT that eventually led to charges against 19-year-old college student Ryan Schaefer.
In conversations with the AI chatbot on his phone shortly after the incident, Schaefer described the carnage and asked, “how f**ked am I bro?... What if I smashed the shit outta multiple cars?”
It appears to be the first ever case of someone incriminating themselves through the technology, with police citing the “troubling dialogue” in a report detailing the charges against Schaefer.
Less than a week later, ChatGPT was again mentioned in an affidavit, this time in connection with a far more high-profile crime. Jonathan Rinderknecht, 29, was arrested for allegedly starting the Palisades Fire, which destroyed thousands of homes and businesses and killed 12 people in California this January, after he asked the artificial intelligence app to generate images of a burning city.
The two cases are unlikely to be the last in which AI implicates people in crimes, with OpenAI boss Sam Altman saying that there are no legal protections for users’ conversations with the chatbot. According to Altman, that highlights not only privacy concerns for the nascent technology, but also the intimate information people are willing to share with it.
“People talk about the most personal shit in their lives to ChatGPT,” he said on a podcast earlier this year. “People use it, young people especially, as a therapist, a life coach, having these relationship problems… And right now, if you talk to a therapist, a lawyer or a doctor about these problems, there’s like legal privilege for it.”
The versatility of artificial intelligence models like ChatGPT means that people can use them for everything from editing private family photos to deciphering complex documents like bank loan offers or rental contracts – all of which contain highly personal information. A recent study by OpenAI revealed that ChatGPT users are increasingly turning to it for medical advice, shopping, and even role-playing.
Other AI apps are explicitly billing themselves as virtual therapists or romantic partners, with few of the guardrails used by more established firms, while illicit services on the dark web are allowing people to treat AI not only as a confidant but as an accomplice.
The amount of data being shared is astonishing – a rich target both for law enforcement and for criminals who may seek to exploit it. When Perplexity launched an AI-powered web browser earlier this year, security researchers discovered that hackers could hijack it to gain access to a user’s data, which could then be used to blackmail them.
The companies controlling this technology are also looking to take advantage of this new trove of deeply intimate data. From December, Meta will begin using people’s interactions with its AI tools to serve targeted ads across Facebook, Instagram and Threads.
Voice chats and text exchanges with AI will be scanned to learn more about a user’s personal preferences and which products they might be interested in buying. And there is no way to opt out.
“For example, if you chat with Meta AI about hiking, we may learn that you’re interested in hiking,” Meta said in a blog post announcing the update. “As a result, you might start seeing recommendations for hiking groups, posts from friends about trails, or ads for hiking boots.”
This may sound relatively benign, but case studies of targeted ads served through search engines and social media show just how destructive they can be. People searching terms like “need money help” have been served ads for predatory loans, online casinos have targeted problem gamblers with free spins, and elderly users have been encouraged to spend their retirement savings on overpriced gold coins.
Meta chief executive Mark Zuckerberg is well aware of how much private data will be swept up in the new AI ad policy. In April, he said that users will be able to let Meta AI “know a whole lot about you, and the people you care about, across our apps”. He is also the same person who once described Facebook users as “dumb fucks” for trusting him with their data.
“Like it or not, Meta isn’t really about connecting friends all over the world. Its business model is almost entirely based on selling targeted advertising space across its platforms,” Pieter Arntz, from the cyber security firm Malwarebytes, wrote shortly after Meta’s announcement.
“The industry faces big ethical and privacy challenges. Brands and AI providers must balance personalisation with transparency and user control, especially as AI tools collect and analyze sensitive behavioral data.”
As AI’s utility deepens the trade-off between personal privacy and convenience, our relationship with technology is once again under scrutiny. In a similar way to how the Cambridge Analytica scandal forced people to reckon with how they were using social media sites like Facebook, this new data harvesting trend, as well as cases like Schaefer’s and Rinderknecht’s, could see privacy pushed back to the top of the tech agenda.
Less than three years after the launch of ChatGPT, more than a billion people now use standalone AI apps. These users are often unwitting subjects of exploitation by predatory tech firms, advertisers and even criminal investigators.
There is an old adage that if you’re not paying for a service, you are not the customer but the product. In this new era of AI, it may need to be rewritten, with the word product replaced by prey.