I usually aim to publish a new AI Humanist post every Saturday morning, but I’m sending this one out on Friday because I find it so alarming.

Yesterday The Bookseller, the book industry’s trade magazine, revealed that book editors are now routinely feeding entire manuscripts into ChatGPT and other AI bots to create instant plot summaries and market comparisons.

Literary agent Gordon Wise told The Bookseller: “Editors uploading confidential manuscripts into ChatGPT or other open AI platforms in order to help them ‘read’ books quickly is not responsible behaviour, given the security risks involved in handing over such property to a third party. But, disturbingly, conversations in the course of [last month’s London Book Fair] have indicated that this seems to have become a widely adopted practice.”  

This. Is. Insane.

If you read nothing else in this post, please read this: Whatever you prompt in a chatbot, you publish via that chatbot. If you feed an unpublished novel into ChatGPT, Gemini, or Claude, you have effectively published that novel.

Chatbots are the friendly front-facing portal to AI models. Whatever data is fed to the model is captured and stored by that model. And whatever data is in the model can and will be extracted. Including that manuscript.

It is our duty as adults to understand this technology.

Disdain for AI is a badge of honor in the book publishing world. And fair enough: AI has stolen the world’s copyrighted material and made billions from it. It’s actively putting writers out of work.

But disdain is not the same thing as cultivated ignorance. It is possible to blend a healthy tech skepticism with a basic understanding of how these new machines work. You don’t have to own an automobile, but you do have to understand how cars function in the world. Otherwise you’ll get flattened.

If book editors are feeding manuscripts into ChatGPT, those editors are fundamentally ignorant of the machine. And they are breaking copyright law.

Yes, AI remembers everything. Yes, AI will regurgitate everything.

The Bookseller report came out a few days after the publication of an obscure but important research paper. A team of scientists and scholars from Stony Brook, Carnegie Mellon, and Columbia U. attempted to prompt the most popular AI models into regurgitating the books on which they were trained. They succeeded.

By fine-tuning their prompts they were able to get ChatGPT, Gemini, and DeepSeek to cough up 85% to 90% of copyrighted books. Gilead. The Remains of the Day. Slouching Towards Bethlehem. Life of Pi. In some cases the AI models offered full-verbatim sections of 450 words (about a page and a half in a book).

That research wasn’t a fluke. The researchers noted that others first showed how AI models memorize and regurgitate their training data in 2021, and have proven it again and again in papers published every year since.

AI developers like OpenAI, Google, and Meta “have repeatedly assured courts and regulators that their models do not store copies of training data,” the researchers noted. Most chatbots are fitted with filters aimed at screening out this sort of regurgitation.

Clearly, those filters fail.

Wait, isn’t there a toggle to prevent memorization?

Yes, many tech companies offer a settings toggle that assures users the machine won’t ingest and memorize the material it’s offered. And yes, you should use that toggle to disable a chatbot’s memory feature.

That feature exists alongside this knowledge: Tech companies lie. For years, social media platforms assured us they had child-safe designs built into Instagram, YouTube, Facebook, etc. They did not. 23andMe promised to keep your DNA safe and private. They did not. Tech companies swore they would never sell your data to the highest bidder. Or any bidder, really. That was not true.

Coming soon: Plagiarizing your own work.

If book editors continue to feed manuscripts into AI models, we can expect to see a few things in the near future.

  • Callouts and cancellations. An editor caught feeding a manuscript into AI will be publicly shamed, forced to resign, and held up as a cautionary tale. It will be the industry’s version of the Shy Girl cycle.

  • Copyright infringement lawsuits. Feeding a manuscript to AI is akin to uploading it to the dark web or posting entire chapters on Facebook. Editors feeding ChatGPT are ruining the monetary value of an author’s work product.

  • Self-plagiarism flags. AI and plagiarism screening apps will flag authors for their own work. If you feed a manuscript into an AI model and then use that same model to screen a revised version of the manuscript for plagiarism, it will sound the alarm. Mark my words, this will happen.

Human-created work is extremely valuable. Treat it as such.

I’ll leave you with these two thoughts.

One of the best pieces of advice I ever received came from a very wise editor who told me: Never put in writing anything you wouldn’t mind seeing on the front page of The New York Times.

I’ve seen variations on this theme. Anything you wouldn’t want read aloud in open court. Anything you wouldn’t want your grandmother to read.

It applies to all platforms: Text, email, social media, AI chatbots. Hacks happen. Lawsuits get filed, discovery documents get leaked. Eventually it all comes out.

As we all learn to survive in this AI-influenced world, we suffer a never-ending rain of doom when it comes to human-created work. We’re told machines do it better, faster, easier. What they don’t tell you is this. As AI increasingly spews out digital slop, organic human-created material becomes more valuable. AI can’t function without a continuous stream of new data. And human-created data is the richest loam. Tech companies crave it. Stop feeding it to them for free.

Recent conversations

If you enjoyed last month’s post about Shy Girl and book publishing in the age of AI, check out Jonathan Evison’s podcast, A Fresh Face in Hell. Jonathan had me on last week to talk more about books, AI, authorship, and the value of authentic human creation.

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.

Keep Reading