At The AI Humanist, we keep you updated on stories worth reading without cluttering up your life. We sort the pile so you don’t have to. Here’s a rundown of the interesting chatter happening now.
1. ‘Shadow Library’ is the hottest club in town
Okay, no, it’s not really a club. But if it were, it would have everything… pirated bestsellers, obscure textbooks, and 37 gigabytes behind the bar.

If you’re looking for a night of total infringement, have we got the place for you…
There are more than 30 active copyright infringement lawsuits moving through U.S. courts at the moment, all filed against tech companies for using copyrighted material without consent to train AI models.
To the authors, artists, and songwriters whose work has been poached, the violation feels apparent on its face. In a court of law, though, these are difficult issues. Nothing here is a slam dunk. Early rulings have left the question of machine “learning” and fair use largely unsettled.
What has emerged as a clear violation, however, is the knowing use of illegally pirated data by tech companies. The judge in the recently settled Anthropic case set a strong precedent on that score. Now plaintiffs in other cases are hopping on what’s become known as the “shadow library” strategy.
The folks at ChatGPTIsEatingTheWorld, who obsess about these copyright cases so we don’t have to, explained: “The Shadow Library Strategy is to raise a separate theory of infringement—apart from the training of AI models—based on the AI company’s initial acquisition of copies of works from controversial shadow libraries.”
A ‘shadow library’ is a massive trove of pirated books, like the notorious Books3 database. The idea is that the act of a tech company knowingly downloading a database of pirated works constitutes copyright infringement in itself.
It’s a still-developing theory that’s based only on the single win in the Anthropic case. But plaintiffs in other cases are gettin’ after it. Attorneys leading lawsuits against Apple, Meta, Microsoft, Nvidia, OpenAI, Salesforce, Bloomberg, and Databricks have all filed ‘shadow library’ infringement claims to proceed alongside their original training-data infringement claims.
Will it work? Nobody knows! Not even Stefon.
2. Netflix’s guidance on AI: Helpful for the rest of us
As a creator, I hate the word ‘creator,’ but the planet spins on with rude disregard for my thoughts on such matters. So. As a creator, I and other authors, artists, musicians, filmmakers, et al., are caught in an AI bind: feeling salty about the theft of our creative work and earning potential, while also testing new AI tools and figuring out how to use them ethically and appropriately.
To that end, Netflix has released a set of guidelines for its production partners that I find adds some meat to the meal.
At the heart of their guidance to filmmakers is a simple demand for disclosure:
“To support global productions and stay aligned with best practices, we expect all production partners to share any intended use of GenAI with their Netflix contact, especially as new tools continue to emerge with different capabilities and risks.”
In other words: Keep us in the loop. No surprises.
Five guiding principles are invoked:
The outputs must not replicate or substantially recreate identifiable characteristics of unowned or copyrighted material, or infringe any copyright-protected works.
The generative tools used must not store, reuse, or train on production data inputs or outputs.
Where possible, generative tools should be used in an enterprise-secured environment to safeguard inputs. (In other words, don’t paste your script into a ChatGPT prompt.)
Generated material should be temporary and not part of the final product.
GenAI should not be used to replace or generate new talent performances or union-covered work without consent.
These seem like reasonable guidelines, and a good starting point for those of us searching for input on our own use of AI tools. I’ll continue looking for further guidance and keep this conversation going.
3. Cities start turning off Flock surveillance cameras
Privacy advocates have been warning us for years about the dangers of geolocation data leaks. This week the rubber hit the road.

No, not that kind of flock.
Geolocation data is the collection of location-specific pings each of us emits as we go about our daily lives. Your smartphone and its apps are tracking you constantly. Six years ago the New York Times put together an amazing project that showed how easy it is to track the movements of millions of Americans. Location data companies store and sell this information. As the Times explained: “To anyone who has access to this data, your life is an open book.”
The coming of AI has made that data all the more valuable—and dangerous.
Seattle Times reporter Catalina Gaitán reported yesterday that a number of towns in the Seattle region have disabled their Flock Safety cameras in the past week due to concerns about outside agencies—specifically U.S. Immigration and Customs Enforcement agents—tapping into the data and using it to track and arrest local people.
Flock cameras are automatic license plate readers (ALPRs), like the devices used at toll booths. Towns originally installed them to help find stolen cars and crime suspects. They look like this:

A Flock Safety camera, from the company’s website.
Three weeks ago a report by the University of Washington’s Center for Human Rights revealed that federal agencies like ICE and the Border Patrol had burrowed into the data of 18 Washington police departments without the local police’s consent or knowledge. In ordinary times we would call this criminal hacking.
Adding to the risk: Washington has a strong public records law. Last week a judge in Skagit County (about an hour north of Seattle) ruled that Flock pictures are public records and must be released as required by the state’s Public Records Act.
So far the cities of Redmond, Renton, Mukilteo, Auburn, and Lakewood have disabled or changed their Flock settings to secure their data from the feds. But they can’t stop ordinary citizens from requesting the data as part of the public record.
Flock company officials have previously said their cameras are used by more than 5,000 municipalities in 45 states.
4. OpenAI hit with 7 more lawsuits alleging manipulation and assisted suicide
In a follow-up to the cautionary tale of Adam Raine and his ChatGPT-encouraged decision to end his own life, seven more lawsuits were filed against OpenAI last week alleging wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims.
Both OpenAI and the company’s CEO, Sam Altman, were named as defendants.
The suits claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that the product was dangerously sycophantic and psychologically manipulative. The claims in the new lawsuits may be bolstered by the recently released 52-page deposition of OpenAI co-founder Ilya Sutskever, who lost his board seat in the Nov. 2023 Battle For The Soul Of OpenAI. The core issue: Safety. Specifically, the lack of safety testing and protocols that may have led to the reckless release of GPT-4o.
I know good people in tech who gloss over the chatbot-suicide issue: There are a lot of suicidal teens, and many of them use ChatGPT. Correlation does not imply causation.
Fair enough. But once you actually read the lawsuits and the evidence they quote, I suspect you’ll see a disturbing problem here. The technology of AI isn’t causing suicides. Poor design, a reckless disregard for safety, corporate greed, and a rushed product release—all of these factors, encouraged by OpenAI CEO Sam Altman, add up to a pretty damning case against the company. Read more:
5. In which the Humanist uses a meme to cap the Hot List, offering the reader a moment of cheap amusement

Enjoy what you’re reading?
Become an AI Humanist supporter.
MEET THE HUMANIST
Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.
A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.
Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.

