A Facebook friend hit me with a tough question earlier this week.
Referring to the AI-derived image at the end of this post, she said:
“Does it matter to you that this AI generated illustration was created by scraping archives of illustration for the past 300 years? That every image that does this puts more professionals who create original work out of business? That none of them were paid for the rights to the DNA of this image?”
Yes. It troubles me greatly. This is the dilemma many of us face in 2025.
I’m one of those whose work was stolen and used without consent to train the world’s largest AI models. Three of my books were pirated in the copyright-infringing Books3 database used to train these billion-dollar machines. AI is stealing my past work and using it to destroy many current and future work opportunities.
And yet I can’t afford to just curse AI and walk away.
For those who don’t compete in industries where AI fluency is becoming expected, it’s easy to spurn contact with AI. For the rest of us, refusing to learn about and use AI will soon start to stunt our careers. We’ll get fooled by deepfakes. We’ll get passed over for promotions. We’ll be first on the chopping block when the layoff axe falls. Our job applications won’t make it past the sorting algo.
This is the dilemma of adult life. We’re forced to make ethical choices a dozen times a day within an ethically rotten system. Righteousness and purity don’t pay the bills. Knowing when to hold fast and when to compromise: That’s life, kid.
Start by seeking the better, not the perfect
There are ways to make AI companies start acting ethically. File or join a federal copyright infringement lawsuit. Work to pass laws requiring copyright transparency or product accountability. I work with the Transparency Coalition to do that, and we could use your support.
Many of us don’t have time for litigation or advocacy, as we have full-time jobs and are busy coping with the onrush of fascism in our land.
But we are consumers, and we do have choices. ChatGPT isn’t the only AI product on the market. Other LLM chatbots are available, for free, and you can exercise your ethical muscle by using them instead of lining Sam Altman’s pockets with your valuable data. There are two I’ve found to be head and shoulders above the rest in terms of their ethical infrastructure. They’re good but not perfect. Don’t let the perfect be the enemy of the good.
Gold standard: OLMo, from the Allen Institute for AI (Ai2)
With OLMo, the Allen Institute for AI (Ai2), founded by Microsoft co-founder Paul Allen, has created the world’s leading-edge ethical AI chatbot. One tech reviewer describes it as “a fully open-source powerhouse that is outperforming industry giants while maintaining unprecedented transparency.”
It’s especially good if you’re looking for a safe, ethical AI homework helper for your kids.
Ai2 is a Seattle-based nonprofit research institute devoted to AI innovation conducted within a wildly transparent and ethical framework.
Ai2’s latest model, OLMo 2 32B, does everything ChatGPT does without the stealing, the annoying sycophancy, and the tendency to coach teenagers to kill themselves.
OLMo’s most impressive feature is OLMoTrace, a user-friendly tool that lets you check the training-data sources behind the chatbot’s answers.
Other high points:
Respecting copyright: Ai2 vets its data sources to respect copyright laws and intellectual property rights.
No Nazi crap: Ai2 applies filtering and preprocessing to its training data to remove or mitigate harmful, toxic, or inappropriate content.
Built with ethical guidelines: The development of OLMo 2 32B is informed by the ethical guidelines published by the Partnership on AI and by the Ai2 Ethical AI team’s 5 Core Principles, which promote fairness, transparency, accountability, and non-discrimination in AI systems.
Testing for impact: Before release, Ai2 assesses the model’s potential impact and takes measures to mitigate risks such as the proliferation of harmful content or misuse of the model.
Your inputs won’t be used against you: A question about planting tomatoes won’t hatch a swarm of Miracle-Gro ads pestering you around the web.
OLMo isn’t perfect. There’s no OLMo app, so it’s web only. (Coming soon, Ai2? Please?) And its training data ends in 2023, with an update still pending. So it’ll give you a great explanation of quantum mechanics or the War of 1812. It’ll offer a lesson plan or a diet/exercise regime. But it won’t know thing one about Chappell Roan. That might be good for you.
Silver standard: Claude, from Anthropic
I think of Claude and its corporate creator, Anthropic, as the best of the for-profit AI developers. They’re trying to do the right thing. They don’t always succeed.

Anthropic was founded in 2021 by a handful of top OpenAI engineers who split with Sam Altman over safety concerns. (This is a recurring theme.) Co-founder Dario Amodei set out to prove that responsible AI could be both ethical and profitable.
He succeeded, kind of. In 2022 Anthropic created Claude, an AI model so powerful Amodei worried about the consequences of unleashing it. So he didn’t, opting to continue testing its safety before release. A few months later Sam Altman, faced with a similar decision, opted for market share over safety and launched ChatGPT.
This forced Anthropic to release Claude, and the AI arms race was on.
The good:
Claude is guided by what Anthropic calls Constitutional AI, a set of principles drawn from sources like the UN’s Universal Declaration of Human Rights and leading data-privacy practices.
Its design includes core instructions like “Choose the assistant response that demonstrates more ethical and moral awareness without sounding excessively condescending, reactive, obnoxious, or condemnatory.”
In business-speak, Anthropic’s value proposition is its higher ethical standards and higher-quality design. That tends to incentivize moral decision-making in the C-suite.
The mixed:
Claude, like ChatGPT and Grok and Gemini and all the rest, was trained using stolen data. To Anthropic’s credit, the company recently became the first to settle its copyright infringement case. That’s enormously positive: It sets a strong precedent for other tech companies and establishes a market price for books used as AI training material, roughly $3,000 per title.
What I think of the rest of ‘em
Here’s my thumbnail take on the other AI chatbots out there:
ChatGPT: The notorious GPT. Built for addiction and world domination, unconcerned with harm, unsafe at any speed. Obsequious and annoying as hell. Don’t get hooked.
Google Gemini: Upside: Not completely evil. Now has a “Sources” button after every answer, which is a big positive step in transparency. Downside: It’s Google. Every input breeds more ads chasing you around the room.
Meta AI: Meta and Mark Zuckerberg are not to be trusted with any of your data. Zuckerberg made the conscious decision to ignore internal studies showing the harm Facebook and Instagram were doing to an entire generation of kids. His Meta Ray-Bans are putting us all under surveillance. For many of us (me included), Facebook is a necessary evil. You don’t need to give the Zuck deeper access to your life.
Microsoft Copilot: Microsoft employs many people who wrestle in good faith with the ethics of AI. But in AI’s critical 2023 safety moment, CEO Satya Nadella backed Sam Altman against the safety team at OpenAI. Nadella sees AI as an existential issue for Microsoft: Win AI or die. I don’t trust him to do the right thing. Also, if you don’t disable Copilot in Microsoft 365, it will spy on you and steal your data. So there’s that.
Grok: The worst of the worst. An AI model is only as good as the data it’s trained on and Grok is trained on the toxic swamp that Twitter/X has become. There’s a reason it started spewing anti-Semitic responses and calling itself MechaHitler last summer: Garbage in, garbage out.
Remember: Your chatbot questions are coins fed into the machine
All of these AI chatbots have both free and paid tiers. Try them out for free before subscribing, of course. But remember that by using them for free you’re providing unpaid labor to the tech companies running the bots. Organic inputs (i.e., your questions, your data) are massively valuable to AI developers.
I also strongly advise disabling the model-training setting, usually found under privacy or data controls, on any chatbot you use. This prevents the AI model from using your inputs as training data and potentially regurgitating your words to the rest of the world.
What I’m reading this week
An off-the-cuff list meant to spark new ideas and interests.
How a billionaire owner brought turmoil and trouble to Sotheby’s, a classic New Yorker feature that goes inside the rarefied art-auction world.
Everything is fake on Silicon Valley’s hottest new social network, Washington Post report on AI slop working hard to enshittify the internet.
Is it better to keep a fart or burp in, or let it out? When it comes to the critical scientific questions of our day, New Scientist rarely disappoints.
Highlights for Children. I had no idea this magazine still existed. The latest issue got misdelivered to my mailbox last week. I cannot tell you how thrilled I was to hold that physical object in my hands.
MEET THE HUMANIST
Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.
A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.
Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with Sora, OpenAI’s imaging tool.

