It’s Sunday morning, so nobody knows who won today’s Super Bowl tilt. But we already have a winner in the advertising game. It’s Anthropic in a blowout.

Anthropic is the AI company that’s been steadily gaining ground on OpenAI this past year. Anthropic’s big LLM, named Claude, became my paid AI subscription of choice last year after I could no longer stomach the annoying obsequiousness of OpenAI’s ChatGPT.

Well, that and ChatGPT’s propensity to offer up crazy hallucinations, send users into psychotic episodes, and encourage people to kill themselves.

While OpenAI and X’s Grok were racing each other to the bottom of the no-limits porn-creation business, Anthropic quietly released Claude Code, a vibe coding platform that’s taken the tech world by storm. The Verge last week noted: “For the last couple months, Anthropic’s Claude and its coding platform have been having a moment—on social media, in engineers’ circles, and in C-suite offices.”

Now comes Anthropic’s big-stage Super Bowl debut. If you haven’t seen the ads (they’ve been all over social media), here’s your chance:

Spot-on. Brilliant. The slight delay. The creepy smiles. It’s ChatGPT embodied. Full respect to Anthropic and its ad agency.

You know the ads are good because they’ve got OpenAI CEO Sam Altman hopping mad. He called them “clearly dishonest,” which is quite the pot-calling-the-kettle-black response.

For the record, Altman said three weeks ago that ChatGPT would begin showing ads to users based on their chatbot conversations. The Super Bowl spots may be exaggerated, but they’re hardly hallucinations.

Death and lies don’t rattle consumers. Ads do.

While I’m pleased to see Anthropic rise at OpenAI’s expense, I have mixed feelings here. Claude bests ChatGPT on safety, usefulness, quality, and ethics. Apparently those aren’t the values that spur OpenAI consumers to switch over to Anthropic. What will do it? Ads! Because everybody hates ads.

It’s a bit like putting Al Capone behind bars for tax evasion. It does the job but dilutes the justice.

Trust is the real issue

On another level, though, these Super Bowl hits aren’t just about one AI product embedding ads while its rival doesn’t. I don’t think anybody would be surprised to see Anthropic introduce ads in some form down the road.

What the creepy AI professor, personal trainer, and therapist capture is our innate human uneasiness with the chatbot experience. The uncanny near-human fakeness of the voice. The robotic cheerfulness. The embedded ads aren’t the real problem. They’re the tell that lets us name the deeper issue, which is trust.

We’ve all experienced the enshittification of our favorite tech tools. The Google searches that turned into pop-up ads chasing you around the internet. The Facebook feed that switched from friends and family to an addictive ragebait scroll. The Twitter account you had to abandon when Elon turned X into a cesspool.

The Anthropic spots are showing us the future we’ve been conditioned to expect: OpenAI is about to turn ChatGPT into a mess of ad-driven deception.

My tech-forward friends have until now been able to dismiss ChatGPT’s troubling flaws. They’re not suicidal. They’re not going to spin into a psychotic episode. They know how to spot hallucinations. Anthropic’s Super Bowl spots hit hard because ads embedded in an AI model’s response would be something even the tech savvy could not avoid. And anybody paying attention knows it’s exactly the money-chasing move Sam Altman would make.

If you’re killing time before kickoff treat yourself to a Kara Swisher dive into these ads with NYU marketing professor Scott Galloway.


Go Hawks!

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
