This may be the month the AI spell finally broke.

For more than two years we’ve been told that the AI takeover was unstoppable. Inevitable. Talking with tech hypesters felt like confronting the hivemind of the Borg: Resistance is futile.

Over the past four weeks, though, the weather has changed. The public taste for AI, excited by those first encounters with ChatGPT and other magical talking bots, is beginning to sour.

Consider:

  • On Tuesday, OpenAI abruptly announced the end of Sora, its much-touted AI video-generation app. Also cancelled: OpenAI’s short-lived $1 billion content partnership with Disney. Poof. Gone.

  • Also on Tuesday, a new chatbot safety law was enacted in Washington. Another is coming soon in Oregon. Similar bills are finding favor with Republicans in Arizona, Oklahoma, Georgia, and Idaho—despite the Trump Administration’s efforts to quash any and all safeguards around AI.

  • Last week juries in two separate cases found social media companies liable for the harm they caused children by intentionally designing their products to be addictive. The effect of these lawsuits on the AI industry (which is inextricably bound up with social media companies like Meta) will be profound. Investors immediately bailed on Meta, which shed $119 billion in market value within 24 hours.

  • OpenAI’s eagerness to create warbots and spy on Americans, in partnership with Pete Hegseth, sent consumers running for the exits. The number of people uninstalling ChatGPT surged 200% after OpenAI's partnership with the Department of Defense was announced in early March.

  • Two months ago AI agents were all the rage. Clawdbot, aka OpenClaw, an open-source ‘AI employee’ that could plan and execute tasks on its own, seemed like the future for about five minutes. In the past two weeks we’ve seen mounting reports of agents going rogue. They steal data, expose passwords, delete entire email histories, and threaten developers to avoid being shut down.

  • The rising local groundswell against data centers made national headlines this week. More communities are resisting these power-sucking, water-draining AI factories. The new issue of People magazine includes a heartwarming feature on farmers refusing to sell their land to Big Tech. On Tuesday, Bernie and AOC floated the idea of a national moratorium on data center construction. This is an issue that riles up people in red states, blue states, big cities and small towns. They are using local power to fight back.

Not the end, but the start of a new reckoning

This is not the end of AI. This is the end of the starry-eyed glorious-digital-future phase of artificial intelligence. This is the beginning of the end of the hype.

It was easy to believe AI could do anything after a first encounter with ChatGPT. The release of Sora sent the entire film industry into a panic.

I’ve seen the tide beginning to turn. Two years ago most state legislators scoffed at the idea of raising guardrails around AI. “This is great stuff!” they’d say. “ChatGPT answers all my questions. What’s the worry?”

You don’t hear that so much anymore. Lawmakers have read the accounts of teens killing themselves with the encouragement of chatbots. They’ve seen the detrimental effects on their own kids. They hear the outrage about data centers from their hometown constituents.

They’ve also witnessed the massive overreach of the tech industry. Over the past year the companies behind the AI rollout tossed their ethics to the wind, partnering with Trump in a bid for unlimited power and profit. Their repeated attempts to pass a law exempting them from all laws that might impose the slightest safety standards on this powerful new technology showed them for what they are: Unbound by any moral sense whatsoever.

What comes next: Make it do something useful

The AI industry isn’t about to collapse. But the go-go phase of frenzied investment and unlimited spending may have peaked. We saw this in earlier cycles of tech exuberance: At a certain point a startup’s massive burn rate becomes unsustainable. By some estimates it cost OpenAI $15 million a day to create Sora’s funny five-second video clips. Your $20 monthly subscription ain’t covering that cost.

Both OpenAI and Anthropic are hoping to launch IPOs later this year. That will bring a lot more hard-eyed scrutiny to their products, their costs, and the real (or phantom) value AI is returning to the businesses adopting their products.

Some tech CEOs can see a reckoning on the horizon. In late January, Microsoft CEO Satya Nadella warned that AI will start losing public support unless it starts to “do something useful that changes the outcomes of people and communities.”

“We will quickly lose even the social permission to take something like energy, which is a scarce resource, and use it to generate these tokens [that power AI],” he said.

In other words: We’d better find beneficial uses for AI before the public turns on us.

Which was not the worst warning the CEO of Microsoft could send to his industry.

Of course, Nadella immediately undermined his own message. Instead of pushing tech companies to actually make AI useful, he proposed pushing more hype and fear.

"The demand side of this is a little bit like, every firm has to start by using it," he said. "People need to say, 'Oh, I pick up this AI skill, and now I'm a better provider of some product or service in the real economy.’"

In other words, Nadella urged the tech industry to encourage job seekers to learn AI skills in order to become more employable. Even as the industry continues to tout AI as a way to shed labor costs and, you know, the people who are employed to carry out the labor.

Update: The ‘Shy Girl’ conversation continues…

If you enjoyed last week’s post about Shy Girl and book publishing in the age of AI, check out Jonathan Evison’s podcast, A Fresh Face in Hell. Jonathan had me on last week to talk more about books, AI, authorship, and the value of authentic human creation.

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
