Emily Bender knows she rubs some people the wrong way. And she’s okay with that.

“We’re living through this moment of marketing where we’re told everyone is all in on artificial intelligence,” she said earlier this week. Emily Bender is not all in. And she’s not afraid to say it.

When she’s forced to use the term artificial intelligence, she puts air quotes around the two words. She calls AI chatbots “text extrusion machines.”

Over the past few years she’s become one of the world’s foremost critics of AI and AI hype, at a time when the world is in desperate need of such a person. Bender is a distinguished professor of computational linguistics at the University of Washington, and most recently co-wrote, with sociologist Alex Hanna, the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.

Earlier this week she gave the UW’s annual Solomon Katz Distinguished Lecture in the Humanities, and she did not disappoint. Speaking to a packed house in UW’s Kane Hall, she laced into the dehumanizing aspects of AI, defending the primacy and worth of the full human experience.

What does it mean to be fully human?

This is a question that’s not going away. I tackled it recently here at the AI Humanist (Stopping AI personhood before it starts), and last week Anthropic stirred the pot with hysteria over the purported sentience of Claude. (It isn’t sentient.)

Emily Bender set a helpful baseline. To be human, she said, is to be:

  • entitled to all rights recognized as human rights,

  • equally in possession of an internal life and point of view, and

  • welcomed and known as one’s full self.

Bender sees AI acting as a dehumanizing force in a number of ways, but the one that stuck with me was what she described as the computational metaphor.

This is where neuroscience borrows from computer science due to a kind of language laziness among brain researchers: “The brain is a computer.” That’s then reinforced by technologists eager to impart consciousness to their machines: “The computer is a brain.”

Rational pattern-matching is not the brain’s only function

I found it helpful to be reminded that all of this debate around AI rising to match and surpass the human brain is being framed by computer scientists and tech billionaires whose worldview is limited to their slim slice of sentience. The computational metaphor “affords the human mind less complexity than is owed, and the computer more wisdom than is due.” (Bender is quoting Alexis Baria and Keith Cross, from their 2021 work The Brain is a Computer is a Brain.)

It’s like when you see the dentist. As you step into the office you enter a world where oral care is paramount, supreme, the highest of all callings. They cannot believe you don’t floss twice daily and sometimes miss a brush. You want to remind them that your teeth exist in a vast and varied world of duty, upkeep, maintenance and care. You’re trying to sleep, you’re dodging Covid, you have a job, you have to eat, you’re hydrating the best you can. Gum care is not number one.

So it is with the tech titans who trap the brain’s essence within the only frame that matters to them: its ability to match patterns of words and images. To extrude words.

This assumes a hierarchy of human value in which a computer-limited notion of “intelligence” stands above all other functions of the brain (and aspects of being human) such as emotion, sensual experience, joy, pain, creativity, longing, and a thousand other aspects of human sentience.

Constructive resistance

Against this false framing, Bender offered five constructive ways to resist our current “moment of marketing.”

  1. Practice skeptical consumerism: Don’t stand docile while companies try to push new AI products or functions on you. Demand to know why they’re necessary. A few of them are useful. Most aren’t.

  2. Call out the environmental damage, data theft, and labor exploitation that function as AI’s invisible foundation.

  3. Resist the hierarchy of expertise. Computer science is only one kind of expertise. It is not the only field that matters. It just happens to be making a few people a crap-ton of money right now. In an earlier age it was railroads and steel.

  4. Don’t assume that the truly beneficial work of AI in one field will translate to all fields. AI may be working wonders in genetics labs or the pharma industry. That doesn’t mean it’s going to speed up innovation in every other industry or aspect of life.

  5. Ask why. This may be the hardest one of all, because it requires an individual to stand against the corporate tribe. When your department head encourages everyone to adopt AI, ask why. “Because it’s the future” is not an answer. Four years ago Mark Zuckerberg was so convinced that virtual reality and “the metaverse” were the future that he changed the name of Facebook to Meta. He was wrong. Nobody adopted his dorky VR headsets, and the bet cost the company tens of billions of dollars.

It is also possible that Emily Bender is not entirely correct.

I am deeply grateful for Emily Bender’s work and her public skepticism. She is truly, as she mentioned on Tuesday night, opening space for others to voice their own resistance. She does it from a position of relative safety, as a tenured university professor. Most of us don’t have that luxury. When the boss says jump we’re expected to shut up and execute, collaboratively and cross-functionally.

That’s where I’m at right now. I wish I could ignore AI and refuse its incursions. But I have to make a living in 2026. As a writer I don’t want to close myself off to this moment of massive technological and social disruption. I refuse to stop learning—and this is one of the most compelling new things to understand, use, shape, and resist right now.

We are all fully human and, for better or worse, the challenge of grappling with all of these questions is an essential part of the human experience.

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
