I’m a bit obsessed with words and language.
No kidding, right? I’m a writer. I’m an editor. But it runs deeper than that. I’m fascinated by how words frame contentious issues and shape our perception of reality. One of the common threads running through my books The Last Flight of the Scarlet Macaw and Weed the People was the way in which language acts as a weapon of power.
One of the most compelling aspects of AI is the role language plays in its promotion and adoption. I’m going to hit this note occasionally here at The AI Humanist because it’s a foundational part of AI literacy—and it’s evolving quickly.
Are they really models? Do they really learn?
Listen to the language around AI. We’re told that AI machines “learn” just like people. Billion-dollar AI systems are known as “models,” a word loaded with positive associations (cars, appliances, Tyra Banks).
None of this verbiage came about by accident. The words used to describe AI were coined by people with a vested interest in humanizing their work in order to ease its adoption by the non-technical public.
The invention of “artificial intelligence”
The bedrock of the field—the term artificial intelligence—is itself a product of linguistic marketing. John McCarthy, the Stanford University computer scientist, was one of the discipline’s founders. In 1973 he explained how the term came about in the mid-1950s:

“I invented the term artificial intelligence. I invented it because we had to do something when we were trying to get money for a summer study in 1956.”
Some context: McCarthy and mathematician Claude Shannon were hoping to bring together a handful of researchers working on early computers. In the grant application, they needed to describe their nascent field. McCarthy floated the aspirational term artificial intelligence, he recalled, but “Shannon thought [that] was too flashy a term.” So they tried automata studies. Which flopped.
Other researchers tried complex information processing. “Which is certainly a very neutral term,” McCarthy recalled, “but the trouble was that it didn’t identify their field, because everyone would say, well, my information is complex. I don’t see what’s special about you.”
John McCarthy died in 2011 and I’m sorry he’s not still around. He sounds like a good guy with a warm sense of humor. In any event, his flashy term stuck and his summer study—the groundbreaking Dartmouth Project—got funded.
Today’s AI terms result from ‘conceptual borrowing’
Some of the most insightful work on the linguistics of AI is being done today by Luciano Floridi, the philosopher who directs Yale’s Digital Ethics Center. Last year he and his Yale colleague, the cognitive neuroscientist Anna C. Nobre, published a paper that I keep thinking about.
Floridi and Nobre showed how AI researchers appropriate language from psychology and biology to explain and legitimize the concepts of their own field. This is known as “conceptual borrowing.”
“When new sciences emerge,” Floridi and Nobre wrote, “they lack the technical vocabulary to describe and communicate their unique phenomena.” One option is to invent a wholly new word, but your audience is already struggling to understand this new concept. A foreign-looking word isn’t going to help.
A safer bet is to appropriate language from established and respected disciplines. If neuroscience were a desirable stately neighborhood, a realtor selling a new tract house in nearby AI might list it as neuroscience-adjacent.
Neurons, synapses, and hallucinations
Some examples of AI terms borrowed from psychology and biology:
Adaptation - How AI systems modify themselves over time to accomplish tasks better. (Evolutionary biology)
Hallucination - An output error, fiction offered as fact by an AI model. (Psychology)
Neuron - The basic processing unit of an artificial neural network; a bare-bones sketch of one follows this list. (Neuroscience)
Neuroplasticity - The ability of artificial neural networks to change their structures and connections. (Neuroscience)
Synapse - The connections between artificial neurons that strengthen or weaken based on signals passed across them. (Neuroscience)
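As promised, here is a bare-bones sketch, in Python and purely for illustration, of what an artificial “neuron” actually amounts to: a weighted sum and a squashing function, with the “synapses” reduced to a handful of adjustable numbers. No real lab writes its systems this way, but the metaphor doesn’t get any thicker at scale.

```python
# Illustration only: a toy artificial "neuron" in plain Python.
# The borrowed word names a small piece of arithmetic, not a biological cell.
import math

def artificial_neuron(inputs, weights, bias):
    # The "synapses" are just the numbers in `weights`.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid "activation" squashes the result into a value between 0 and 1.
    return 1 / (1 + math.exp(-weighted_sum))

# "Learning" means nudging the weights and bias until the outputs improve.
print(artificial_neuron([0.5, 0.2], [0.8, -0.3], bias=0.1))  # roughly 0.61
```

Wire millions of these together and you have a “neural network”; adjust the weights automatically and you have “machine learning.”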
Borrowing a word borrows its legitimacy
This conceptual borrowing is a kind of stolen valor. Using a well-established term from an adjacent field can give the impression that a new concept within an emerging field is similarly well-established. When, in fact, it’s not been established at all.
As Floridi and Nobre note, “once you speak of ‘machine learning,’ it becomes natural to wonder whether machines can learn—not just metaphorically—but in the biological and psychological sense. One assumes or seeks similarities between machine and human learning, running the risk of under-scrutiny.”
The claim that machines learn may be the most problematic one AI developers make. Do machines really learn like humans, or do they merely calculate probabilities based on patterns in massive datasets? It’s the latter. But the word learn carries within it an inherent nobility. What decent adult could speak against learning? In fact, there’s a raging ethical debate in AI circles about whether machines have “the right to learn.”
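To make “calculating probabilities based on patterns” concrete, here is a deliberately tiny Python sketch, my own illustration and nothing like a production system: a next-word predictor that “learns” by counting which word follows which in a scrap of text, then reports those counts as probabilities.

```python
# Illustration only: a toy next-word predictor that "learns" by counting.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept".split()

# "Training": tally which word follows which in the text.
follows = defaultdict(Counter)
for current_word, next_word in zip(text, text[1:]):
    follows[current_word][next_word] += 1

# "Prediction": turn the tallies into probabilities for the next word.
counts = follows["the"]
total = sum(counts.values())
for word, count in counts.most_common():
    print(f"P({word} | the) = {count / total:.2f}")
```

Today’s systems replace the counting with billions of adjustable weights, but the end product is still a probability for the next word. Whether that deserves the word learn is exactly what’s in dispute.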
The contentious nature of learn is playing out in real time right now in the dozens of copyright lawsuits filed against AI developers. Writers, artists, and publishers accuse tech companies of illegally copying their protected property for improper commercial gain. Tech companies argue that their machines are merely learning from the content, just as a person learns from a book they pick up at the library.
And then there’s the term hallucination. The decision to use this word—as opposed to error, malfunction, or glitch—deserves a separate post of its own, and I’m planning to pull that together in the next few weeks.
Until then, hold your future-shock despair at bay. Go outside and touch grass, wander in a park, walk down the sidewalk and enjoy the color of the autumn leaves. Our machines may be changing, but we’re still human and the real world’s pleasures are out there waiting for you.
See you next week!
MEET THE HUMANIST
Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.
A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.
Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
