New technology tends to send us grasping for analogies. Help me understand: What is this new thing like?
Pie-eyed stock investors will tell you artificial intelligence is the next Google. Doomsday pessimists will tell you AI is as deadly as HAL from 2001: A Space Odyssey—or worse.
Dip into the world of AI and you won’t go far without encountering the phrase pattern recognition. What has come before predicts what will (or should) come next. Pattern recognition powers the autocomplete function in our text and email apps. At their simplest, AI models are massive pattern-recognition machines.
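To make that idea concrete, here is a toy sketch of next-word prediction by simple counting. This is my own illustration, not how any real autocomplete or AI model is actually built; the tiny corpus, the bigram counts, and the suggest function are all invented for the example. But the underlying intuition is the same: what came before predicts what comes next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a small corpus,
# then suggest the most frequent follower. Real autocomplete systems and AI
# models are vastly more sophisticated, but the core pattern-recognition
# idea is the same.

corpus = "the dog barked and the dog ran and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # -> 'dog' (seen twice after 'the', versus 'cat' once)
```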
This is also, in a sense, how our own brains operate.
To the bookshelf!
My own AI odyssey recently inspired me to pull a copy of Douglas Hofstadter and Emmanuel Sander’s 2013 book Surfaces and Essences off the shelf. If you aren’t familiar with the book, I highly recommend it as a guide to our current moment.
It’s subtitled Analogy as the fuel and fire of thinking, but it might better have been subtitled Epistemology can be fun!
Epistemology is the field of philosophy that asks how we know what we know.

Douglas Hofstadter (left): Brilliant mind, could use a new barber.
Hofstadter and Sander believe humans “know” things because our minds categorize all the things and ideas we encounter in the world. “A category is a mental structure that is created over time,” they write. A category contains information in an organized form.
Categories evolve in our mind as we age—minute by minute, year by year. And we’re categorizing constantly.
“Nonstop categorization is every bit as indispensable to our survival in the world as is the nonstop beating of our hearts,” they write. “Without the ceaseless pulsating heartbeat of our ‘categorization engine,’ we would understand nothing around us, could not reason in any form whatsoever, could not communicate with anyone else, and would have no basis on which to take action.”
This is like that.
How do we categorize the world? Through the use of analogies. “Analogy is the machinery” that makes categorization happen. Per the authors:
“It is by searching for strong, insight-provoking analogues in our memory that we try to grasp the essences of the unfamiliar situations that we face all the time—the endless stream of curve balls that life throws at us.”
The truth of this is both personal and public, intimate and global.
Is that approaching dog friendly or will it bite me? (Consult past dog experiences, compare with approaching dog’s behavior, pattern-match and decide.)
Will a decision to engage in war lead to catastrophe or righteous victory? (I lived through this during the post-9/11 Afghanistan and Iraq debate, with dueling voices invoking Vietnam and World War II analogies.)
This isn’t the only epistemological framework out there, of course. Douglas Hofstadter is a celebrated cognitive and computer scientist who’s been thinking about artificial intelligence for roughly half a century. Emmanuel Sander is a cognitive and developmental psychologist who specializes in analogy-making and categorization. So they view the world through their own specialized lens.
The reason I find Surfaces and Essences helpful is that it provides a foundation for my own ongoing construction of understanding around AI. The competing voices around AI right now are engaged in a battle fought largely with dueling analogies. OpenAI CEO Sam Altman envisions AI leading humanity into a glorious future. Eliezer Yudkowsky and Nate Soares warn against building AGI, or Artificial General Intelligence—an AI system that would have truly human-like cognitive abilities—because, as the title of their current bestseller predicts, If Anyone Builds It, Everyone Dies.
Is AI the contemporary equivalent of the polio vaccine and the moon landing? Or is it a nuclear weapon with malevolent sentience? You pay your money, you make your choice.
It’s an essential cognitive function. But it’s not perfect.
The thing about analogies and pattern prediction is that they often lead us to the correct conclusion—but sometimes they’re spectacularly wrong.
Years ago the tech world recognized a pattern in Dean Kamen’s overhyped mystery transportation device. Kamen, an inventor famous for his many life-saving medical devices, had the backing of Steve Jobs and many other really smart Silicon Valley types. This was going to change the world! When his Segway was finally revealed, it was…fine. But not exactly the flying car the world expected.
To be clear: I don’t think AI is the Segway. (See? Analogy.) But it’s sobering that today, in 2025, we know the Iraq War was indeed a catastrophic quagmire. And disheartening to remember how those who warned us with the Vietnam analogy were roundly mocked and reviled in 2002.
Recognize the analogies as they fly overhead.
As all of us navigate this new age of AI, it’s helpful to be reminded that the terms of the debate are set by the language we use. When AI hypesters like Sam Altman talk about AI machines learning from their massive datasets, recognize that the use of the word learning is a purposely chosen analogy. Is the machine really learning, or is it merely copying and carrying out a pattern-recognition program?
The coming months and years are going to see a storm of debate over machine intelligence, AI autonomy, concepts like “the right to learn” and serious conundrums over harm, justice, liability, ownership, and the very definition of life itself. The analogies will fly like bullets.
Or like missiles?
The mind, it grasps.
Further reading
For all you Doug Hofstadter fans out there: Yes, I know I should read Gödel, Escher, Bach: An Eternal Golden Braid, Hofstadter’s Pulitzer Prize-winning 1979 rumination on cognition, computers, and human intelligence.
I’m tracking down a copy as you read this. Can you believe that book was published nearly a half-century ago?
Seriously. It was the Carter Administration. Did men walk around in top hats and muttonchops? I can’t remember.

Enjoy what you’re reading?
Become an AI Humanist supporter.
MEET THE HUMANIST
Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.
A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.
Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
