I once lost an argument at work to a colleague who justified a bad decision by challenging me, mid-meeting. “My data tells me my decision is right,” he declared. “What does your data say?”

His data was specious, his decision a disaster, and he was flushed out in the next round of layoffs. But in the corporate 2010s there was no higher virtue than to be data-driven.

Anyone who’s suffered through Statistics 101 can tell you all the ways data can be flawed, incorrect, biased, and downright deadly. But back then twirling the caveman’s club of data was a cheap way to appear smart, forward-thinking, and bloodlessly efficient.

A decade later, I see the same thing happening with artificial intelligence.

AI and automation bias

AI models can do amazing things. The facade of magical omniscience that ChatGPT, Gemini, and Claude present to the world is dazzling.

It’s also dangerous. At this early stage of AI’s rollout it’s easy to assume that because ChatGPT knows more than you, ChatGPT knows better than you.

The people who study these things call this automation bias: The propensity to defer to automated systems, despite warnings or contradictory information from other sources. “In other words,” the authors of a 2023 paper on the subject wrote, “human actors are found to uncritically abdicate their decision making to automation.”

The most insidious aspect of automation bias comes when humans assume the machine possesses an infallible objectivity far superior to our own. After all, numbers don’t lie.

Except when they do.

What could go wrong?

In 1999, the British Post Office adopted a new software system. Horizon, developed by the Japanese company Fujitsu, was meant to streamline tasks like inventory and accounting. To borrow a phrase from the great Eddie Izzard: The software was crap. It tended to falsely report shortfalls in the till, among other bugs. The Post Office branch managers knew Horizon was crap, and they complained.

Instead of listening, upper management leaned into their automation bias and assumed the branch managers were peevish, thievish, or thick. The software couldn’t be wrong—it was built by Japanese engineers! Who are known to be cracking smart chaps!

Post Office executives took a hard line: The problem isn’t the machine, the problem is you. Branch managers tried covering the false shortfalls with their own money. Some went bankrupt doing so. Others were criminally charged.

Between 1999 and 2015, more than 900 branch managers were wrongly prosecuted because of incorrect data produced by the Horizon system. Many went to prison for fraud and theft. Others were financially ruined. Marriages broke under the stress. As many as 13 of those wrongly accused took their own lives.

The nightmare began to end only in 2019, after a group of branch managers won a landmark lawsuit establishing that the fault lay with Fujitsu's broken machine, not with the people who had been blamed for its errors.

Push back against automation bias

With their natural-ish language and confident tone, chatbots like Gemini and ChatGPT are purposely designed to instill trust in their accuracy even when they’re dead wrong. And they’re dead wrong a lot.

That’s why state lawmakers have begun to pass laws to prohibit AI systems from posing as lawyers, doctors, nurses, or therapists. These are critically necessary measures, but they also represent the low-hanging fruit. The deeper problem is the spreading assumption of AI’s infallibility—or AI’s acceptable fallibility—in industries where machine errors aren’t immediately recognized as life-threatening.

Just one example: In a Seattle restaurant a few weeks ago I overheard a conversation between two software engineers in their late twenties. One was lamenting the younger generation of coders in his office. “They’re vibe coding with AI and the product lead loves it,” he told his friend. “All he sees is twice the work done in half the time. He doesn’t realize the code is trash. Or if he does he doesn’t care. It’s going to crash but when it does it’ll be somebody else’s problem. In the meantime he’s golden because his team looks like they’re killing it.”

Well, whatever. It’s just software, right? Probably some boring accounting system meant for a bunch of hayseed postal clerks.

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
