What is a person?

Simple question. Complicated answer. The boundary of “personhood” has defined societies, separated freedom from slavery, delineated child from adult, and sparked conflict for as long as humans have walked the earth.

The emergence of AI adds a new thread to this tangled knot.

State legislatures convened across America this month. I track all the AI-related bills for the Transparency Coalition (we publish a weekly analysis here), and already I see patterns emerging.

One of the most intriguing: In Ohio, Missouri, and Washington, lawmakers are considering bills that declare AI systems “non-sentient” and prohibit them from obtaining legal personhood.

The first of these bills was introduced last fall by Rep. Thaddeus Claggett, who represents Licking County, Ohio. His aim, he said, is to prevent people from claiming “it wasn’t me, the AI did it” as a legal defense.

“No AI system shall be granted the status of person, nor be considered to possess consciousness, self-awareness, or similar traits of living beings,” declares the Ohio bill, which also prohibits marriage between a human and an AI system.

“It makes no difference the ability of a donkey to speak,” Claggett told the Ohio Capital Journal. “That does not make the donkey a human, okay?”

Claggett came in for some late-night mocking—“No wonder he’s against technology, his name is 200 years old,” cracked Stephen Colbert—but I’ll defend him. I don’t know that his bill is worth passing, but the gentleman from Licking County raises a lot of deep and worthwhile questions.

Let’s unpack a few of them.

In the 2014 film ‘Ex Machina,’ a tech billionaire creates an android named Ava, who fights for her freedom and life.

Personhood and the Turing test

Let’s start with the techno-philosophy angle.

You may have heard of the Turing test. It's sometimes called the imitation game, which was Turing's own name for it and later the title of the 2014 film about the British mathematician and his team of WWII code breakers.

To answer the question "Can machines think?", Turing in 1950 proposed a practical test: If a computer acts, reacts, and interacts like a sentient being, then it's sentient.

The late-2022 release of ChatGPT sparked debate about whether the Turing test had finally been passed. The answer wasn’t yes or no. The answer was: The Turing test is insufficient. Clearly, ChatGPT interacted like a sentient being. Just as clearly, ChatGPT was not sentient.

An AI chatbot doesn't think like a human. In 2021 Emily M. Bender, director of the University of Washington's Computational Linguistics Lab, introduced the term "stochastic parrot" to describe what's actually happening. In a now-famous paper, she and her colleagues argued that a large language model (LLM) like ChatGPT stitches together "sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine."

In other words, ChatGPT is a massive pattern-matching system operating "without any reference to meaning." It parrots plausible-sounding language; it doesn't understand it.
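To see what that probabilistic stitching looks like in practice, here's a minimal sketch of the idea. It's my own toy illustration, not anything from Bender's paper: a bigram model that records which word follows which in its training text, then generates new text purely from those observed frequencies. Real LLMs use vast neural networks rather than lookup tables, but the underlying move Bender describes is the same: form without meaning.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": a bigram model that records which word
# follows which in its training text, then samples new text from
# those observed frequencies. It has no concept of meaning at all.

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def parrot(follows, start, length=12):
    """Stitch together a sequence purely from observed co-occurrence."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # dead end: this word was never followed by another
        word = random.choice(candidates)  # probabilistic, not meaningful
        output.append(word)
    return " ".join(output)

corpus = (
    "the machine speaks and the machine listens "
    "the parrot speaks and the parrot repeats what the machine said"
)
model = train(corpus)
print(parrot(model, "the"))
# Possible output: "the parrot speaks and the machine said"
```

Run it a few times and you'll get fluent-looking fragments the model has no way to understand. Scale that principle up by billions of parameters and you have something that can pass for a conversation partner.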

AGI and the Singularity (are not the same thing)

Even as those of us who use ChatGPT are coming to recognize its stochastic shortcomings, the AI hypesters assure us that AGI (artificial general intelligence) is just around the corner.

AGI refers to the moment an AI system equals or surpasses the cognitive abilities of the human brain. Many AI researchers believe we're 15 to 30 years away from reaching it. The leaders of Google, OpenAI, xAI, Anthropic, and Nvidia, who need investors to remain irrationally exuberant, tell us it's coming in the next one to five years.

AGI is not to be confused with the Singularity, a term popularized by the inventor, author, and futurist Ray Kurzweil. In Kurzweil's telling, the Singularity arrives when human brains merge with the global computing cloud. "We're going to be a combination of our natural intelligence and our cybernetic intelligence," he predicted in a 2024 Guardian interview. "Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go non-invasively into our brains through the capillaries."

As all this is going on, our social media feeds send us video clips of janky humanoid robots running half-marathons and break dancing, and we can't help thinking of the android replicants in 'Blade Runner' and the AI-powered humanoid in 'Ex Machina.'

It doesn’t take a genius to see where this is heading.

Old school: In 1982’s ‘Blade Runner,’ Rutger Hauer portrayed a corporate-manufactured android running to extend his life.

Blaming the machine lets the humans off the hook

As AI systems begin to operate more autonomously, they put distance between an action and the human who set it in motion. That distance opens a legal accountability gap.

Think of it like a dog bite. If my dog bites you while I’m taking it out for a walk, I bear some responsibility for the dog’s action. But it’s not the same as if I personally attacked you with my teeth. That would be criminal assault.

AI systems today operate mostly on-leash. A year from now they may not. AI agents are being designed specifically to operate off-leash, doing things for their owners without the owner’s direct supervision.

Consider self-driving taxis like Waymo. If a self-driving cab injures a pedestrian, who is at fault? Will Waymo executives just shrug and claim the magic box acted on its own?

Some states are already closing the gap without raising questions of sentience or personhood. California last year passed AB 316, which prohibits a defendant who used AI from asserting that the AI autonomously caused harm to the plaintiff or victim.

The android-cyborg split

Laws like AB 316 are a pragmatic solution to the immediate problem of the “black box” defense. But we’re still left to ponder the difficult personhood boundary hinted at by Ray Kurzweil.

The AI-powered machines these state lawmakers are calling non-sentient would fall into the category of android. That is, a fully artificial human-like robot. A cyborg, on the other hand, is a biological human who receives a technological upgrade. Anybody walking around today with an artificial heart or a titanium hip might be considered a cyborg.

The future Kurzweil envisions—the Singularity—doesn’t involve AI sentience or android personhood. Rather, it falls into the cyborg category: A biological human implanting AI technology into their body. Which leads us into an even thornier thicket. At what point does a human cease to be biological and cross over into machine?

And so it falls to Thaddeus Claggett of Licking County, Ohio

I love the fact that these tech conference conversations about AGI, the Singularity, and AI sentience crash into hard reality in the person of Rep. Thaddeus Claggett, an industrial contractor from Licking County, Ohio.

Rep. Thaddeus Claggett (R-Licking County), the Ohio state legislator who introduced the first ‘non-sentient’ bill, is an industrial contractor and president of the Licking County Library Board.

Android, cyborg, it makes no difference to Thad. He just wants to make sure some charlatan out there isn’t teaching a parrot to talk and calling it human enough to marry, drive, and vote.

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, he is the author of The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
