Ghost workers are having a moment, and it’s about time.

Until recently, I was unaware of the term “ghost labor” and its role in the development of artificial intelligence. Ben Suriano, who teaches business ethics at Seattle University, brought it to my attention last year. He mentioned something about the role low-wage contract workers in African and Asian countries played in the splashy rollout of AI.

The what now?

It’s kind of a modern version of the Mechanical Turk, he explained. AI systems are presented as magical thinking machines, but in many cases hidden humans are monitoring, adjusting, answering, and controlling the machine, often from a continent away.

(Mechanical Turk: A supposedly autonomous chess-playing machine invented in 1770. The Turk played a masterful game, but its moves were actually made by a human chess master hidden in its lower cabinet, who tracked the board through magnets and operated the Turk’s arm. Napoleon and Ben Franklin were among those whose kings were downed by the famous “machine.”)

AI isn’t a complete fraud, of course. Those billion-dollar models are incredibly powerful and very good at certain tasks. But they’re being sold to corporate America and our military as magical machines that dispense with the need for humans and their pesky wages. And that’s simply not true.

The contract workers monitoring Meta’s AI Ray-Bans

That hidden workforce was revealed earlier this month when reporters for the Swedish newspaper Svenska Dagbladet discovered that video and images recorded by users’ Meta AI Ray-Bans and Oakleys are being sent to a subcontractor in Nairobi, Kenya, for data annotation.

Workers for Sama, the tech subcontractor Time magazine once dubbed “Facebook’s African sweatshop,” monitor footage of people using the toilet, undressing, having sex, or reading sensitive documents, people who had no idea they were being recorded and surveilled. Automated systems designed to blur faces often failed, the contractors said, giving the workers a front-row seat to somebody’s most intimate moments.

This ghost workforce is not limited to Ray-Ban video. In buildings on Nairobi’s Mombasa Road, thousands of local workers teach AI systems to recognize and interpret the world.

“They are called data annotators,” wrote the Swedish journalists, “and they are the manual labourers of the AI revolution. On the screens they draw boxes around flower pots and traffic signs, follow contours, register pixels and name objects: cars, lamps, people. Every image must be described, labelled and quality assured.”

Watching all the time, everywhere

Two weeks before the Svenska Dagbladet investigation, Meta announced plans to incorporate facial recognition technology into the next version of its AI-powered Ray-Bans.

Per The New York Times: “The feature, internally called ‘Name Tag,’ would let wearers of smart glasses identify people and get information about them via Meta’s artificial intelligence assistant.”

So Americans were already on edge about that, on top of the new-normal anxiety of living in a Palantir-driven, Flock-captured, ICE-abducting surveillance state.

Add to that the news that Nairobi sweatshop workers are now watching us poop, fuck, and answer our email. It was all too much.

Get yer perv glasses outta here

Almost overnight the internet chatzone dubbed Meta’s Ray-Bans “pervert glasses,” and the spot-on phrase stuck. Perv glasses they are.

This may seem a minor point of resistance, but it can have a major effect. Ten years ago, Google’s infamous “Google Glass” eyewear failed in part because individual humans called out early adopters as “glassholes.” Bars and restaurants banned the device, and social stigma killed the product.

Underpaid, exploited human labor powers AI. We just don’t see it.

We’ve long known that a certain amount of human labor and trauma underlies the seamless digital experience. Back when social media companies actually employed humans to moderate harmful content, the psychological toll on those workers was well publicized, but most consumers dismissed it as someone else’s problem.

Remember the stories four years ago about Facebook contractors moderating content, watching horrific videos hour after hour, and coming away traumatized? Guess what. Those are the same Kenyan workers, paid $2.20 an hour by Sama, who now watch Meta Ray-Ban video intake and log it for the AI machine.

Other examples abound. Executives at Waymo recently admitted that contract workers in the Philippines “provide guidance” to its taxis, which are sold and marketed as fully autonomous. In 2024, Amazon was embarrassed when the magic behind its Amazon Fresh “Just Walk Out” stores was revealed to be a back office of 1,000 ghost workers in India who monitored, via surveillance video, every item a shopper picked up.

A recent report on AI ghost workers published by the Communications Workers of America union found: “Research examining global AI supply chains has revealed how data workers in the global South, including Kenya, India, Venezuela, the Philippines, and elsewhere, perform the human labor behind purportedly automated systems, often for low wages under traumatizing conditions.”

What to do about it?

Start local. I mean really local. When you see a person wearing Meta AI Ray-Bans or Meta AI Oakleys, try to say something. Ask them to remove the perv glasses. Point out that their intake of your personal image is intrusive and a violation of your personal autonomy. Remind them that your image and data, not theirs, are being fed to Meta via sweatshop workers in Nairobi.

Make them uncomfortable.

I know it ain’t easy. Conflict aversion is my middle name. But the more we each, individually, object to our images being fed to Meta, the more the glasses become a problem for the wearer.

If each of us does this, together we can make perv glasses uncool. And that, more than any organized boycott or protest, will kill the product.


MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, he is the author of The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with Sora, OpenAI’s imaging tool.
