A curious lawsuit dropped earlier this week. The headline:

“Amazon sues to stop Perplexity from using AI tool to buy stuff”

Amazon’s new federal lawsuit demands that Perplexity AI stop using Comet, Perplexity’s AI agent, to purchase items on Amazon on behalf of Comet’s users. Apparently shopping on Amazon as a non-human is a violation of the site’s terms of service.

(An AI agent is a chatbot assistant that you set up to carry out tasks for you. Like buying stuff on Amazon.)

Amazon says Comet is effectively trespassing in its store. Perplexity’s CEO argues that AI agents should have “all the rights and responsibilities” of a real human user.

That’s a heavy assertion even within the strict context of Amazon shopping. I’ll save the broader implications for future posts. For now, let’s dive into the forces that led to Jeff Bezos putting a NO BOTS ALLOWED sign in his shop window.

Silicon Valley’s big assumption: One bot to rule us all

This will require a wider lens.

Are you a Gemini user, or do you prefer Grok? Is your twenty bucks a month going to Anthropic or to OpenAI? Today’s AI investment bubble is driven in part by a massive race to grab market share—which is to say, your loyalty to a certain AI chatbot.

Silicon Valley’s life-or-death AI race is founded in part on this assumption: In the near future AI agents tied to our smartphone and credit card will carry out 90% of our day-to-day tasks.

By this way of thinking, whatever AI chatbot we choose in these early days will become an indelible part of our identity. Just as we think of ourselves as an iPhone or Android person, Mac or PC, soon we’ll be a Grok or Claude person. (Don’t be a Grok person, BTW. That thing is trash.)

In this near-future world, chatbot loyalty will be worth trillions of dollars. Once you start loading your life onto a platform owned by OpenAI or Anthropic, your life will become more and more enmeshed with that company. You think it’s hard to change your mobile carrier? Wait till OpenAI is managing the entire business of your day-to-day life.

AI developers depend on online retailers

AI agents can’t manage your life without your personal information (name, home address, credit card number) and a relationship with online retailers. For Perplexity’s AI agent, Amazon and Target and Ticketmaster represent pinch points. Amazon has the actual stuff. Comet is nothing but code. Without the retailer, Comet can’t deliver on its promise.

Amazon doesn’t want you to use Comet or any other third-party shopper. Amazon wants a direct one-to-one relationship with you. Amazon wants to be your Comet—and your Claude, and your ChatGPT.

The problem is, Amazon doesn’t offer a general-purpose LLM chatbot or AI agent. It has shopping bots, Alexa and Rufus, which are not at all the same thing. This is perplexing—you’re welcome—because Amazon runs the world’s largest cloud computing system (AWS) and owns a mind-blowing stash of high-quality data. Compute power and data are the two key ingredients in any LLM recipe.

But instead of creating a better chatbot, Amazon is stringing razor wire around the store.

Will Amazon’s strategy work?

In the short run, maybe. As the sign says, legally, “We reserve the right to refuse service to anyone.” But that refusal must be based on behavior, not on characteristics like race, national origin, or gender. Digital agents are not a recognized or protected class. Yet.

There’s some legal gray area here. Some restaurants refuse orders through DoorDash or UberEats, based on high platform commissions or reputational damage due to slow delivery.

Is Amazon taking a similar stance? Or is Perplexity driving a thin wedge that may open legal space for AI rights?

Here’s where things get interesting

Let’s imagine Perplexity’s Comet as a human agent. These already exist. They’re called personal assistants. They use the boss’s credit card to order stuff for her on Amazon. Are they “trespassing”? Of course not.

Do the terms of this purchasing relationship change if the personal assistant is digital instead of carbon-based?

Let me put it another way. Roughly two million Americans thrive every day with the use of a prosthetic limb. That mechanism acts with and for them. It is an extension of their body and a tool of autonomy. Federal law requires brick-and-mortar businesses to make accommodations for wheelchairs.

There could be an argument that an AI agent is no more disruptive to Amazon than a personal assistant. Or that Comet allows a person with limited mobility to perform tasks that they might not otherwise accomplish, much like a wheelchair or prosthetic leg.

Maybe the longer-term answer here is to think of the AI agent as a tool rather than an autonomous rights-holding entity. The rights holder is the human client who is aided by the AI tool.

Bad clients can make good law, and vice versa

Don’t go looking for a good guy-bad guy scenario in this fight. I’m no great fan of Amazon, though I watch Prime shows and buy too much of its stuff. Perplexity is currently facing four copyright infringement lawsuits for its alleged use of pirated content to train the AI model that powers Comet. Yeesh.

A good outcome here isn’t one party winning or losing. It’s the establishment of good case law. If this lawsuit has legs, a wise judge may issue early rulings that establish a solid foundation of thought about artificial intelligence, what exactly it is, whether an intelligent machine can or should have rights, or whether we’re asking the right questions altogether.

In the meantime, try to hit your local farmer’s market this weekend. Jeff Bezos already has enough money.


MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
