A lot of scary warnings began popping up in my feeds late last month. Apparently Google had decided to scrape everybody’s Gmail messages to train its AI model, Gemini. Change your settings now!

I saw a lot of posts like this:

Actually, don’t retweet that.

Spoiler: Google is not reading your Gmail messages in order to train its Gemini model. Not yet at least. The initial panic has been largely walked back.

Here’s what’s going on.

Google has been using various forms of AI as part of its Gmail software for years. When I say “using AI,” I mean AI technology is embedded in Gmail and has been reading the content of your emails for years. AI is how Gmail separates spam from your important mail. AI is what powers those annoyingly useful Smart Reply response options.

So yes, Google is reading your Gmail. That’s not the same as scraping your Gmail and using its content to train the Gemini AI model. Does that difference matter? Yes.

Gmail’s Smart Reply as AI in its purest form

Gmail’s Smart Reply feature, introduced in 2017, is one of the easiest ways to understand how AI works. Smart Reply makes predictions about what responses typically follow the question or suggestion contained in the message.

Gmail’s Smart Reply feature makes predictions based on the text it reads in your Gmail message.

That’s essentially how the big AI models, known as large language models (LLMs), work: they predict the most likely pattern of response words based on the billions of words-in-sequence the model has been trained on.
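To make that prediction idea concrete, here’s a toy sketch (my own illustration, not Google’s actual system): count which word tends to follow which in a handful of example sentences, then suggest the most frequent continuation. Smart Reply and LLMs scale this same “most likely next words” idea up to billions of examples.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across a list of sentences."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the word most often seen after `word`, or None."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "see you tomorrow",
    "see you at lunch",
    "see you tomorrow morning",
]
model = train_bigrams(corpus)
print(predict_next(model, "you"))  # "tomorrow" follows "you" most often
```

The real systems predict from context windows of thousands of words rather than a single one, but the underlying move is the same: pattern-matching against what they were trained on.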

Currently, Google keeps your AI-ingested information within the Google Workspace universe. That’s the suite of apps that includes Gmail, Meet, Chat, and Google Drive. The company does not export your information to use it as training data for Gemini.

However.

There’s nothing preventing Google from using your private Gmail messages as training data in the future.

Other tech companies already do this

The reason the Gmail panic looked so real is that a number of Google’s competitors have already scraped user content to train their AI models. X’s Grok AI model is trained on millions of Twitter posts. Meta’s AI model, known as Llama, is trained on millions of Facebook posts. They didn’t ask your permission; they just scooped and scraped and carried on. Why? Because they could. Because nobody stopped them.

In Europe, Facebook has been forced by EU privacy laws to offer consumers the option to opt out of Meta’s AI training. It’s as simple as clicking a toggle on the settings page. Facebook doesn’t offer the same option to American consumers because they don’t have to. Our government allows tech companies to steal our data because tech companies spend roughly $60 million every year to kill data privacy bills. Getting all that data for $60 million a year is—literally—a steal.

How to keep your data from being used to train AI

A while back I wrote this explainer for the Transparency Coalition: How to stop your images and data from being used to train AI.

It offers step-by-step instructions to opt out of AI training for LinkedIn, Microsoft Word and Excel, Adobe Creative Cloud, Pinterest, and Gemini itself. (Google will use your Gemini prompts as data to fine-tune the AI system.) I urge you to change the settings you can change. Keep your data your data.

I can’t quit you, baby

Look, there’s no getting around it: Gmail can be one of the most difficult tech apps to quit. That’s because it actually works. Google did what Yahoo, AOL, Hotmail, and others could never do: Create a perfect balance of cheap, functional, and respectable. Your Gmail account is connected to all your contacts. Switch to a more private option like Proton Mail and you’ve got to alert everyone and you’ll lose people along the way.

We Gmail users are captured, in the same way telecom companies once trapped consumers by forcing them to change their phone number if they switched carriers. That ended when Congress enacted the Telecommunications Act of 1996, which mandated phone number portability.

In the meantime, do this

You can go into your Gmail settings and turn off various smart features. Here’s the catch: I tried this, and you lose the functions that are actually useful, like spam filtering. Want to know what email inboxes were like in 2006? You’re about to find out.

One of the best pieces of advice I ever heard was this: Always compose your emails as if they are going to be read aloud in a court of law or printed on the front page of The New York Times. No exceptions.

There is no absolute privacy in email. Even the most secure system can be hacked. Lawsuits have this thing called discovery that’s much more common than hacking—and potentially more damaging.

When in doubt, send this message: “Hey, let’s hop on the phone and talk about it.”

Join us! Become an AI Humanist supporter today.

MEET THE HUMANIST

Bruce Barcott, founding editor of The AI Humanist, is a writer known for his award-winning work on environmental issues and drug policy for The New York Times Magazine, National Geographic, Outside, Rolling Stone, and other publications.

A former Guggenheim Fellow in nonfiction, his books include The Measure of a Mountain, The Last Flight of the Scarlet Macaw, and Weed the People.

Bruce currently serves as Editorial Lead for the Transparency Coalition, a nonprofit group that advocates for safe and sensible AI policy. Opinions expressed in The AI Humanist are those of the author alone and do not reflect the position of the Transparency Coalition.

Portrait created with the use of Sora, OpenAI’s imaging tool.
