
Stop Training Military AI: Opt Out of ChatGPT Data (With Proof)

OpenAI just signed a Pentagon deal. If you've used ChatGPT, your data might've trained it. Here's how to opt out – and actually verify it worked.

6 min read · Beginner

OpenAI just cut a deal with the Pentagon. Late Friday night – February 28, 2026 – Sam Altman announced ChatGPT will power the Department of Defense’s classified network. Hours earlier, the Trump administration banned Anthropic (Claude) for refusing to let its AI be used for mass surveillance and autonomous weapons.

Used ChatGPT? Pasted work code, therapy scripts, grocery lists – doesn’t matter. Your chats probably helped train the model now heading to military systems.

OpenAI uses your conversations to improve the model unless you opt out. Here’s how to do that, how to verify it worked, and three gotchas that bypass the opt-out even after you think you’re safe.

What Just Happened (and Why It Matters)

February 27: Defense Secretary Pete Hegseth blacklisted Anthropic. Trump called them “left-wing nut jobs” trying to “strong-arm the Department of War.”

12 hours later: OpenAI stepped in.

Altman claims his deal includes the same red lines Anthropic wanted – no domestic mass surveillance, no fully autonomous weapons. The Pentagon agreed (per Altman’s statement). But the model they’re deploying? Trained on millions of user conversations. Yours might be in there.

A King’s College London study released this week: ChatGPT and other frontier models chose nuclear escalation in 95% of simulated war games. The machines treated nukes as “just another rung on the escalation ladder.” Maybe that context changes how you feel about your data contributing to military AI training.

OpenAI says business users are opted out by default. They say the opt-out is straightforward. They say they strip personal identifiers. They also said they wouldn’t work with the military – until January 2024, when they quietly deleted that restriction.

Verify everything.

The 60-Second Opt-Out (Desktop)

Open ChatGPT. You need a toggle buried three menus deep.

Click your profile icon (top-right or bottom-left). Select Settings. Click Data Controls in the left sidebar.

Find “Improve the model for everyone.” Toggle it off. Click Done.

No confetti. No confirmation email. The UI just accepts it.

Per OpenAI’s help docs (as of Feb 2026): “once you opt out, new conversations will not be used to train our models.” Note “new.”

Mobile: iOS and Android

iOS: Three dots (bottom-right) → Settings → Data Controls → toggle off “Improve the model for everyone”

Android: Two horizontal lines (top-left) → three dots next to username → Settings → Data Controls → toggle off

One opt-out covers all devices. Turn it off on your phone; your desktop gets it too.

Verification (Not Just Clicking)

Go back to Settings → Data Controls. "Improve the model for everyone" should now be off. Still on? You didn't save it.

Screenshot it if you want a timestamp. OpenAI doesn’t send receipts. The toggle state is your proof.

Alternative: submit a formal “do not train on my content” request via the OpenAI privacy portal. Slower (requires a form), but creates a paper trail. Some users prefer documented proof over a UI toggle that could flip during an update.

Three Gotchas That Bypass Your Opt-Out

Gotcha #1: The Thumbs-Up Trap

You opted out. Then you ask ChatGPT a question, love the answer, click thumbs-up.

That entire conversation just opted back in.

From OpenAI’s docs: “If you choose to provide feedback, the entire conversation associated with that feedback may be used to train our models.” Even if you disabled training.

Thumbs-up/down buttons override the opt-out. Don’t touch them unless you’re okay with that chat becoming training data.

Gotcha #2: Temporary Chat Isn’t Temporary Enough

ChatGPT offers “Temporary Chat” mode (toggle from model selector, top-left). Marketed as privacy-friendly: chats don’t appear in history, don’t create memories, aren’t used for training.

True. But “aren’t used for training” ≠ “immediately deleted.”

OpenAI keeps Temporary Chats for 30 days for “abuse monitoring.” That window can extend “for security or legal reasons” – no upper limit defined. Discussing something sensitive? It’s sitting on OpenAI’s servers for a month.

Better than permanent retention. Not airtight.

Gotcha #3: The Opt-Out Isn’t Retroactive

Opted out today. What about your 200 conversations from last year?

Still in the training dataset.

OpenAI’s privacy disclaimer: the opt-out “applies moving forward and does not apply to data that was previously disassociated from my account.” If they already stripped your username and tossed it into the training pile, opting out now won’t pull it back.

Delete all chats (Settings → Data Controls → Delete all chats) – but that only removes them from your sidebar. Whether it purges them from training sets? Unclear. OpenAI’s language here is deliberately vague.

If You Use ChatGPT at Work

Enterprise and Team accounts are opted out by default. Your company’s IT can verify this. Employee data isn’t used for training – unless someone clicks thumbs-up.

That feedback exception still applies. An employee rates a response? That conversation can be used for training as an “explicit opt-in.” Most workspace admins don’t know this.

What About Claude, Gemini, and the Others?

Claude (Anthropic) used to be the privacy-first option. As of late 2025, they shifted consumer plans (Free, Pro, Max) to opt-out – your data trains models unless you disable it in settings. Pentagon drama doesn’t change that.

Gemini: turn off "Web & App Activity" in your Google account settings. It's a control separate from Gemini itself. Confusing by design.

Every AI platform has its own opt-out. None make it the default.

Do This Next

Open ChatGPT right now. Settings → Data Controls. Toggle off “Improve the model for everyone.” Screenshot the result.

Clicked thumbs-up/down in the past? Those conversations are already in the training pool. Can’t undo that. But you can stop contributing new data.

Pasting proprietary work code, legal docs, medical info, or anything you wouldn’t want in a Pentagon-deployed model? Consider whether Temporary Chat (30-day retention) is private enough. For some use cases, it’s not.

FAQ

Does opting out delete my past conversations from ChatGPT’s training data?

No. Opt-out applies to new conversations only. OpenAI: “does not apply to data that was previously disassociated from my account.” Old chats already anonymized and in the training dataset? They stay. You can delete visible history, but whether that purges training data – OpenAI’s docs don’t say.

If I opt out, can I still use ChatGPT normally?

Yes. Functionality stays the same – chat, save history, use all features. New conversations just won’t feed the training pipeline. Your chat history remains in the sidebar. The old “disable history to opt out” approach from 2023 is gone. You can keep history while blocking training.

I work at a company that uses ChatGPT Enterprise. Am I automatically opted out?

By default, yes – business accounts (Enterprise, Team, API) don’t contribute to training. But: if you or a coworker click thumbs-up or thumbs-down on a response, that entire conversation may be used for training as an “explicit opt-in,” even under an Enterprise plan. Most IT departments don’t warn employees. Your company handles sensitive data? Disable feedback buttons or educate users not to click them. OpenAI treats feedback as consent, regardless of account type.