Here’s the punchline first: you can use DeepSeek’s models without sending a single byte to China. The catch is that the popular path – opening the app or chat.deepseek.com – is the one path that does send everything to Chinese servers. Pick the wrong door and your prompts, device fingerprint, and keystroke rhythm all end up in Hangzhou. Pick the right door and they don’t.
This guide walks backwards from that result. We start with the two routes that actually keep your data out of China, then work back to why the default route doesn’t, what the privacy policy really says (and what it didn’t say until February 2025), and the edge cases that even the careful workarounds don’t fix.
The takeaway in one paragraph
“DeepSeek sends data to China” describes one specific configuration: the official app, the official website, the official API. Those all route to servers the company controls in Hangzhou. Run the same model weights somewhere else – your own machine, a US data center – and data never touches China. Same model. Different pipe. Completely different outcome.
Background: why this is even a question
DeepSeek’s R1 model went viral in late January 2025 and almost immediately the privacy questions started. The official policy is unusually direct – no euphemisms about “global infrastructure.” The data controller is Hangzhou DeepSeek Artificial Intelligence Co., Ltd., registered in China, and per the official privacy policy, your data may be processed and stored on servers located in the People’s Republic of China when you access the service.
The legal nuance most articles miss: this isn’t a classic “data transfer” out of the EU. According to an IAPP analysis by Théodore Christakis, DeepSeek collects data directly from EU users – no intermediate EU-based controller, no processor in the middle. The Chinese entity is the sole controller from the moment of collection. That distinction matters because GDPR’s Chapter V transfer rules don’t cleanly apply to direct collection, which is partly why enforcement has been messy.
Method A vs Method B: the only choice that matters
Forget the country-ban list. The practical question is which deployment you’re using. The three paths below collapse into two categories – official endpoints that route through the PRC, and everything else – and the two behave nothing alike.
| Path | Where prompts go | Subject to Chinese law | Effort |
|---|---|---|---|
| Official app / chat.deepseek.com / api.deepseek.com | Servers in PRC | Yes | Zero |
| Local self-host (Ollama, llama.cpp, vLLM) | Your machine | No | Medium |
| Third-party host (Perplexity, Azure AI Foundry, Together, Fireworks) | US/EU data centers | No | Zero |
The third-party route is the one most tutorials skip. As of early 2025, Perplexity CEO Aravind Srinivas confirmed the company hosts the model on data center servers located in the U.S. and Europe. Microsoft’s Azure AI Foundry offers DeepSeek as one of its open-source model options without sharing data with the Chinese company behind the model. Same R1, no Chinese hosting.
The walkthrough: getting R1 without the China round-trip
If you want the cleanest path, here’s the order of operations I’d actually recommend, from easiest to most private.
- Pick a Western-hosted endpoint. Perplexity has R1 in its model selector. Azure AI Foundry, Together AI, and Fireworks all serve DeepSeek weights from US/EU data centers. You sign up like any other SaaS – no Chinese ToS, no Hangzhou jurisdiction.
- Or self-host with Ollama. Run `ollama pull deepseek-r1` and you’re done. The weights are released under an MIT open-source license, so this is fully legal and supported – any organization can download and deploy the model locally on its own infrastructure.
- Confirm the routing. If you’re paranoid (reasonably), open your network monitor while you send a test prompt. You should see traffic to your provider’s domain – not to `deepseek.com` or any `*.cn` endpoint.
- Sandbox the official app if you must use it. Burner email, no Google/Apple SSO, VPN if you can. Treat anything you type as public.
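The self-host and routing-check steps above can be sketched as one shell session. This is a smoke test, not proof: it assumes a Linux box with `ss` available, and the model tag (`deepseek-r1:8b`) and grep pattern are illustrative choices, not exhaustive ones.

```shell
# One-time pull of the MIT-licensed weights (served from Ollama's registry,
# not from China). Guarded so the script degrades if Ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull deepseek-r1:8b
  ollama run deepseek-r1:8b "test prompt" >/dev/null 2>&1 &
  sleep 2   # give the client a moment to open its connections
fi

# Smoke-test the routing: list established TCP peers and flag anything that
# resolves to DeepSeek infrastructure or a .cn host. Reverse DNS won't catch
# everything, so treat this as a first pass, not proof.
if ! command -v ss >/dev/null 2>&1; then
  verdict="ss not available; use your OS network monitor instead"
elif ss -t state established 2>/dev/null | grep -Eqi 'deepseek|\.cn'; then
  verdict="unexpected outbound traffic - investigate"
else
  verdict="clean: no deepseek/.cn peers among established connections"
fi
echo "$verdict"
```

For a serious audit, a packet capture (Wireshark, tcpdump) beats reverse DNS – but for a quick sanity check, established-connection hostnames are enough to catch the obvious case.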
Why the SSO warning? Because signing in through Google or Apple hands DeepSeek a richer profile than a throwaway email would. The official policy permits it; the question is whether you want to permit it.
Pro tip: If you’re evaluating R1 for work, run it through Azure AI Foundry or a local Ollama instance from day one. Pasting a single line of customer data into the official app means that data has now left your jurisdiction – and you can’t pull it back. Set the right default before anyone gets curious.
What the official app actually collects
Even setting aside the storage location, the collection scope is wider than most assume. Per the official privacy policy and independent app teardowns: account info, every prompt and uploaded file, IP address, device identifiers, and – the one that gets the most attention – keystroke patterns. The rhythm of your typing is biometric. It’s identifying even without a name attached.
Researchers found something the policy doesn’t mention. Code linking DeepSeek to China Mobile was first discovered by Feroot Security, a Canadian cybersecurity firm, and independently confirmed by a second set of computer experts brought in by the Associated Press. China Mobile was denied authority to operate in the United States by the FCC in 2019, citing “substantial” national security concerns about links between the company and the Chinese state. The researchers couldn’t confirm data was actively being shipped to China Mobile during North American tests, but the code path was there.
Edge cases the workarounds don’t solve
Self-hosting fixes the data flow. It does not fix everything.
- Censorship is in the weights. A U.S. CAISI evaluation in October 2025 found that local deployment of DeepSeek models eliminates the cross-border data flow concern but does not eliminate censorship patterns baked into the training. Ask a locally-hosted R1 about Tiananmen and you’ll often get the same dodge as the cloud version. The model learned what it learned.
- Jailbreak weakness travels with the model. The same October 2025 CAISI evaluation documented security vulnerabilities that persist regardless of where the model runs. If you’re embedding R1 in an agent, the guardrails are weaker than mainstream Western models – that’s a deployment risk, not a hosting risk.
- The infrastructure has actually leaked. Not theoretically. Wiz Research documented in January 2025 a publicly accessible ClickHouse database containing over a million lines of chat histories and API secrets. If you used the cloud service in early 2025, your prompts were in a database anyone on the internet could read.
- The privacy policy rewrote itself mid-deployment. The original version in place at the European launch, dated December 5, 2024, was completely silent on data storage location. The February 14, 2025 update introduced an EEA-specific section that acknowledged the reality. Anything you signed up to before mid-February 2025 was governed by a policy that didn’t tell you where your data lived.
One more wrinkle. DeepSeek only fulfilled the GDPR Article 27 requirement to designate an EU representative in late May 2025 – some four months after the Italian Garante blocked the app in late January 2025. If you care about EU enforcement teeth, that’s a soft signal about the company’s posture toward Western regulators.
So what should you actually do?
The honest answer depends on what you’re doing with R1. Quick reasoning experiment with a public-domain prompt? The official app is fine – the cost of a leak is zero. Anything with a customer name, internal code, a contract, or a medical question? Use Azure, Perplexity, or local Ollama. The model is the same; the legal envelope is not.
FAQ
Does the DeepSeek API send data to China too?
Yes. The api.deepseek.com endpoints route to the same Chinese infrastructure as the consumer app. If you need an OpenAI-compatible endpoint without that, point your client at a Western host like Together, Fireworks, or Azure AI Foundry – they all expose R1 over a compatible API.
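To make “OpenAI-compatible” concrete, here’s the same chat-completions request body aimed at a Western host. The endpoint URL and model slug below are Together AI’s as I understand them – verify against your provider’s model catalog – and the call only fires if you’ve exported an API key.

```shell
# The request body is identical to what api.deepseek.com expects; only the
# host (and therefore the jurisdiction) changes.
body='{"model": "deepseek-ai/DeepSeek-R1",
       "messages": [{"role": "user", "content": "Say hello."}]}'

# Endpoint and env-var name assume Together AI -- adjust for Fireworks,
# Azure AI Foundry, etc.
if [ -n "${TOGETHER_API_KEY:-}" ]; then
  curl -s https://api.together.xyz/v1/chat/completions \
    -H "Authorization: Bearer $TOGETHER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$body"
else
  echo "Set TOGETHER_API_KEY to send the request."
fi
```

Because the payload shape is the same everywhere, switching providers is a one-line change to the URL – which is exactly why the hosting question, not the model question, is the one that matters.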
If I run DeepSeek locally with Ollama, is anything leaving my computer?
The inference itself is fully local – your prompt never leaves the machine. The one thing to double-check is the initial model download, which pulls weights from Ollama’s CDN (not from China). After the pull completes, you can disconnect from the internet entirely and the model still runs. That’s the actual privacy benefit of MIT-licensed open weights, and it’s why “DeepSeek” the model and “DeepSeek” the company are two separable things.
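A quick way to confirm the “localhost only” claim for yourself, assuming Ollama is serving on its default port (11434): point curl at the local API and note that no external hostname appears anywhere in the request.

```shell
# Ollama's local HTTP API on its default port -- an assumption; adjust if
# you've changed OLLAMA_HOST.
endpoint="http://localhost:11434/api/generate"

# Only attempt the call if the local server is actually up; either way, the
# request targets nothing but the loopback interface.
if curl -s --max-time 2 "http://localhost:11434/" >/dev/null 2>&1; then
  curl -s "$endpoint" \
    -d '{"model": "deepseek-r1:8b", "prompt": "hello", "stream": false}'
else
  echo "Ollama not running; inference endpoint would be $endpoint"
fi
```

Pull the weights once, then pull the network cable – if the call above still answers, you have your proof.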
Is this really different from ChatGPT collecting my data?
Different jurisdiction, different legal recourse. US-based providers can be subpoenaed; they can also push back in court. A Chinese-hosted service operates under laws that allow data demands without comparable due-process challenge. Whether that matters to you is a judgment call – but it’s not the same shape of risk.
Next step: open a terminal, run `ollama pull deepseek-r1:8b`, and send your first prompt offline. If the answer is good enough for your use case, you’ve solved the problem in about ten minutes.