You’re told Ollama keeps your data local. Local where? And does it actually work, or is there a catch?
Packet capture on three different Ollama setups. Zero network traffic after model download. Prompts, outputs, chat history – all stayed on the machine, never touched a server. But three scenarios break it, and most tutorials skip them.
Where Your Data Actually Lives
Models downloaded via Ollama live at ~/.ollama/models on macOS and Linux, and C:\Users\<username>\.ollama\models on Windows. Two critical subdirectories there: blobs/ and manifests/.
Blobs are Binary Large Objects – raw data chunks holding your model’s parameters, named by their SHA-256 digest. This is content-addressed storage: model components are immutable blobs identified by digest, and each model is defined by a manifest that references one or more of them.
| Operating System | Default Model Path | History/Logs Path |
|---|---|---|
| macOS | ~/.ollama/models | ~/.ollama/history |
| Linux (standard) | ~/.ollama/models | ~/.ollama/history |
| Linux (snap) | /var/snap/ollama/common/models | /var/snap/ollama/common/ |
| Windows | %HOMEPATH%\.ollama\models | %LOCALAPPDATA%\Ollama |
Files are big. 7B parameter models: 4-8GB disk space. 13B models: 8-16GB. 70B models: 40-80GB. Know exactly where they live – running out of space mid-download corrupts the blob storage.
Think of it like this: your model is a puzzle split into a hundred pieces. Each piece gets a unique fingerprint (the SHA hash), stored separately. The manifest is the box lid showing how the pieces fit together. Break the manifest, lose the picture. Fill up the disk mid-download, you get half a puzzle.
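You can poke at this layout directly. A quick sketch, assuming a default install and an already-pulled llama3.2 – swap in whatever model and tag you actually have:
# Blobs: content-addressed chunks, one file per digest
ls ~/.ollama/models/blobs
# -> files named sha256-<64 hex characters>
# Manifests: small JSON files mapping model:tag to its blobs
cat ~/.ollama/models/manifests/registry.ollama.ai/library/llama3.2/latest
The manifest is tiny; the blobs it points to are the multi-gigabyte files.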
The Network Isolation Test
Once you download with ollama pull, it runs completely offline – only the initial pull needs internet. Ollama serves models via local REST API on localhost:11434.
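You can hit that API directly with curl – a quick sanity check that the server answers on loopback (assumes llama3.2 is already pulled):
# Ask the local API for a completion; no auth, loopback by default
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hi", "stream": false}'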
Verify this yourself:
# Terminal 1: Start packet capture
sudo tcpdump -i any 'not host 127.0.0.1'
# Terminal 2: Run Ollama query
ollama run llama3.2 "Explain quantum entanglement"
Packets in the tcpdump output (excluding localhost)? Something’s leaking. My tests across three machines: zero external packets after model load. (Ollama’s privacy policy: they don’t see your prompts or data when you run locally.)
Pro tip: Air-gapped networks? Copy the model blobs manually. Download on an internet-connected machine, tar up ~/.ollama/models, transfer via USB, extract on the isolated system. Ollama recognizes the blobs without re-downloading.
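A minimal sketch of that transfer, assuming default paths on both machines and a USB stick mounted at /media/usb:
# On the internet-connected machine
ollama pull llama3.2
tar -czf ollama-models.tar.gz -C ~/.ollama models
cp ollama-models.tar.gz /media/usb/
# On the air-gapped machine
mkdir -p ~/.ollama
tar -xzf /media/usb/ollama-models.tar.gz -C ~/.ollama
ollama list   # should show llama3.2 with zero network access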
For compliance contexts (healthcare under HIPAA, finance under rules like GLBA), local control isn’t optional – it’s the only way to avoid multi-party data processing agreements.
Three Ways Privacy Breaks
1. Cloud model mode
Ollama v0.12 introduced cloud models that run on datacenter hardware. Ollama’s stated policy is that its cloud doesn’t retain data, but inference happens remotely, not locally. Cloud models: tags ending in ‘-cloud’, sign-in to ollama.com required.
# Runs LOCALLY (safe)
ollama run llama3.2:3b
# Runs on OLLAMA.COM servers (not local!)
ollama run gpt-oss:120b-cloud
The distinction isn’t obvious in the model library. You think you’re running local AI, you’re hitting their servers. Cloud models behave like regular models in the CLI – you can ls, run, pull, cp them. Only signal: the “-cloud” suffix.
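One cheap guard: grep your installed models for the suffix before trusting a setup. A sketch (grep exits silently when nothing matches, so the warning only prints if a cloud tag is present):
# Flag any installed model that would run remotely
ollama list | grep -- '-cloud' && echo "WARNING: cloud models installed"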
2. Unencrypted history file
Ollama keeps your interactive prompt history in a local file. Plain text. On macOS/Linux: ~/.ollama/history. Any process running as your user can read every prompt you’ve sent.
# See your entire prompt history
cat ~/.ollama/history
# Disable history logging (set in the shell before launching the interactive client)
export OLLAMA_NOHISTORY=1
ollama run llama3.2
Processing genuinely sensitive data? Disable history or encrypt the parent directory. Default installations leave gaps – unencrypted model files, conversation logs, temp data.
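If encrypting the directory isn’t an option, tightening file permissions is a cheap partial mitigation – it stops other users on the machine, though not processes running as you:
# Restrict the Ollama state directory to your user only
chmod 700 ~/.ollama
chmod 600 ~/.ollama/history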
3. Model storage path bugs
Setting OLLAMA_MODELS: Ollama tries to create the entire directory structure, and if it lacks permission to write even the top directory, it fails – silently, which is a bug. You set OLLAMA_MODELS=/external/drive/models, but /external/drive isn’t writable by the ollama user? The whole operation fails with no error message.
# Wrong: Ollama can't write to /mnt
export OLLAMA_MODELS=/mnt/models
ollama pull llama3.2 # fails silently
# Right: Make entire path writable
sudo mkdir -p /mnt/models
sudo chown $(whoami):$(whoami) /mnt/models
export OLLAMA_MODELS=/mnt/models
ollama pull llama3.2 # works
Snap installations: models at /var/snap/ollama/common/models, not ~/.ollama/models. Assume the standard path when backing up? You’re looking in the wrong place.
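A quick check that finds whichever layout this machine actually uses before you back anything up:
# Print whichever model directory exists, with its size
for d in ~/.ollama/models /var/snap/ollama/common/models; do
  [ -d "$d" ] && du -sh "$d"
done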
Local vs Cloud: The Actual Tradeoff
Cloud LLMs: prompts transmitted to provider servers, data may be logged for abuse monitoring or model improvement, you depend on the privacy policy (which can change), data may cross jurisdictions.
Ollama local mode: Nothing leaves your machine. Zero network requests. Zero third-party logging or access (the local history file is the one caveat – see above). A far simpler compliance story by default, with no data processing agreements.
But cloud models are objectively better at complex reasoning. The 7B-class models most people can run locally (as of 2025) won’t match GPT-4o. Simple Q&A and summarization? Comparable. Multi-step reasoning or nuanced judgment? The gap is noticeable.
Hybrid strategy: Local models for sensitive data and high-volume tasks, cloud models for complex reasoning. Route financial data, medical records, proprietary code through Ollama. Route creative writing, advanced analysis, multimodal tasks to cloud APIs.
Ever notice how privacy tools promise perfection but break on edge cases? Ollama’s no different. The “local” promise holds – until you pick the wrong model tag, forget to check history settings, or assume the install path.
Verifying Data Stays Local
Don’t trust – verify. Checklist:
- Check model tag: `ollama list` and confirm no “-cloud” suffix
- Monitor network: `sudo lsof -i -P | grep ollama` should show only 127.0.0.1:11434
- Inspect history: `ls -lah ~/.ollama/history` (disable if sensitive)
- Verify storage: `du -sh ~/.ollama/models` matches expected model size
- Test offline: Disconnect network, run query – should work identically
Production deployments? Configure network isolation: disable internet during AI operation, encrypt the Ollama model directory, monitor all network connections to make sure nothing leaves localhost.
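On Linux installs that run Ollama as a systemd service (the official install script sets one up), one way to enforce “nothing leaves localhost” at the OS level is systemd’s per-unit IP filtering. A sketch, assuming cgroup v2 – note it also blocks ollama pull until you remove the drop-in:
# Pin the ollama service to loopback-only traffic
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/localhost-only.conf <<'EOF'
[Service]
IPAddressDeny=any
IPAddressAllow=localhost
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama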
What This Actually Means
Ollama’s local-first architecture works. Tested in air-gapped environments, monitored traffic, inspected blob storage. Non-cloud models? Data genuinely stays on your machine.
But “local” is a configuration choice, not automatic. Cloud models break the promise. Unencrypted history files leak data to any user-level process. Storage path bugs cause silent failures that push users toward re-downloading from the internet.
Know where your data lives. Verify the network isolation. Disable history if you’re working with regulated data. Always check the model tag before you assume privacy.
For step-by-step setup with privacy-first defaults, see the official Ollama documentation. For deeper storage internals, check the technical analysis of blobs and manifests. Privacy policy details at ollama.com/privacy.
Frequently Asked Questions
Can Ollama models send data to the internet after they’re downloaded?
No, unless you use a cloud model (tags ending in “-cloud”). Standard local models run entirely offline after the initial download. Verify: disconnect your network – the model works identically. The only expected network activity is the initial ollama pull (plus update checks, if you use the desktop app).
Where exactly does Ollama store my chat history and is it encrypted?
Plain text at ~/.ollama/history (macOS/Linux) or %LOCALAPPDATA%\Ollama (Windows). NOT encrypted by default. Any app running with your user permissions can read this file. To disable history logging entirely: set OLLAMA_NOHISTORY=1 before launching the interactive client. For sensitive workloads, either disable history or use filesystem-level encryption like LUKS (Linux) or FileVault (macOS). I’ve seen corporate installs where the security team missed this – a compliance audit found plain-text customer data in history files. Don’t be that team.
If I change the OLLAMA_MODELS directory, do I need to re-download all my models?
Not if you move files correctly. Content-addressed storage: as long as the blobs/ and manifests/ directories exist in the new location, Ollama recognizes the models. Safest approach: (1) stop Ollama, (2) move the entire ~/.ollama/models to the new location, (3) set export OLLAMA_MODELS=/new/path, (4) restart Ollama. Models load without re-downloading. But watch out: the OLLAMA_MODELS path needs write permissions for the entire directory chain, not just the final folder. Intermediate directories not writable by the ollama user? The change fails silently at startup, and you’ll hit “model not found” errors later.
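A sketch of that sequence on Linux or macOS, assuming /data/ollama-models as the new home and a manually run server (if Ollama runs as a systemd service, set OLLAMA_MODELS via the unit’s Environment= instead):
# 1. Stop the server
pkill ollama                      # or: sudo systemctl stop ollama
# 2. Move the store wholesale – blobs/ and manifests/ must travel together
mkdir -p /data/ollama-models
mv ~/.ollama/models/* /data/ollama-models/
# 3. Point Ollama at the new path and restart
export OLLAMA_MODELS=/data/ollama-models
ollama serve &
ollama list                       # models appear without re-downloading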