So you want to deploy an agent that survives a server restart mid-conversation. That’s the actual reason most people end up at LangGraph – and it’s also where most install tutorials stop being useful.
This guide is specifically about agent orchestration: the install, the runtime, the checkpointer config that makes persistence real, and the version traps that have cost production teams hours in the past few months.
What you’re actually installing
LangGraph isn’t one package – it’s four, versioned independently: the runtime (langgraph), the CLI (langgraph-cli), a prebuilt agents layer (langgraph-prebuilt), and a checkpoint backend (langgraph-checkpoint). That independence is the source of about half the install bugs you’ll hit. No single pip install langgraph command pulls a coherent, tested combination.
LangGraph 1.0 went GA on October 22, 2025 – first stable major release. As of early 2026, the latest stable is langgraph 1.1.10 paired with langgraph-prebuilt 1.0.12; 1.2.0a5 is in alpha. The team has committed to no breaking changes until 2.0.
Under the hood, LangGraph draws from Pregel and Apache Beam, with a public interface modeled on NetworkX. What that means in practice: your agent is a graph with cycles, and the runtime checkpoints state between every node transition – which is why recovery and human-in-the-loop pauses come for free.
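To make that concrete, here is a minimal sketch of the primitive underneath every agent: a StateGraph with a cycle and a checkpointer. The state shape, node names, and retry limit are invented for illustration; only the imports come from LangGraph.

from typing import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import END, START, StateGraph

class State(TypedDict):
    attempts: int

def work(state: State) -> State:
    return {"attempts": state["attempts"] + 1}

def should_retry(state: State) -> str:
    # The cycle: loop back into "work" until three attempts have run.
    return "work" if state["attempts"] < 3 else END

builder = StateGraph(State)
builder.add_node("work", work)
builder.add_edge(START, "work")
builder.add_conditional_edges("work", should_retry)

# Every node transition gets checkpointed; swap InMemorySaver for the
# SQLite or Postgres saver to make that durable across restarts.
graph = builder.compile(checkpointer=InMemorySaver())
graph.invoke({"attempts": 0}, {"configurable": {"thread_id": "demo"}})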
System requirements
Python 3.10 minimum – 3.9 was dropped because it hit end-of-life in October 2025. The official docs don’t publish RAM minimums for the library itself, but running langgraph dev with the in-memory runtime needs roughly 1-2 GB of headroom on top of your LLM client (rough estimate – your mileage will vary by model size).
| Component | Minimum | Recommended |
|---|---|---|
| Python | 3.10 | 3.12 |
| OS | macOS, Linux, Windows | Linux for production |
| RAM | 2 GB free | 4 GB+ for local Studio |
| Disk | ~200 MB for deps | 2 GB+ if using Postgres checkpointer locally |
Production state needs a real database. SQLite works for single-process apps. Postgres is the standard for anything multi-worker.
Install for agent orchestration
Minimum viable install – runtime, CLI, agents layer, checkpoint backend:
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -U "langgraph>=1.1,=1.0"
"langgraph-cli[inmem]"
"langgraph-checkpoint-sqlite"
# LLM provider - pick one
pip install langchain-openai
# or: pip install langchain-anthropic
Two things. The CLI gets the [inmem] extra – skip it and langgraph dev throws a misleading error about a package that doesn’t exist. And the version pin on langgraph isn’t paranoia: pip auto-resolves langgraph-prebuilt to latest, and “latest” has shipped breakage twice in the past six months.
Lock it: pin langgraph-prebuilt explicitly in your requirements.txt or pyproject.toml. Treat it like a database driver – the patch version matters.
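A sketch of what that looks like in requirements.txt; the patch numbers mirror the versions named earlier in this guide, so substitute whatever your own environment actually resolved:

# Pin the orchestration stack, not just the runtime.
langgraph==1.1.10
langgraph-prebuilt==1.0.12
langgraph-checkpoint-sqlite
langgraph-cli[inmem]>=0.2.6
langchain-openai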
A graph that actually persists
Default tutorials show a ReAct agent with in-memory state. Kill the process – state gone. For real orchestration, attach a SQLite checkpointer:
import sqlite3

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.sqlite import SqliteSaver

# SqliteSaver.from_conn_string() returns a context manager in current
# langgraph-checkpoint-sqlite releases; for a long-lived agent, open the
# connection yourself and hand it to the saver directly.
conn = sqlite3.connect("./agent_state.db", check_same_thread=False)
checkpointer = SqliteSaver(conn)

agent = create_agent(
    model=ChatOpenAI(model="gpt-4o-mini"),
    tools=[],
    checkpointer=checkpointer,
)

config = {"configurable": {"thread_id": "user-42"}}
result = agent.invoke(
    {"messages": [("user", "Remember my favorite color is teal.")]},
    config,
)
The import matters: from langchain.agents import create_agent. Copy from a 2024 tutorial and you’ll get the old langgraph.prebuilt path – which is now deprecated. More on that in error #4 below.
The thread_id is the orchestration primitive. Different IDs, different conversations, each checkpointed independently. Kill the process, restart, invoke with the same thread_id – the agent picks up exactly where it stopped.
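Here is what resumption looks like in a fresh process, assuming the script above has been run again so agent and the SQLite file exist; the follow-up question is just an illustration:

# Fresh process, same agent_state.db, same thread_id: the checkpointer
# reloads the earlier turns before this invoke runs.
config = {"configurable": {"thread_id": "user-42"}}
followup = agent.invoke(
    {"messages": [("user", "What's my favorite color?")]},
    config,
)
print(followup["messages"][-1].content)  # should mention "teal"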
Verify the install
Three checks:
python -c "import langgraph; print(langgraph.__version__)"→ should print 1.1.xlanggraph --help→ should showdev,up,build,newsubcommands- Run the agent snippet above twice with the same thread_id. Second run should remember the color.
Want a UI? Run langgraph dev – it binds to 127.0.0.1:2024 by default and opens LangGraph Studio in your browser, with a visual graph view, state time-travel, and a debugger. Production deployment uses langgraph up instead (Docker stack, Postgres, port 8123).
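One prerequisite the quickstart glosses over: langgraph dev looks for a langgraph.json in the project root that points at a compiled graph. A minimal sketch, assuming the persistence example above lives in agent.py and its agent variable is what you want served:

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:agent"
  },
  "env": ".env"
}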
The four errors that will eat your afternoon
None of these appear in standard tutorials. They’re documented somewhere on GitHub or the LangChain forum – but you have to already know the right search terms to find them.
1. ModuleNotFoundError: No module named ‘langgraph._internal’
Turns out pip’s version resolution works against you here. When you install langgraph, pip pulls the latest langgraph-prebuilt automatically. Starting with langgraph-prebuilt 1.0.9, the prebuilt package began importing new runtime classes (ExecutionInfo, ServerInfo) that don’t exist in langgraph 1.0.x. Nothing flags the mismatch at install time; it only surfaces when the import fails, locally or in a cloud deployment. Fix: if you’re on langgraph 1.0.x, pin langgraph-prebuilt==1.0.5. On 1.1.x, match the prebuilt minor that shipped with your langgraph release.
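For the 1.0.x case, the pin concretely:

# Hold the runtime on 1.0.x and prebuilt on the last compatible patch.
pip install -U "langgraph>=1.0,<1.1" "langgraph-prebuilt==1.0.5"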
2. “Required package ‘langgraph-api-inmem’ is not installed”
You ran pip install langgraph-cli without the extra. The error message tells you to install langgraph-api-inmem, which is the wrong package name – the actual fix is pip install -U "langgraph-cli[inmem]". Annoying precisely because the error message misleads you.
3. Studio shows “Failed to load assistants” in Safari or Brave
Safari blocks plain-HTTP traffic on localhost – langgraph dev runs plain HTTP. The fix: upgrade to langgraph-cli ≥0.2.6 and run langgraph dev --tunnel, which outputs a Cloudflare tunnel URL that Safari accepts. Brave does the same when Shields are enabled; either disable Shields for the LangSmith domain or use the tunnel flag. Chrome and Firefox work without any workaround.
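The sequence, spelled out:

pip install -U "langgraph-cli[inmem]>=0.2.6"
langgraph dev --tunnel   # prints a Cloudflare tunnel URL that Safari accepts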
4. ImportError on langgraph.prebuilt.create_react_agent
LangGraph v1 deprecates langgraph.prebuilt, with functionality moved to langchain.agents. Replace from langgraph.prebuilt import create_react_agent with from langchain.agents import create_agent. They’re not 1:1 though – some teams have found that message-history rewriting that worked in create_react_agent doesn’t have a clean equivalent in create_agent. If you relied on that pattern, model it as middleware instead.
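The swap, side by side (the model choice is just a stand-in):

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

# Deprecated path from pre-1.0 tutorials:
#   from langgraph.prebuilt import create_react_agent
#   agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[])

# LangGraph v1 equivalent:
agent = create_agent(model=ChatOpenAI(model="gpt-4o-mini"), tools=[])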
On what “orchestration” actually requires
Most frameworks call themselves orchestrators. Few earn it. The thing that qualifies LangGraph is the checkpointer – every state transition is a row in a database, which means failures are recoverable, multi-day workflows are possible, and human-in-the-loop pauses don’t require custom infrastructure.
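You can inspect those rows directly. A small sketch, assuming the SQLite-backed agent from the persistence example above: get_state_history walks every checkpoint recorded for a thread, newest first.

config = {"configurable": {"thread_id": "user-42"}}

# One StateSnapshot per checkpoint the saver has written for this thread.
for snapshot in agent.get_state_history(config):
    checkpoint_id = snapshot.config["configurable"]["checkpoint_id"]
    print(checkpoint_id, len(snapshot.values["messages"]))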
Is any of this worth it for a chatbot that lives 30 seconds? No – use plain LangChain. But here’s the honest question: how long does your agent actually need to run? If the answer is “until a human approves something” or “until an external API responds, which might be tomorrow”, then the version-pinning ceremony is the cheapest part of the build.
Upgrading and uninstalling
Coming from 0.x: update Python to 3.10+ first. Then:
pip install -U langgraph langchain
The create_react_agent import is deprecated (see error #4). No other breaking changes until 2.0 – that’s the backward-compatibility promise made at the 1.0 GA announcement.
Clean uninstall:
pip uninstall langgraph langgraph-cli langgraph-checkpoint \
    langgraph-checkpoint-sqlite langgraph-prebuilt langgraph-sdk
rm -rf ./agent_state.db ./.langgraph_api
The .langgraph_api directory is where langgraph dev caches local runtime data (based on observed behavior – not formally documented as of early 2026). Wipe it if corrupted state is causing startup errors.
FAQ
Do I need LangChain installed if I only want LangGraph orchestration?
Technically no – LangGraph runs standalone with raw StateGraph primitives and any HTTP client. But create_agent now lives in langchain.agents, and the LLM client libraries (langchain-openai, langchain-anthropic) are the path of least resistance for connecting to models. You can avoid LangChain entirely, but you’re writing glue code that those packages already handle. Most teams don’t bother avoiding it. One case where it makes sense: if you’re building a very thin agent on top of a single model’s SDK and want to minimize dependency surface area – then raw StateGraph is worth the extra code.
What checkpointer should I use in production?
Postgres. SQLite locks aggressively under concurrent writes – fine for a CLI tool, wrong for anything serving multiple users simultaneously.
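A sketch of the swap, assuming langgraph-checkpoint-postgres is installed and treating the connection string as a placeholder for your own instance:

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.postgres import PostgresSaver

# Placeholder connection string -- point it at your real database.
DB_URI = "postgresql://user:pass@localhost:5432/agent_state"

# from_conn_string() yields a context-managed saver; setup() creates the
# checkpoint tables on first use.
with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()
    agent = create_agent(
        model=ChatOpenAI(model="gpt-4o-mini"),
        tools=[],
        checkpointer=checkpointer,
    )
    agent.invoke(
        {"messages": [("user", "hello")]},
        {"configurable": {"thread_id": "user-42"}},
    )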
Is LangGraph free for commercial use?
Yes – LangGraph is MIT-licensed. The runtime, CLI, and all orchestration primitives are free. What costs money is LangGraph Platform – managed deployment, autoscaling, hosted Studio. That’s optional. Everything in this guide is free.
Next step: stand up a SQLite-backed agent on your machine right now, kill the process mid-conversation, restart it with the same thread_id. If it remembers – your install is correct. If it doesn’t, the checkpointer isn’t wired up. Check the checkpointer= argument on create_agent.