
Install LangGraph 1.1: Setup Guide for Production Agents

LangGraph 1.1 turns AI agents into production software. Here's the real install process - from pip to Docker - plus 3 config gotchas the docs skip.

6 min read · Intermediate

You want an AI agent that survives server restarts. Not a toy that works once, then forgets everything.

LangGraph 1.1 gives you that. Two install paths: pip for prototyping (state dies on restart), Docker for production (state persists in PostgreSQL). Most tutorials stop at pip. That’s step one. The real question: will your agent remember the conversation when your container crashes?

Think of LangGraph as a control system for LLMs. Your agent is a graph: nodes run logic (call an LLM, invoke a tool), edges route between nodes, state flows through. Unlike chains, graphs support cycles – your agent can loop back, retry, wait for human approval mid-execution. Like a flowchart that actually executes.
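Here is that loop in code. A minimal sketch with a counter standing in for the LLM so it runs as-is; the node names are illustrative, the cycle is the point:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    attempts: int

def work(state: State):
    # Pretend this node called an LLM or a tool; here it just counts.
    return {"attempts": state["attempts"] + 1}

def should_retry(state: State) -> str:
    # Conditional edge: loop back to "work" until we hit 3 attempts.
    return "work" if state["attempts"] < 3 else END

graph = StateGraph(State)
graph.add_node("work", work)
graph.add_edge(START, "work")
graph.add_conditional_edges("work", should_retry)
app = graph.compile()

print(app.invoke({"attempts": 0}))  # {'attempts': 3}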

What You’re Installing

LangGraph is a Python framework for building stateful AI agents. As of April 2026, version 1.1.6 is the latest stable release. Klarna, Uber, and J.P. Morgan use it for production workflows (per the official website).

Three things it handles: durable execution (state persists through crashes), human-in-the-loop (pause for approval anywhere), built-in memory (conversation history plus long-term facts). MIT-licensed – no vendor lock-in.
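Human-in-the-loop is the least obvious of the three, so here is a minimal sketch of the pause-and-resume pattern, assuming LangGraph's interrupt() / Command(resume=...) API and an in-memory checkpointer; the approval wording is illustrative:

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.types import Command, interrupt

def approve_step(state: MessagesState):
    # interrupt() pauses the run and surfaces this payload to the caller;
    # execution resumes when a Command(resume=...) is supplied.
    decision = interrupt("Approve this action?")
    print("Human said:", decision)
    return {"messages": []}

graph = StateGraph(MessagesState)
graph.add_node("approve_step", approve_step)
graph.add_edge(START, "approve_step")
graph.add_edge("approve_step", END)
app = graph.compile(checkpointer=InMemorySaver())

config = {"configurable": {"thread_id": "demo"}}
app.invoke({"messages": []}, config=config)       # stops at the interrupt
app.invoke(Command(resume="yes"), config=config)  # resumes with the human's answer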

System Requirements

Local development (pip):

  • Python 3.10+ (3.9 dropped in v1.1, 3.14 now supported as of April 2026)
  • 4GB RAM
  • Stable internet (LangGraph orchestrates locally, offloads inference to cloud APIs like OpenAI)
  • pip or uv package manager

Production (Docker):

  • 16GB RAM, 4 CPUs for LangGraph API
  • 8GB RAM, 2 CPUs for PostgreSQL
  • 4GB RAM, 2 CPUs for Redis
  • Docker + Docker Compose

These are the minimums from the official FAQ (as of April 2026). Kubernetes deployments can scale beyond them.

Install Method 1: pip (60-Second Setup)

Speed over persistence. No production readiness – just fast iteration.

Create a virtual environment. LangGraph and LangChain ship updates weekly; isolation prevents conflicts.

python -m venv langgraph-env
source langgraph-env/bin/activate # Windows: langgraph-env\Scripts\activate

pip install -U langgraph

The -U flag pulls the latest version (1.1.6 as of April 2026).

Verify:

python -c "import langgraph; print(langgraph.__version__)"

You’ll need an LLM provider. LangGraph doesn’t care which – OpenAI, Anthropic, local models via Ollama. Most people use LangChain integrations:

pip install langchain-openai # or langchain-anthropic
export OPENAI_API_KEY="sk-..."

Done. You can build graphs now. When your script ends? All state vanishes.

Install Method 2: Docker (The Real Deployment)

An agent that remembers across restarts, handles concurrent users, streams responses. Three services required: LangGraph API, PostgreSQL (checkpoints), Redis (streaming pub/sub).

The LangGraph CLI builds your Docker image:

pip install langgraph-cli
langgraph build -t your-langgraph-image

This reads langgraph.json and packages your graph. Example config:

{
  "dependencies": ["langchain-openai", "./your_package"],
  "graphs": {
    "agent": "./your_package/graph.py:graph"
  },
  "env": ".env",
  "python_version": "3.11"
}

Create docker-compose.yml to wire up all three services. Working template from the official docs:

version: '3'
services:
  langgraph-redis:
    image: redis:6
    healthcheck:
      test: redis-cli ping
      interval: 5s

  langgraph-postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    volumes:
      - langgraph-data:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready -U postgres

  langgraph-api:
    image: your-langgraph-image:latest
    ports:
      - "8000:8000"
    depends_on:
      langgraph-redis:
        condition: service_healthy
      langgraph-postgres:
        condition: service_healthy
    environment:
      REDIS_URI: redis://langgraph-redis:6379
      DATABASE_URI: postgres://postgres:postgres@langgraph-postgres:5432/postgres?sslmode=disable
      OPENAI_API_KEY: ${OPENAI_API_KEY}

volumes:
  langgraph-data:

Start:

docker-compose up

API runs on http://localhost:8000. State persists in PostgreSQL. Streams flow through Redis.

First Graph: Stateless vs Stateful

Minimum viable graph. Save as graph.py:

from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_openai import ChatOpenAI

def call_model(state: MessagesState):
    model = ChatOpenAI(model="gpt-4o")
    response = model.invoke(state["messages"])
    return {"messages": [response]}

graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_edge(START, "agent")
graph.add_edge("agent", END)
app = graph.compile()

Run:

from langchain_core.messages import HumanMessage

result = app.invoke({"messages": [HumanMessage(content="Hello")]})
print(result["messages"][-1].content)

Works. But every invocation starts from scratch.

Add a checkpointer for persistence:

import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver  # pip install langgraph-checkpoint-sqlite

# SqliteSaver wraps a sqlite3 connection, not a file path
conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
checkpointer = SqliteSaver(conn)
app = graph.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "user-123"}}
app.invoke({"messages": [HumanMessage(content="Hi")]}, config=config)

Conversations survive restarts. SQLite for local dev. Production? Swap to AsyncPostgresSaver.
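A hedged sketch of that swap, assuming the langgraph-checkpoint-postgres package is installed and reusing graph and HumanMessage from above; the connection string mirrors the docker-compose DATABASE_URI:

import asyncio
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver  # pip install langgraph-checkpoint-postgres

DB_URI = "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable"

async def main():
    async with AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer:
        await checkpointer.setup()  # creates the checkpoint tables on first run
        app = graph.compile(checkpointer=checkpointer)
        config = {"configurable": {"thread_id": "user-123"}}
        await app.ainvoke({"messages": [HumanMessage(content="Hi")]}, config=config)

asyncio.run(main())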

Verify the Install

Check version:

python -c "import langgraph; print(langgraph.__version__)"
# Output: 1.1.6 (or latest)

CLI installed?

langgraph --version

Run dev server:

langgraph dev

Starts API on http://localhost:8123 with hot reloading. Test:

curl http://localhost:8123/ok
# Response: {"ok":true}

3 Breaking Changes That Kill First Deploys

Breaking Change 1: langgraph.prebuilt module gone

Deprecated in LangGraph 1.0 (October 2025). Functionality moved to langchain.agents. Old code breaks:

# Breaks in 1.0+
from langgraph.prebuilt import create_react_agent

# Fix
from langchain.agents import create_agent

Can’t refactor yet? Pin to 0.x:

pip install langgraph==0.2.28

Breaking Change 2: ExecutionInfo import error

Version mismatch between langgraph and langgraph-prebuilt. It broke in early 2026 when a patch release shipped incompatible changes. Community workaround (March 2026):

pip install langgraph==1.0.5 langgraph-prebuilt==1.0.5 langgraph-checkpoint==3.0.1

Temporary fix – GitHub issue still open.

Breaking Change 3: Python 3.9 dropped

LangGraph 1.1.x requires Python 3.10+. Added 3.14 support (April 2026). If you’re on 3.9:

python --version # Check current version
pyenv install 3.11 # Or upgrade via your package manager

Bonus gotcha: langgraph dev uses in-memory storage

The dev server doesn’t persist state across restarts. Production requires PostgreSQL checkpointer. Caught me once – debugged a “missing conversation” bug for 20 minutes before realizing dev mode was the issue.

Optional: LangGraph Studio (macOS Only)

Visual IDE for debugging agents. It shows your graph as a flowchart, lets you step through execution node by node, and replay runs from checkpoints.

Currently macOS-only (as of April 2026). Download from GitHub releases.

Install CLI plugin:

pip install -U "langgraph-cli[inmem]" # the inmem extra is required for langgraph dev
langgraph dev

Studio auto-connects to localhost:8123 and visualizes your agent in real time. Code changes hot-reload instantly.

Linux/Windows users: no GUI, but langgraph dev still works – interact via API instead.

Upgrade from 0.x

Two breaking changes:

1. langgraph.prebuilt removed → migrate imports to langchain.agents
2. Python 3.9 dropped → need 3.10+

Upgrade:

pip install -U langgraph langchain

Find deprecated imports:

grep -r "from langgraph.prebuilt" .

Replace with from langchain.agents import create_agent. Full migration guide in the LangGraph 1.0 announcement.
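A hedged sketch of what migrated code looks like, assuming create_agent accepts a provider-prefixed model string and plain-function tools; check the migration guide for the exact signature your version ships:

from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It's always sunny in {city}."

agent = create_agent(model="openai:gpt-4o", tools=[get_weather])
result = agent.invoke({"messages": [{"role": "user", "content": "Weather in Oslo?"}]})
print(result["messages"][-1].content)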

Uninstall

Remove packages:

pip uninstall langgraph langchain-openai langchain

Delete virtual environment:

rm -rf langgraph-env

Docker cleanup:

docker-compose down -v

The -v flag deletes volumes (PostgreSQL data included). Omit if you want to keep data.

Deploy a test agent to your local Docker stack. Invoke it via SDK. If it responds and the conversation persists after docker-compose restart, you’re production-ready.
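Something like this, assuming the langgraph-sdk package and the "agent" graph name from langgraph.json; the URL matches the docker-compose port mapping above:

import asyncio
from langgraph_sdk import get_client  # pip install langgraph-sdk

async def main():
    client = get_client(url="http://localhost:8000")
    thread = await client.threads.create()
    async for chunk in client.runs.stream(
        thread["thread_id"],
        "agent",  # graph name from langgraph.json
        input={"messages": [{"role": "user", "content": "Hello"}]},
        stream_mode="updates",
    ):
        print(chunk.event, chunk.data)

asyncio.run(main())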

Does LangGraph require LangChain?

No. LangGraph is standalone. Most people use LangChain’s LLM integrations (langchain-openai, langchain-anthropic) because they’re convenient. You can use any LLM library – just pass responses into your graph’s state manually.
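For example, a node can call the OpenAI SDK directly. A sketch assuming a plain dict-based message state; the openai package and model name are illustrative choices, not requirements:

from typing import TypedDict
from openai import OpenAI
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: list  # plain dicts like {"role": "user", "content": "..."}

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_model(state: State):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=state["messages"],
    )
    reply = {"role": "assistant", "content": response.choices[0].message.content}
    return {"messages": state["messages"] + [reply]}

graph = StateGraph(State)
graph.add_node("agent", call_model)
graph.add_edge(START, "agent")
graph.add_edge("agent", END)
app = graph.compile()

print(app.invoke({"messages": [{"role": "user", "content": "Hello"}]}))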

Can I run LangGraph without cloud APIs?

Yes. Use Ollama or llama.cpp for local LLMs. LangGraph doesn’t care where the model runs – it just orchestrates calls. The 4GB RAM requirement assumes cloud inference. Running models locally? 8-24GB depending on model size.
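A sketch of the local swap, assuming the langchain-ollama package and an Ollama server with a pulled model; everything else stays the same as the cloud version:

from langchain_ollama import ChatOllama  # pip install langchain-ollama
from langgraph.graph import StateGraph, MessagesState, START, END

def call_model(state: MessagesState):
    model = ChatOllama(model="llama3.1")  # talks to localhost:11434 by default
    return {"messages": [model.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_edge(START, "agent")
graph.add_edge("agent", END)
app = graph.compile()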

Why both PostgreSQL and Redis?

PostgreSQL: stores checkpoints (agent state snapshots). Redis: streaming pub/sub (clients receive token-by-token output in real time). You can skip Redis if you don’t stream responses, but you lose the “agent is typing” UX. One debugging session without streaming taught me why it matters – users thought the agent froze when responses took 8+ seconds.