If you copy a LangChain tutorial from 2023 and run it today, half the imports will break. You’ll open the code, paste it in, and get a wall of deprecation warnings, ModuleNotFoundError, and broken chains. That’s not a bug – it’s the v1.0 namespace split, and most install guides haven’t caught up.
This guide deploys LangChain v1.x on a clean Python environment, gets create_agent running, and fixes the dependency conflicts you’ll actually hit in production. No “what is LangChain” preamble – you searched for an install guide because you already know.
## Why v1.0 changes how you install LangChain
LangChain 1.0 went GA on October 22, 2025 – the first major stable release, with a commitment to no breaking changes until 2.0. The catch: legacy functionality – `LLMChain`, `AgentExecutor`, `ConversationBufferMemory`, `MultiQueryRetriever`, indexing utilities – moved out of the main package and into a separate `langchain-classic` package (per the official v1 migration guide).
That means any code importing those names now needs a second install. Worth knowing the scale: 90 million monthly downloads and production deployments at Uber, JP Morgan, Blackrock, and Cisco – a lot of stale tutorial code is in the wild.
## System requirements (verified for v1.x)
| Component | Minimum | Recommended |
|---|---|---|
| Python | 3.10 | 3.11 or 3.12 |
| RAM | 2 GB free | 8 GB+ (for local embeddings) |
| Disk | ~500 MB for core + provider | 2 GB for full stack |
| OS | Linux / macOS / Windows | Linux or macOS for production |
| Package manager | pip 23+ | uv (much faster resolver) |
`langchain-community` 0.4.1 – released October 27, 2025 – requires Python `>=3.10.0,<4.0.0` (PyPI). Older articles citing Python 3.8 or 3.9 are stale; those versions were dropped in the 0.3.x line. Python 3.14 support is listed as coming soon on the LangChain blog, so 3.14 may work but isn’t officially supported as of this writing (April 2026).
## Install LangChain v1 step by step
Use a virtual environment. Always. The dependency tree of LangChain plus a model provider plus LangSmith pulls in dozens of packages, and a global install will break something within a month.
```bash
# 1. Create and activate a virtualenv
python -m venv .venv
source .venv/bin/activate     # macOS/Linux
.venv\Scripts\Activate.ps1    # Windows PowerShell

# 2. Upgrade pip
python -m pip install --upgrade pip

# 3. Install LangChain core + a provider extra
pip install -U "langchain[openai]"

# 4. Optional: legacy code support
pip install langchain-classic

# 5. Verify
python -c "import langchain; print(langchain.__version__)"
```
The `[openai]` extra pulls in `langchain-openai` automatically – no separate install needed. The PyPI page lists provider extras for anthropic, aws, azure-ai, baseten, community, deepseek, fireworks, google-genai, google-vertexai, groq, huggingface, mistralai, ollama, openai, perplexity, together, and xai. Pick whichever matches your model.
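Swapping providers is just a different extra. For example, Anthropic instead of OpenAI (`ANTHROPIC_API_KEY` is Anthropic’s standard key variable):

```bash
pip install -U "langchain[anthropic]"   # pulls in langchain-anthropic the same way
export ANTHROPIC_API_KEY="sk-ant-..."
```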
On the resolver: if you’re starting fresh, `uv` beats pip here. The official upgrade docs use `uv pip install --upgrade langchain` – it resolves LangChain’s dependency graph in seconds rather than minutes, and its stricter resolver surfaces conflicts that pip silently swallows.
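If you want to go all-in on uv, here’s a minimal sketch of the same setup (assumes uv itself is already installed):

```bash
uv venv .venv                           # create the environment
source .venv/bin/activate
uv pip install -U "langchain[openai]"   # same packages, faster resolution

# Later, upgrading – the command the official docs use:
uv pip install --upgrade langchain
```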
## First-time configuration: the create_agent flow
Set your provider key, then run a real agent – not the LLMChain hello-world that every other 2024 tutorial still recycles.
```bash
export OPENAI_API_KEY="sk-..."
```
```python
# agent.py
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Weather in Warsaw?"}]}
)
print(result["messages"][-1].content_blocks)
```
Two things in that snippet didn’t exist before v1. `create_agent` is the new primary agent constructor, built on the LangGraph runtime (LangChain blog, October 2025). And `message.content_blocks` – confirmed in the migration guide – gives you a provider-agnostic typed view of response content, so the same code works whether you swap to `anthropic:claude-sonnet-4` or `google_genai:gemini-2.0`.
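To see the provider-agnostic part in practice: swapping models is a one-line change, assuming the matching extra (here `langchain[anthropic]`) is installed and its API key is set.

```python
# Same tools, same invoke call – only the model string changes
agent = create_agent(
    model="anthropic:claude-sonnet-4",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)
```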
## Verifying the install actually works
The version check passes? Good. Now confirm the runtime, not just the import.
- Run `python -c "from langchain.agents import create_agent; print('ok')"`. If this errors, you’re on v0.x – uninstall and reinstall.
- Run the agent above with a real API key. A successful response means the provider package, `langchain-core`, and the LangGraph runtime all resolved correctly.
- Optional: enable tracing with `export LANGSMITH_TRACING=true` and a LangSmith key – useful for debugging tool calls later.
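If you opt into tracing, the minimal setup looks like this (the key comes from your LangSmith account settings):

```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="lsv2_..."

python agent.py   # this run now shows up as a trace in LangSmith
```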
## Common install errors and the fixes that actually work
This is where tutorials hand-wave. Real errors, real fixes:
1. ResolutionImpossible when adding a second framework. Mix LangChain v1 with `crewai` or `nemoguardrails` and pip will choke – crewai pins to an older `langchain-core`, producing “Cannot install crewai and langchain-ollama because these package versions have conflicting dependencies” (documented in crewai GitHub issue #1911). For Azure ML environments specifically: `azureml-rag==0.2.36` requires `langchain>=0.1.20,<0.3` and `langchain-community<0.3.0` – that pin keeps you on v0.x, so you either wait for the third-party framework to bump its langchain bound or open an issue upstream. A diagnostic sketch follows this list.
2. ModuleNotFoundError: No module named 'langchain.chains'. You’re on v1, your code is from 2024. Install `langchain-classic` and rewrite the import: `from langchain_classic.chains import LLMChain`. The same applies for `MultiQueryRetriever`, `CacheBackedEmbeddings`, indexing utilities, and hub imports – all moved to `langchain-classic` per the official migration guide (before/after sketch below).
3. TypeError when passing a Pydantic state schema to create_agent. Docs confirm: `create_agent` only supports TypedDict for state schemas. Pydantic models and dataclasses are no longer supported. Convert your schema to a TypedDict (conversion sketch below).
4. DeprecationWarning on message.text(). Drop the parentheses – `text` became a property in v1. The method form still runs but emits a warning; it will be removed in v2 (one-liner below).
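Snippet-sized illustrations for each. For the dependency conflict in fix 1, stock pip can tell you which installed package owns the old pin:

```bash
# "Required-by:" lists every installed package that depends on langchain-core –
# one of those usually carries the conflicting version bound
pip show langchain-core
```

The legacy-import rewrite from fix 2, using LLMChain as the example:

```python
# v0.x import – raises ModuleNotFoundError on v1:
# from langchain.chains import LLMChain

# v1, after `pip install langchain-classic`:
from langchain_classic.chains import LLMChain
```

The schema conversion from fix 3 – a minimal sketch with an illustrative field name:

```python
from typing import TypedDict

# Before – a Pydantic model, no longer accepted by create_agent:
# class AgentState(BaseModel):
#     user_name: str

# After – the TypedDict equivalent:
class AgentState(TypedDict):
    user_name: str
```

And the property fix from fix 4:

```python
msg = result["messages"][-1]
print(msg.text)      # v1: property access
# print(msg.text())  # method form – still runs, warns, gone in v2
```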
## Upgrading from v0.3 (or earlier)
Don’t run `pip install -U langchain` on a production v0.x project and pray. The migration touches imports, agent construction, and message handling across multiple release steps. Based on the official changelog and migration guide: 0.1.0 (January 2024) deprecated `LLMChain` and recommended LCEL; 0.2.0 (May 2024) deprecated `AgentExecutor`; 0.3.0 (September 2024) dropped Python 3.8/3.9 and required Pydantic v2; 1.0.0 (October 2025) introduced `create_agent`, built-in middleware, and moved legacy APIs to `langchain-classic`.
Migration playbook for an existing v0.3 codebase:
- Pin `langchain>=1.0,<2.0` in a fresh branch.
- Run your test suite – every failed import points to a `langchain_classic.*` rewrite.
- Replace `create_react_agent` from `langgraph.prebuilt` with `create_agent` from `langchain.agents` (see the sketch after this list).
- Convert any Pydantic state schemas to TypedDict.
- Search-and-replace `.text()` → `.text` on message objects.
- Check the official migration guide for any provider-specific changes.
Worth asking before you upgrade: does your team actually need v1’s middleware and content_blocks today, or is v0.3 stable enough for another quarter? The LangChain blog notes the 1.0 release commits to no breaking changes until 2.0 – but that’s a forward guarantee, not a migration shortcut. Sometimes the honest answer is “not yet.”
## Uninstall cleanly
If you’re nuking the install:
```bash
pip uninstall -y langchain langchain-core langchain-classic langchain-openai langchain-community langgraph langsmith
```
Then delete the virtualenv directory entirely. LangChain leaves cached model profiles in `~/.cache/langchain/` on Linux and macOS – remove that too for a truly fresh slate.
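On macOS or Linux that’s two removals:

```bash
rm -rf .venv                  # the virtualenv and everything pip put in it
rm -rf ~/.cache/langchain/    # cached model profiles
```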
## FAQ
### Do I still need LangGraph if I install LangChain v1?
It’s bundled. LangChain’s agent layer is built on LangGraph (per the LangChain blog), so installing langchain pulls it in. You only install langgraph directly when you need custom graph-based workflows below the create_agent abstraction – interrupt handling, custom state machines, that kind of thing.
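For a sense of what “below the create_agent abstraction” means, here’s a minimal raw-LangGraph sketch – the state and node names are illustrative, not from the LangChain docs:

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class State(TypedDict):
    count: int

def bump(state: State) -> dict:
    # Each node returns a partial state update
    return {"count": state["count"] + 1}

graph = StateGraph(State)
graph.add_node("bump", bump)
graph.add_edge(START, "bump")
graph.add_edge("bump", END)

app = graph.compile()
print(app.invoke({"count": 0}))  # {'count': 1}
```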
### What’s the difference between langchain, langchain-core, and langchain-community?
Three packages, three jobs. `langchain-core` holds the base abstractions – messages, runnables, tool interfaces – and gets installed automatically as a dependency. `langchain` is the main package: agents, orchestration, `create_agent`. `langchain-community` carries third-party integrations without a dedicated partner package – useful for obscure vector stores or document loaders, but most apps don’t need it once they’ve picked a specific provider package like `langchain-openai`.
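In import terms – `WebBaseLoader` here is just one example of a community-only integration, and needs `langchain-community` installed:

```python
from langchain_core.messages import HumanMessage       # base abstractions
from langchain.agents import create_agent              # orchestration layer
from langchain_community.document_loaders import WebBaseLoader  # third-party integration
```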
### Can I run LangChain entirely offline with a local model?
Yes. Install `langchain-ollama` alongside Ollama running locally, point `create_agent` at `"ollama:llama3.1"`, no cloud required. One real constraint: a 7B model needs roughly 8 GB RAM and a recent CPU or GPU to be tolerable inside agent loops – the latency per tool-call round-trip adds up fast on slow hardware.
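A sketch of the offline variant, assuming Ollama is serving llama3.1 on its default local port and `langchain-ollama` is installed:

```python
from langchain.agents import create_agent

# No API key – the model string routes to the local Ollama server
agent = create_agent(
    model="ollama:llama3.1",
    tools=[get_weather],  # reuse the tool from the earlier snippet
    system_prompt="You are a helpful assistant",
)
```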
Next step: spin up a virtualenv, run the create_agent snippet above with your real API key, and watch the response come back as content_blocks. Once you see a tool call resolve correctly, add middleware – the built-in human-in-the-loop one is the right first choice for any agent touching production data.