Continue: GitHub Copilot Alternative – Install Guide v1.3.21

Deploy Continue, the open-source GitHub Copilot alternative, in VS Code with your own model. Install steps, config.yaml setup, and real fixes for known errors.

7 min read · Intermediate

Here’s something most tutorials skip: Continue isn’t a model. It’s a thin transport layer between your editor and whatever LLM you point it at. That means the same extension can run Claude Sonnet, a local Qwen Coder on your laptop, and Codestral via Mistral’s API – all from one config file. If you’re shopping for a GitHub Copilot alternative and you want to actually own the stack, this is the one to deploy.

This guide installs the latest VS Code build, wires up a working autocomplete model, and lists the install errors that will bite you in the first hour. No marketing fluff.

What you’re actually deploying

Continue is an open-source IDE extension for VS Code and JetBrains. It’s licensed Apache 2.0 and the company shipped the 1.0 milestone on February 26, 2025 alongside what they then called the Continue Hub (now Mission Control). At that launch they reported over 20,000 GitHub stars and 10,000+ Discord members – so this isn’t a toy project.

Latest stable as of the October 21, 2025 changelog: v1.3.21 for VS Code and v1.0.50 for JetBrains. The repo ships rapid releases, so check the changelog before you start – version numbers move weekly.

System requirements

| Component | Minimum (approximate) | Recommended (approximate) |
| --- | --- | --- |
| OS | macOS 11+, Windows 10+, Linux (any modern distro) | same |
| VS Code | recent build – check the extension page for the current minimum | latest stable |
| RAM | 4 GB (cloud models) | 16 GB+ (local models via Ollama) |
| Disk | ~200 MB extension | +10-40 GB if you run local 7B-14B models |
| Node.js (CLI only) | 20+ | 22 LTS |

The CLI is a separate beast. If you only want chat and autocomplete inside VS Code, you don’t need Node at all – the extension bundles its own runtime. Node 20+ only matters if you also install the cn CLI for CI checks.
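A quick way to check whether your Node version clears the bar before installing the CLI – the version string and fallback message here are illustrative, not Continue's own tooling:

```shell
# Node is only needed for the cn CLI, not the VS Code extension
if command -v node >/dev/null 2>&1; then
  node --version   # want v20 or newer for the CLI
else
  echo "node not installed - fine if you skip the CLI"
fi
```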

Install the extension

Open VS Code, go to Extensions, search Continue, install the one published by Continue.continue. Or grab it directly from the VS Code Marketplace. A new sidebar icon appears once it’s loaded.

If you also want the CLI for PR checks in CI:

# macOS / Linux
curl -fsSL https://raw.githubusercontent.com/continuedev/continue/main/extensions/cli/scripts/install.sh | bash

# Windows PowerShell
irm https://raw.githubusercontent.com/continuedev/continue/main/extensions/cli/scripts/install.ps1 | iex

# verify
cn --version

JetBrains users: install from the JetBrains Marketplace inside Settings → Plugins. Same config file, same behavior.

First-time configuration

The first time you open the Continue panel, it auto-generates ~/.continue/config.yaml on Mac/Linux or %USERPROFILE%\.continue\config.yaml on Windows. That’s where everything lives – models, rules, MCP servers, context providers. The official configuration docs confirm that saving the file triggers an automatic reload.

A minimum viable config – chat via Anthropic, autocomplete via Codestral:

name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: Claude Sonnet
    provider: anthropic
    model: claude-sonnet-4-5 # check Anthropic's current model list - names change
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
      - edit
  - name: Codestral
    provider: mistral
    model: codestral-latest
    apiKey: ${{ secrets.CODESTRAL_API_KEY }}
    roles:
      - autocomplete
context:
  - provider: code
  - provider: diff
  - provider: terminal
  - provider: codebase

The ${{ secrets.X }} syntax matters more than it looks. The Continue FAQ is explicit: IDE extensions cannot read your shell’s environment variables. Running export ANTHROPIC_API_KEY=... in your terminal does nothing – the IDE process never sees it. Keys go in a .env file inside ~/.continue/ (add it to .gitignore):

# ~/.continue/.env
ANTHROPIC_API_KEY=sk-ant-...
CODESTRAL_API_KEY=...
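One way to create that file from a terminal, with restrictive permissions so other local users can’t read your keys – the key values below are placeholders, substitute your real ones:

```shell
mkdir -p ~/.continue
# Overwrites any existing .env - skip this if you already have one
cat > ~/.continue/.env <<'EOF'
ANTHROPIC_API_KEY=sk-ant-placeholder
CODESTRAL_API_KEY=placeholder
EOF
chmod 600 ~/.continue/.env   # readable only by you
```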

Model choice for autocomplete: Continue’s docs recommend Codestral, StarCoder, and Qwen Coder – all trained for fill-in-the-middle (FIM). Even 3B-parameter models perform well in this role. Large chat models like GPT-4-class often produce worse autocomplete results despite being more capable in other contexts. Don’t put Claude in the autocomplete role.

Verify it works

Start with chat. Open the Continue sidebar (Cmd/Ctrl + L), type “hello” – a response within a few seconds means the chat model and API key are wired. Then open any source file and start typing a function: ghost text should appear inline (Tab to accept, Esc to dismiss). If @codebase context is slow the first time, that’s normal – indexing runs in the background and shows a progress indicator.

No autocomplete suggestions? Almost always one of three things: missing autocomplete role on a model, the API key not loading from .env, or the autocomplete model doesn’t support FIM. Fix one at a time.

Common install errors

Most of these cluster around the config file, not the extension itself. Continue is essentially a config-driven router – if the config is wrong or stale, errors surface immediately. That also means fixes are usually fast once you know where to look.

1. Unable to load schema from ... config-yaml-schema.json: No content

Fix first, ask questions later: close VS Code, delete config.yaml and config.json from ~/.continue, reopen. The extension regenerates both. The cause, documented in GitHub issue #5545: after a version bump (say 1.0.7 → 1.0.8), the old extension folder gets removed but VS Code keeps trying to load the schema from the previous path.
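The delete-and-regenerate step as shell commands, assuming the default config location:

```shell
# Close VS Code first, then:
rm -f ~/.continue/config.yaml ~/.continue/config.json
# Reopen VS Code - the extension regenerates both files
```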

2. Failed to parse config.json: ENOENT: no such file or directory

Happens when Continue still expects the legacy JSON config but you’ve already moved to YAML. The fix isn’t satisfying: create an empty config.json alongside your config.yaml. The extension prefers YAML when both exist but won’t crash on the missing file.
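Creating the empty placeholder from a terminal – a sketch assuming the default config directory:

```shell
mkdir -p ~/.continue
touch ~/.continue/config.json   # empty placeholder; YAML still wins when both exist
```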

3. spawn ENAMETOOLONG when starting an MCP server (macOS)

The cause is an oversized environment passed to the spawned MCP process – the troubleshooting page covers this. Use the absolute binary path in your config instead of relying on PATH resolution:

mcpServers:
  - name: Memory MCP server
    command: /usr/local/bin/npx # full path, not just "npx"
    args:
      - -y
      - "@modelcontextprotocol/server-memory"
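To find the absolute path on your machine (the fallback message is just for illustration):

```shell
# Print the absolute npx path to paste into the command field
command -v npx || echo "npx not found - install Node.js first"
```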

4. YAML anchors silently break

Try to deduplicate API keys with &anchor / <<: *anchor syntax and the parser rejects it without a clear message. The config.yaml reference requires a YAML version directive at the top:

%YAML 1.1
---
name: My Config
version: 1.0.0
schema: v1
...
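With the directive in place, anchors work as expected. A hedged sketch of deduplicating a repeated API key across two model entries – the model names and roles here are illustrative, and whether your schema version accepts this exact shape is worth verifying against the config.yaml reference:

```yaml
%YAML 1.1
---
name: My Config
version: 1.0.0
schema: v1
models:
  - name: Claude Sonnet (chat)
    provider: anthropic
    model: claude-sonnet-4-5
    apiKey: &anthropic_key ${{ secrets.ANTHROPIC_API_KEY }}  # define the anchor
    roles:
      - chat
  - name: Claude Sonnet (edit)
    provider: anthropic
    model: claude-sonnet-4-5
    apiKey: *anthropic_key  # reuse it
    roles:
      - edit
```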

5. Local Ollama “Unable to connect”

Run ollama serve, not just ollama run llama3. Hit http://localhost:11434 in a browser – “Ollama is running” means it’s up. Running Ollama on a separate machine? Point apiBase at that machine’s IP and open port 11434 in the firewall.
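For the remote case, the model entry might look like this – the IP address and model tag are hypothetical, substitute your own:

```yaml
models:
  - name: Qwen Coder (remote Ollama)
    provider: ollama
    model: qwen2.5-coder:7b
    apiBase: http://192.168.1.50:11434  # hypothetical LAN IP of the Ollama host
    roles:
      - autocomplete
```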

Upgrade and uninstall

VS Code handles extension updates automatically. If a bug fix landed but the stable release is still days away, look for the pre-release toggle on the Continue extension page in VS Code.

Migrating from the old config.json? Drop a config.yaml in ~/.continue/ alongside it. When both exist, YAML wins. Per the config migration docs: contextProviders becomes context, systemMessage becomes rules, and tabAutocompleteModel moves into models with roles: [autocomplete].
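A sketch of what the migrated YAML might look like, with the legacy keys noted in comments – the rule text and model entry are illustrative, not a literal conversion of any particular config.json:

```yaml
# Legacy JSON key       -> new YAML key:
#   contextProviders     -> context
#   systemMessage        -> rules
#   tabAutocompleteModel -> models entry with roles: [autocomplete]
rules:
  - Prefer TypeScript over JavaScript.  # was systemMessage
models:
  - name: Codestral                     # was tabAutocompleteModel
    provider: mistral
    model: codestral-latest
    roles:
      - autocomplete
context:                                # was contextProviders
  - provider: code
  - provider: diff
```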

Clean wipe – use this when you’ve corrupted the config or want to start from zero:

# macOS / Linux
rm -rf ~/.continue
# Then uninstall the extension via VS Code, restart, reinstall

The ~/.continue directory holds configs and local data (on Windows: %USERPROFILE%\.continue). Deleting it before uninstalling is the only way to actually start fresh – per the FAQ, just uninstalling the extension leaves all that data behind.

FAQ

Does Continue work fully offline?

Yes – pair it with Ollama and a local coder model like Qwen Coder or StarCoder. Zero data leaves your machine.

Why is autocomplete slower than GitHub Copilot?

Copilot runs on GitHub’s own infrastructure with a model tuned for low-latency completion. With Continue, the bottleneck is whatever provider and model you’ve chosen. The most common trap: using a large chat model in the autocomplete role. Chat models aren’t trained for fill-in-the-middle tasks, so they’re noticeably slower and produce worse suggestions. Swap the autocomplete role to a small FIM-trained model – Codestral or a 3B Qwen Coder – and the difference is immediate. Continue itself adds minimal overhead.

Can my team share one config?

Yes, and the setup is straightforward. Drop a .continue/ folder at your repo root, add a config.yaml, commit it. Everyone with the extension picks it up when they open the workspace. One important rule: API keys stay in each developer’s local ~/.continue/.env, never in the committed file. The ${{ secrets.NAME }} syntax in config.yaml is exactly what makes this split possible – the committed config references key names, not key values. Rotate a key? Each developer updates their own .env. Nothing touches version control.

Next step: install the extension, drop the minimum config above into ~/.continue/config.yaml, add your API keys to ~/.continue/.env, and run the three verification checks. If autocomplete fires on the first function you type, you’re done.