
How to Build Browser Extensions with AI Help (2026 Guide)

Learn how to build browser extensions with AI help in 2026 - Manifest V3 traps, the Claude vs ChatGPT vs Gemini test, and what publishing actually costs.

8 min read · Intermediate

Here’s a fact most AI tutorials skip: as of July 24, 2025, Manifest V2 became disabled everywhere – the “re-enable” option was removed. Yet when you ask ChatGPT to build a Chrome extension today, it still often generates manifest_version: 2 by default. Your AI-generated extension may load fine on your machine and silently break for everyone else.

That single gotcha tells you everything about how to build browser extensions with AI help in 2026: the AI writes plausible code, the browser doesn’t accept plausible code, and you’re the one stuck reading error logs at 1 AM.

The takeaway, upfront

If you only read this far: use Claude (or Claude Code) as the primary builder, force Manifest V3 in your first prompt, and assume any DOM access in the background script is broken until proven otherwise. Everything else is detail.

A 60-second background

manifest.json is the spine of every extension – it tells Chrome what files to load, what permissions to request, and which sites the extension touches. The catch: a service worker now replaces the old background page. Service workers can’t touch the DOM, can’t keep variables in memory indefinitely, and can’t run remote code. Most AI-generated tutorials still pretend it’s 2022.

Think of a service worker like a contractor who shows up, does the job, and goes home. No desk. No filing cabinet. Anything left on the floor when they leave is gone. That mental model explains roughly half the bugs in AI-generated extension code.

Method A vs Method B: which AI actually builds working extensions

You have two real options. Method A is the chat interface (ChatGPT, Claude, Gemini in a browser tab). Method B is an agentic IDE or CLI (Cursor, Claude Code, Codex). For a first extension, Method A wins on simplicity – but only with the right model.

A MakeUseOf reporter ran the cleanest public test I’ve seen: same prompt, three models, same target. Claude built the extension fastest, with the fewest messages, and was the only tool that produced something fully functional by the end. ChatGPT built something workable; Gemini simply didn’t. (Test run on specific model versions in 2025 – results will shift as models update.)

| Tool | Outcome in a real test | Best for |
| --- | --- | --- |
| Claude (chat) | Fully functional, fewest prompts | First-time builders |
| ChatGPT (chat) | Workable after multiple patch rounds | If you’re already paying for Plus |
| Gemini (chat) | Did not produce a working extension | Skip for this task |
| Claude Code / Cursor | Better for iterating past version 1 | Once you have something to refine |

Fair warning though: that’s one test, one extension idea, run on specific model versions. The pattern across community reports is consistent – Claude tends to produce closer-to-working MV3 code on the first try. But your task might flip the rankings.

Walkthrough: building a working extension with Claude

I’m going to skip the obligatory “build a to-do list” example because every tutorial uses it. Pick something you’d actually use. For this walkthrough, I’ll reference a context-menu-based text helper, since it exercises three things AI gets wrong most often: permissions, content scripts, and message passing.
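To make that concrete, here’s the rough shape of the service-worker half of such a helper – a minimal sketch, not the article’s actual project. The menu id, message fields, and file name are illustrative assumptions:

```javascript
// background.js (service worker) – requires the "contextMenus" permission.
// The 'text-helper' id and PROCESS_TEXT message shape are made up for this sketch.
chrome.runtime.onInstalled.addListener(() => {
  chrome.contextMenus.create({
    id: 'text-helper',       // arbitrary id, unique within this extension
    title: 'Run text helper on "%s"',
    contexts: ['selection']  // only appear when text is selected
  });
});

chrome.contextMenus.onClicked.addListener((info, tab) => {
  if (info.menuItemId !== 'text-helper' || !tab?.id) return;
  // Hand the selected text to the content script in that tab;
  // the content script does the DOM work the worker can't.
  chrome.tabs.sendMessage(tab.id, { type: 'PROCESS_TEXT', text: info.selectionText });
});
```

Even this small skeleton touches all three trouble spots: a permission (`contextMenus`), a content script on the receiving end, and one round of message passing.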

Step 1 – The opening prompt that prevents 80% of bugs

Don’t say “build me a Chrome extension that does X.” Say this:

Build a Chrome extension using Manifest V3 only.
Requirements:
- manifest_version must be 3
- Use a service worker for background logic, not background.scripts
- Request the minimum permissions necessary - no <all_urls> unless I confirm
- If you need DOM access in the background, use an offscreen document and explain why
- Persist state with chrome.storage, never in module-scope variables

What it should do: [your description]
Return the full file tree and contents.

Those five constraints map directly to the five most common failures in AI-generated extension code. Skip them and you’ll spend the next hour pasting errors back into chat.
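For reference, when the AI follows those constraints, the manifest it returns should look roughly like this – names, versions, and the host pattern below are placeholders, not a prescription:

```json
{
  "manifest_version": 3,
  "name": "Text Helper",
  "version": "0.1.0",
  "background": { "service_worker": "background.js" },
  "permissions": ["contextMenus", "storage"],
  "host_permissions": ["https://example.com/*"],
  "content_scripts": [
    {
      "matches": ["https://example.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

If the model hands you `"manifest_version": 2`, a `"background": { "scripts": [...] }` key, or `<all_urls>` anywhere, it ignored the constraints – restate them before iterating.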

Step 2 – Load it unpacked and watch what actually happens

Go to chrome://extensions, flip on Developer Mode, click “Load unpacked,” select your folder. Syntax error in the manifest? Chrome tells you immediately. Logic bug? You won’t find out until you click the icon and nothing happens – so click it on purpose, right now, before you write another line.

Best debugging move: right-click the extension icon → Inspect popup → Console tab. For service worker bugs, use the “service worker” link on the extensions page itself. Two different consoles. Don’t mix them up.

Step 3 – When the AI hands you broken code (it will)

Paste the exact error message back. Don’t paraphrase. Don’t explain what you think went wrong. The model is better at parsing stack traces than you are at describing them.

Pro tip: when an AI keeps generating the same broken pattern, open a fresh chat. Conversation memory in long sessions makes models double down on early mistakes – especially with manifest versions and deprecated APIs.

The edge cases nobody warns you about

The DOM-in-service-worker trap

Ask any AI for a background script that parses HTML or uses DOMParser, and it’ll happily generate one. It will not run. Per Chrome’s official MV3 migration docs, service workers can’t access the DOM or the window interface – those calls must move to an offscreen document. The fix:

// In your service worker:
await chrome.offscreen.createDocument({
  url: 'offscreen.html',
  reasons: ['DOM_PARSER'],
  justification: 'Parse fetched HTML'
});
chrome.runtime.sendMessage({ target: 'offscreen', data: html });

The Offscreen API lets your extension use DOM APIs in a hidden document – no new tabs, no user-visible interruption. Tell your AI explicitly: “this needs an offscreen document.” It rarely volunteers that information.
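The other half is the offscreen document itself, which does have DOM access. A minimal sketch of the receiving side – the message shape and field names here are assumptions that must match whatever your service worker sends:

```javascript
// offscreen.js, loaded by offscreen.html – this context *can* use DOMParser.
chrome.runtime.onMessage.addListener((message) => {
  if (message.target !== 'offscreen') return; // ignore messages meant for others
  const doc = new DOMParser().parseFromString(message.data, 'text/html');
  // Do the DOM work here, then send only plain data back to the worker –
  // you can't pass DOM nodes across the message boundary.
  chrome.runtime.sendMessage({
    target: 'background',
    title: doc.querySelector('title')?.textContent ?? ''
  });
});
```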

The disappearing-state trap

If your AI puts a counter, a Map, or any user data in a top-level let or const, that data evaporates the moment the service worker idles out. Use chrome.storage.local for anything that needs to survive between events. Reports from Chromium’s own extension forum describe service workers going inactive on lower-spec Windows machines specifically – sometimes mid-session, no warning. The only fix is to re-enable the extension, re-install it, or restart the browser.
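The trap and its fix, side by side – the storage key name is arbitrary:

```javascript
// background.js (service worker)

// BROKEN: this counter silently resets whenever the worker idles out.
let clickCount = 0;

// WORKS: read-modify-write through chrome.storage.local on every event,
// so the value survives the worker being torn down between clicks.
chrome.action.onClicked.addListener(async () => {
  const { clicks = 0 } = await chrome.storage.local.get('clicks');
  await chrome.storage.local.set({ clicks: clicks + 1 });
});
```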

The permissions-rejection trap

You ship to the Chrome Web Store. Two days later: rejected. Reason? You requested <all_urls> because the AI thought it was easier. Per Extension Radar’s 2025 publishing guide, this is the #1 rejection reason – if you request <all_urls> but only need access to youtube.com, you’ll be rejected. Open your manifest before submitting and audit every permission line.
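In manifest terms, the audit usually comes down to one line – the YouTube pattern below is just the example from that guide:

```
// manifest.json – the line reviewers flag, and the scoped alternative:
"host_permissions": ["<all_urls>"]                  // likely rejection
"host_permissions": ["https://www.youtube.com/*"]   // scoped to what you use
```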

The ad-blocker shape of things

Building something that intercepts network requests? The old webRequest blocking API is gone. Chrome’s MV3 migration replaced it with declarativeNetRequest – rules declared upfront, no dynamic request interception. AI models still write webRequest.onBeforeRequest code from training data. Reject it immediately.
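What the replacement looks like, for contrast: a static rule file declared in the manifest under `declarative_net_request.rule_resources` (with the `declarativeNetRequest` permission). The domain and rule id below are placeholders:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Note the shape: you describe *rules* ahead of time and Chrome applies them; your code never sees individual requests the way `webRequest.onBeforeRequest` did.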

Publishing: what it actually costs and how long it takes

The Chrome Web Store developer dashboard charges a one-time $5 registration fee (as of 2025), which covers up to 20 extensions on a single account. Google’s review process typically takes 1-3 business days (as of 2025). Keep permissions tight and write a description that actually says what the extension does, and first-try approval is realistic.

One more thing the AI won’t tell you: MV3 removes the ability for an extension to use remotely hosted code – this is a hard policy rule, not a recommendation. If your prompt asked the AI to fetch a script from a CDN, that path is closed. Bundle everything inside the extension package, or your submission will be rejected on policy grounds.

FAQ

Do I need to know JavaScript to build an extension with AI?

No – but reading skills matter. Being able to spot “that variable is null” in a stack trace saves hours. You don’t need to write code; you need to recognize when something’s wrong.

Why does my extension work locally but break for users?

Almost always one of three things: a manifest_version: 2 file that loads in your dev mode but fails on a fresh Chrome install, a service worker that depends on in-memory state that gets cleared, or a permission you have granted manually that other users haven’t. Try installing your own packaged extension on a completely different Chrome profile before you ship – it surfaces the bugs your dev environment hides because you’ve been clicking through permission prompts all week without noticing.

Should I use Claude Code, Cursor, or just the chat?

For your first extension, the chat interface is enough. Extensions are small projects – often under 10 files. Move to Cursor or Claude Code when you’re iterating past version 3 and want the model to read your whole project at once instead of you copy-pasting files back and forth.

Next step: open Claude, paste the five-constraint prompt from Step 1 above with one extension idea you actually want, and load the result unpacked. The fastest way to learn this stack is to break it once.