Can AI actually generate a working extension? Yes – I got one running in under 5 minutes. Did it pass Chrome Web Store review on first try? No. That gap is where most tutorials stop.
The part nobody mentions: the code works locally, but the manifest.json AI generates requests three permissions you don’t need. Chrome flags it for manual review. Three weeks later, you’re still waiting.
Why AI-Built Extensions Fail Review (And How to Fix It Before Submission)
The Chrome Web Store rejects extensions for permission mismatches more than any other reason. Requesting permissions your code doesn’t actually use triggers manual review, which can stretch approval from 24 hours to 3 weeks.
I tested this with Claude, ChatGPT, and Gemini. All three generated a manifest requesting "tabs", "storage", and "activeTab" for a simple text highlighter that only needed "activeTab". The AI hedges – includes permissions for edge cases your implementation might not hit.
Before you submit, open manifest.json and cross-check every permission against what your code actually calls. You’ll need to write a justification statement for each one, and vague answers get rejected. “Needed for functionality” doesn’t cut it. “Required to access chrome.storage.local for saving user highlight preferences” does.
Pro tip: After AI generates your extension, load it in Chrome with DevTools and the Console open. Trigger every feature. If you don’t see API calls for a permission (like chrome.tabs.query for "tabs"), remove that permission from the manifest. This alone cuts your review time by 80%.
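You can automate part of that cross-check. Here’s a rough sketch of a Node helper – entirely hypothetical, not an official tool – that maps a few permissions to the chrome.* namespace they imply and flags manifest entries your source never touches. The mapping is a simplification: "activeTab", for instance, has no namespace of its own, so the script leaves it alone.

```javascript
// Hypothetical audit helper: flag manifest permissions whose
// corresponding chrome.* namespace never appears in the source.
// Simplified mapping – "activeTab" has no namespace, so it's skipped.
const PERMISSION_TO_NAMESPACE = {
  tabs: "chrome.tabs.",
  storage: "chrome.storage.",
  alarms: "chrome.alarms.",
  notifications: "chrome.notifications.",
};

function unusedPermissions(manifest, sourceCode) {
  return (manifest.permissions || []).filter((perm) => {
    const ns = PERMISSION_TO_NAMESPACE[perm];
    if (!ns) return false; // unmapped or namespace-less: don't flag
    return !sourceCode.includes(ns);
  });
}

// Example: a highlighter that only ever calls chrome.storage.local
const manifest = { permissions: ["tabs", "storage", "activeTab"] };
const source = `chrome.storage.local.set({ color: "yellow" });`;
console.log(unusedPermissions(manifest, source)); // → [ 'tabs' ]
```

A real version would read your actual files with fs and handle aliased calls, but even this string check catches the over-requesting pattern described above.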
Which AI Actually Works: Testing Claude vs. ChatGPT vs. Gemini
I built the same extension – a YouTube comment sentiment analyzer – three times, once with each major AI.
Claude consistently produced better backend logic and API integration code. Asked it to connect to the OpenAI API for sentiment analysis? Got proper error handling and rate limiting. ChatGPT’s version worked but didn’t handle API failures. Gemini’s backend code failed to implement async/await correctly, causing the extension to freeze Chrome tabs.
UI? Gemini generated cleaner, more modern popup HTML and CSS than Claude. Claude’s UI was functional but looked dated. ChatGPT landed somewhere in the middle – decent UI, decent backend, exceptional at neither.
If your extension is logic-heavy (API calls, data processing, complex state management), start with Claude. UI-focused (popup design, content script styling)? Let Gemini draft the frontend, then have Claude handle the JavaScript.
How to Prompt AI for Extensions That Don’t Break
- Describe the functionality precisely. “Build a Chrome extension that highlights text on any webpage and saves highlights to local storage” beats “make a highlighter extension.” The AI needs boundaries.
- Specify Manifest V3. Service workers replaced background pages in Manifest V3, but many AI models default to V2 patterns. Say “Use Manifest V3 with a service worker” in your first prompt.
- Request minimal permissions upfront. Tell the AI “only request permissions that are absolutely necessary.” Won’t eliminate over-requesting entirely, but reduces it.
- Ask for inline comments so you can actually understand what breaks when something fails.
After the AI generates the code, load it unpacked in Chrome: go to chrome://extensions, enable Developer Mode, click Load Unpacked. Test every feature before you touch the manifest.
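Putting those prompting rules into practice, a minimal Manifest V3 skeleton for the highlighter example might look like this – file names are placeholders, so adjust them to your project:

```json
{
  "manifest_version": 3,
  "name": "Text Highlighter",
  "version": "1.0",
  "description": "Highlights selected text and saves highlights locally.",
  "permissions": ["activeTab", "scripting", "storage"],
  "background": { "service_worker": "background.js" },
  "action": { "default_popup": "popup.html" }
}
```

The "activeTab" plus "scripting" pairing lets you inject the highlighter only when the user clicks the toolbar icon – the minimal-permission route, versus a content script injected on every page.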
The Manifest V3 Traps AI Doesn’t Understand
Manifest V3 changed how extensions run at the architecture level. The Declarative Net Request API replaced blocking webRequest listeners to improve performance, and extensions can no longer execute remote code – all logic must be self-contained in the package.
AI tools trained on pre-2023 documentation still suggest V2 approaches. I’ve seen Claude recommend chrome.webRequest.onBeforeRequest for blocking ads, which no longer works in V3. You’ll load the extension, click the icon, and nothing happens. No error. Just silence.
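For contrast, the V3 replacement for a blocking webRequest listener is a static rule set. A minimal declarativeNetRequest rule file might look like the sketch below – rules.json is an assumed filename, referenced from the manifest’s declarative_net_request.rule_resources field, and the manifest also needs the "declarativeNetRequest" permission:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Chrome evaluates these rules itself, so no extension code runs per request – that’s the performance win V3 was after, and it’s exactly the shape AI tools trained on V2 docs fail to generate.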
Three fields to check in your AI-generated manifest:
"manifest_version": 3 – Obvious, but I’ve debugged extensions where the AI set this to 2.
"background": {"service_worker": "background.js"} – Not "scripts", not "page". Service worker only.
"host_permissions": [...] – In V3, host permissions live in their own field, not lumped into "permissions".
If the AI generates a "background": {"scripts": ["background.js"]} block, you’re looking at V2 code. Service workers don’t persist – they wake on events and go dormant when idle, so you can’t rely on global variables staying in memory.
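The practical consequence: persist state through chrome.storage instead of globals. Here’s a sketch of the pattern – the tiny in-memory mock at the top exists only so the snippet runs outside Chrome; in a real service worker you’d delete it and use the actual chrome.storage.local:

```javascript
// Mock of chrome.storage.local so this runs outside Chrome.
// In a real service worker, delete this – Chrome provides `chrome`.
const chrome = {
  storage: {
    local: {
      _data: {},
      async get(key) { return { [key]: this._data[key] }; },
      async set(obj) { Object.assign(this._data, obj); },
    },
  },
};

// Anti-pattern: `let clickCount = 0;` at top level. It resets every
// time the service worker goes dormant and wakes back up.

// V3 pattern: read state on every event, write it back afterward.
async function handleClick() {
  const { clickCount = 0 } = await chrome.storage.local.get("clickCount");
  const next = clickCount + 1;
  await chrome.storage.local.set({ clickCount: next });
  return next;
}
```

Wire handleClick to an event like chrome.action.onClicked and the count survives worker restarts, because every invocation rebuilds its state from storage.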
Chrome’s Built-In AI APIs: The Feature AI Builders Forget
Chrome now ships with built-in AI APIs – Prompt API, Summarizer API, Writer API – using Gemini Nano on-device. Your extension can run AI locally without sending data to OpenAI or Anthropic.
Privacy. Speed. Cost. If you’re building a text summarizer, you don’t need an external API key. The Summarizer API can generate summaries in varied lengths and formats, and it runs entirely in the browser.
The catch: if the API is in an origin trial, you must register your extension for the origin trial. Most AI code generators don’t know this exists, so they won’t include the registration step. The first time a user interacts with these APIs, the model must be downloaded to the browser – takes time and storage. Call availability() to check readiness before using the API.
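A readiness check might look like the sketch below. The global Summarizer object and its availability()/create() shape follow Chrome’s documentation at the time of writing, but these APIs have shifted between Chrome releases – treat the names as assumptions and verify against the current docs. The feature-detect guard means the function degrades gracefully anywhere the API is absent:

```javascript
// Hedged sketch: check on-device Summarizer readiness before use.
// The `Summarizer` global and method names are assumptions based on
// Chrome's docs – confirm against current documentation before shipping.
async function summarizeLocally(text) {
  if (typeof Summarizer === "undefined") {
    return { ok: false, reason: "summarizer-api-unavailable" };
  }
  const status = await Summarizer.availability();
  if (status === "unavailable") {
    return { ok: false, reason: "unsupported-device" };
  }
  // "downloadable"/"downloading": create() triggers or waits on the
  // on-device model download – time and disk space on first run.
  const summarizer = await Summarizer.create({ type: "tldr", length: "short" });
  return { ok: true, summary: await summarizer.summarize(text) };
}
```

The fallback branch is the point: your extension should do something sensible (or hand off to an external API) on devices where the on-device model never becomes available.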
If your extension is a summarizer, translator, or writing assistant, check Chrome’s AI API documentation before building with external services. The on-device option might be faster and simpler.
But there’s a deeper question here: when does the convenience of AI code generation actually slow you down? When you spend more time debugging hallucinated API methods than reading the official docs would have taken. I’ve been there.
| AI Tool | Best For | Weaknesses | Cost |
|---|---|---|---|
| Claude (Pro) | Backend logic, API integration, complex state | UI design is functional but dated | $20/month |
| ChatGPT (4) | Balanced code, decent at both UI and logic | Doesn’t excel at either; safe but bland | $20/month |
| Gemini (free/Pro) | Frontend UI, popup design, CSS styling | Backend async handling often breaks | Free / varies |
| Chrome AI APIs | On-device summarization, translation, writing | Requires origin trial registration; limited language support | Free (built into Chrome) |
Common Pitfalls: What Breaks After “It Works Locally”
Your extension runs perfectly on your machine. You submit to the Chrome Web Store. Rejected. Four killers I’ve hit repeatedly:
1. Privacy policy missing. If your extension shares user input with a server (even for AI processing), you must link a privacy policy in the manifest. AI doesn’t generate privacy policies. Write one, host it somewhere public, then add the URL to the Developer Dashboard.
2. Icons in wrong sizes. Chrome Web Store requires 16×16, 48×48, and 128×128 PNG icons. AI often generates just one size or uses JPEG. Chrome auto-rejects. Create all three sizes before submission.
3. Obfuscated or minified code without source maps. Minification is allowed but makes review harder; obfuscation is banned. Using a build tool that minifies? Include the original source or a source map, or expect delays.
4. Single-purpose violation. Your extension must have a single, narrow, easy-to-understand purpose. If the AI built a “productivity tool” that blocks ads AND tracks time AND summarizes articles, Chrome will reject it. One extension, one job.
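The icon requirement in point 2 maps directly to a manifest field – the paths here are placeholders for wherever you keep your assets:

```json
{
  "icons": {
    "16": "icons/icon16.png",
    "48": "icons/icon48.png",
    "128": "icons/icon128.png"
  }
}
```

All three must be PNG; the 128×128 version also doubles as your store listing icon, so make it the one you polish.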
Performance Reality: Does AI Code Actually Ship?
AI gets you 70-80% there. I’ve shipped two extensions built with AI. One took 4 hours from prompt to Chrome Web Store approval. The other took 3 days – had to rewrite the content script because Gemini’s version caused memory leaks on sites with infinite scroll.
The remaining 20-30%: debugging permissions, fixing race conditions in service workers, writing the metadata (description, screenshots, privacy policy) that the store actually reviews.
Most extensions are reviewed within three days, assuming you didn’t trigger manual review. The $5 developer fee is real, one-time, non-refundable. Once you pay it, you can publish unlimited extensions.
When NOT to Use AI for Chrome Extensions
AI is fast. It’s not universal. Skip it for:
Deep Chrome API integration. Extensions that manipulate DevTools, modify network requests with declarativeNetRequest rules, or interact with the omnibox require precision. AI hallucinates API methods that don’t exist. You’ll spend more time debugging than if you’d read the Chrome Extensions API reference first.
Handling sensitive data. AI-generated code rarely implements proper encryption, token handling, or secure storage. Building a password manager or payment tool? Write it by hand or hire a developer who understands chrome.storage.local vs. chrome.storage.sync and why that matters.
Enterprise deployment. Managed Chrome environments give admins allowlist and blocklist controls, and enterprise extensions often need custom policies. AI doesn’t understand the ExtensionSettings JSON schema. Build this manually.
If your extension is straightforward – highlight text, change page styles, inject a sidebar, scrape visible content – AI will save you hours. Complex? AI gives you a skeleton. You’re doing the surgery.
After your extension is live, don’t update the listing description repeatedly. Multiple rapid edits to the listing can delay updates. Make your metadata changes in one batch, then submit.
Frequently Asked Questions
Can I really build a Chrome extension without knowing JavaScript?
Yes. For simple extensions – text manipulation, UI overlays, basic storage. You won’t write JavaScript, but you’ll need to read it well enough to verify the AI didn’t request unnecessary permissions or use deprecated APIs.
Why did my extension get rejected even though it works locally?
Chrome Web Store reviews policy compliance, not just functionality. Most common: (1) Privacy policy missing when your extension sends data to external servers. Example: I built a translator that hit Google’s API. Worked locally. Got rejected – no privacy policy linked. Added a one-page policy, resubmitted, approved in 18 hours. (2) Permissions requested but not used in the code. (3) Vague permission justifications. (4) Icons in wrong sizes. (5) Extension does multiple unrelated things, violating the single-purpose requirement. Check the rejection email for specifics, fix those exact issues, resubmit. Don’t change anything else or you’ll restart the review clock.
Should I use Claude, ChatGPT, or Gemini for extension development?
Depends on what your extension does. Claude Pro ($20/month) – best backend code. API integration, error handling, complex logic. Gemini (free tier works) – cleaner, more modern UI and CSS. Struggles with async JavaScript and service workers. ChatGPT: middle ground. Decent at everything, exceptional at nothing. My workflow: Gemini drafts the popup HTML/CSS, Claude writes the background service worker and content scripts. Using one tool? Claude has the fewest failure modes for Manifest V3 code. But test everything. AI makes confident mistakes. Next step: pick one simple extension idea, generate the code with Claude, strip out unused permissions, test it locally for 10 minutes, then submit. The fastest way to learn the real friction points is to ship something small.