
Build AI Automations with n8n Free: Tested Setup & Limits

Free n8n tier gives 2,500 executions – but execution counting works in unexpected ways. Here's what works, what breaks, and how to avoid hitting limits in production.

10 min read · Intermediate

Can you actually build production AI automations on n8n’s free tier, or does “2,500 executions per month” evaporate the moment you connect OpenAI to a real workflow?

Execution counting: it breaks most cost estimates. Sub-workflows count separately. A polling trigger checking an API every 15 minutes? 2,880 executions monthly even when it finds nothing – your “free” tier, exhausted before the month ends.

Calendar-booking tutorials won’t mention this. n8n’s free offerings (Cloud trial vs self-hosted Community Edition) optimize for opposite problems. Pick wrong and you’re either paying money or debugging Docker for three weekends.

Why Execution Counting Determines Everything

n8n charges only for full workflow executions, not individual steps. A 50-node workflow costs the same as a 3-node workflow – one execution. That’s the pitch.

What counts as “one workflow” though? Sub-workflows count separately – your main workflow calls an Execute Workflow node five times, you just burned six executions (1 main + 5 sub). Modular design, the pattern everyone recommends for maintainability, directly conflicts with execution limits.

Polling triggers: worse. A Schedule node firing every 15 minutes burns one execution per check, whether it processes data or not. 96 runs daily × 30 days = 2,880 executions – your entire free tier, spent on a workflow that mostly returns “no new items.”

Pro tip: Use webhook triggers instead of polling. Webhooks fire only when data exists, consuming zero executions during idle periods. Not every API supports webhooks, but when they do, your execution count drops by 90%.
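The arithmetic behind both traps, as a quick sketch – the polling interval, run counts, and quota here are stand-ins for your own workflow's numbers:

```python
# Back-of-envelope execution math. Replace the inputs with your own figures.

def polling_executions_per_month(interval_minutes: int, days: int = 30) -> int:
    """Executions consumed by a Schedule trigger, data or not."""
    runs_per_day = (24 * 60) // interval_minutes
    return runs_per_day * days

def fanout_executions(parent_runs: int, sub_calls_per_run: int) -> int:
    """One parent execution plus one execution per Execute Workflow call."""
    return parent_runs * (1 + sub_calls_per_run)

FREE_TIER = 2500

polling = polling_executions_per_month(15)                        # 2880
fanout = fanout_executions(parent_runs=50, sub_calls_per_run=5)   # 300

print(f"15-min polling: {polling}/month ({polling / FREE_TIER:.0%} of quota)")
print(f"50 parent runs x 5 sub-workflows: {fanout} executions")
```

A 15-minute poll alone overshoots the 2,500 quota; five sub-workflow calls per run multiply every parent execution by six.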

The self-hosted Community Edition sidesteps this entirely – unlimited executions, no per-workflow fees, access to all core features. “Free” infrastructure still costs though: VPS hosting, maintenance time, and the operational weight of running your own automation engine.

Cloud Trial vs Self-Hosting: The Real Tradeoff

n8n Cloud Starter costs €20-24/month (annual vs monthly billing) and includes 2,500 workflow executions. Free trial: 14 days to test before payment kicks in. Starter includes unlimited steps per execution, so a 100-step AI workflow costs the same as a 2-step Slack ping.

| Option | Monthly Cost | Execution Limit | Hidden Costs |
| --- | --- | --- | --- |
| Cloud Starter | €20-24 | 2,500 | None – managed infra |
| Self-Hosted (VPS) | $5-20 | Unlimited | Maintenance, backups, monitoring |
| Self-Hosted (Local) | $0 | Unlimited | Uptime risk; no public webhooks without a tunnel |

Self-hosting sounds cheaper. Calculate the time cost though. Running via npm, n8n can crash with ‘Allocation failed – JavaScript heap out of memory’ and needs a manual restart. Docker auto-restarts the container, but you’re still responsible for updates, security patches, and database backups.

The “free” part: the license. The operational burden: yours.

For workflows processing fewer than 2,500 runs monthly (roughly 83 per day), Cloud makes sense. High-frequency polling, multi-tenant use cases, or teams that already run infrastructure? Self-hosting pays off fast.

What About n8n’s Free OpenAI Credits?

n8n Cloud trial includes ‘n8n free OpenAI API credits’ as a credential option for testing. Convenient – until the undocumented quota runs out mid-workflow. No error message, just silent failures where the AI node returns empty strings.

Community forums report this constantly. Bring your own OpenAI API key from day one.

Build an AI Workflow That Handles Errors

Most tutorials show the happy path: webhook fires, AI Agent processes input, workflow ends. Real automations fail. APIs time out, LLMs return malformed JSON, rate limits hit without warning.

Here’s a workflow structure that survives production.

Step 1: Set Up the Trigger

Start by adding a Chat Trigger node, which listens for messages and can hook up to n8n’s public chat interface or be embedded in another site. Non-chat workflows? Use a Webhook node instead – exposes a URL that external services POST data to.

// Webhook trigger receives JSON payload
{
 "user_query": "Summarize this article",
 "article_url": "https://example.com/post"
}

Webhook triggers don’t poll. They consume exactly one execution per incoming request, making them execution-efficient.
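Before the payload reaches an AI node, a cheap validation step catches malformed requests early. A sketch in plain Python (not n8n's Code-node environment), assuming the field names from the example payload above:

```python
# Minimal payload validation for the webhook example above.
# Field names ("user_query", "article_url") mirror the sample payload;
# adjust to your own schema.
from urllib.parse import urlparse

def validate_payload(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload is usable."""
    errors = []
    query = payload.get("user_query", "")
    if not isinstance(query, str) or not query.strip():
        errors.append("user_query must be a non-empty string")
    url = payload.get("article_url", "")
    parsed = urlparse(url if isinstance(url, str) else "")
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        errors.append("article_url must be an http(s) URL")
    return errors

print(validate_payload({"user_query": "Summarize this article",
                        "article_url": "https://example.com/post"}))  # []
```

Rejecting garbage before the LLM call saves tokens and avoids paying for an execution that was doomed from the start.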

Step 2: Add the AI Agent Node

The AI Agent node is the core of adding AI to workflows and requires a chat model to process prompts. Click the + button under Chat Model and select your LLM provider. n8n supports OpenAI, DeepSeek, Google Gemini, Groq, Azure, and others.

For OpenAI: select GPT-4o for complex reasoning or GPT-4o-mini for speed and cost savings. OpenAI charges per token, with different rates for input and output – GPT-4o runs several times the per-token price of GPT-4o-mini.

System prompt: be specific. “Be helpful” produces vague outputs. “Extract exactly 3 bullet points, max 15 words each” – that reduces hallucination and token waste.

Step 3: Handle Rate Limits and Failures

When a node hits an API rate limit, n8n displays the error in the node output panel including the service’s error message. Without retry logic, your workflow dies.

  1. Enable Retry On Fail in node settings (under “Settings” tab). Set “Max Tries” to 3 and “Wait Between Tries” to 2000ms.
  2. High-volume workflows? Use the Loop Over Items node to batch requests. The HTTP Request node has built-in Batching options to send multiple requests with pauses between them.
  3. Add an Error Trigger node to catch failures. Route errors to a Slack notification or a Google Sheet log – silent failures are worse than loud ones.

Example: AI Agent calls OpenAI 100 times in a burst? You’ll hit rate limits. Split that into 10 batches of 10, with 1-second pauses. Slower, but it completes.
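The batch-with-pauses pattern above, sketched as generic client-side Python – `call_llm` is a placeholder for whatever API call your workflow makes, and the timings mirror the retry settings from step 1:

```python
# Sketch of batching plus retry: send requests in small batches with pauses
# between batches, retrying each failed call. `call_llm` is a stand-in for
# the actual API call.
import time

def run_in_batches(items, call_llm, batch_size=10, pause_s=1.0,
                   max_tries=3, retry_wait_s=2.0):
    """Process items in batches; retry each call up to max_tries."""
    results = []
    for start in range(0, len(items), batch_size):
        for item in items[start:start + batch_size]:
            for attempt in range(1, max_tries + 1):
                try:
                    results.append(call_llm(item))
                    break
                except Exception:
                    if attempt == max_tries:
                        raise  # surface the failure; an Error Trigger would catch it
                    time.sleep(retry_wait_s)  # mirrors "Wait Between Tries"
        if start + batch_size < len(items):
            time.sleep(pause_s)  # pause between batches to stay under rate limits
    return results
```

In n8n itself this is the Loop Over Items node plus HTTP Request batching; the sketch just shows the control flow those settings implement.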

Step 4: Test with Real Data Volume

Split data into smaller chunks – instead of fetching 10,000 rows per execution, process 200 rows per execution. Avoid manual executions when testing with large datasets: manual runs hold everything in the editor and consume excessive memory. Use scheduled triggers for volume testing instead.

The workflow editor shows input/output for each node. Click a node after execution to inspect the JSON. AI Agent returns 50KB of data and you’re looping over 100 items? You just moved 5MB through memory. Starter plan limits: 320MiB RAM per execution. You’ll hit that.
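The 200-rows-per-execution rule above, as a minimal sketch:

```python
# Bound how much data one execution touches by slicing the input up front.
def chunked(rows, size=200):
    """Yield fixed-size slices so each run handles a bounded amount of data."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# 500 rows become three runs of at most 200 rows each.
sizes = [len(batch) for batch in chunked(list(range(500)))]
print(sizes)  # [200, 200, 100]
```

Each chunk becomes one execution's workload, keeping memory well under the per-execution limit.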

The Execution Traps Nobody Warns You About

1. Sub-workflow loops. Execute Workflow node sits inside a Loop Over Items node processing 50 items? You just consumed 50 executions for the sub-workflow alone. Single parent execution spawned 50 children. Your 2,500-execution quota: gone in 50 parent runs.

2. Polling without data filtering. Schedule node checking an RSS feed every 10 minutes doesn’t know if new posts exist until after it runs. 4,320 executions monthly (6 per hour × 24 × 30). You can add timestamp comparison inside the workflow, but the execution already fired.

3. Code node memory leaks. n8n documentation explicitly recommends avoiding Code nodes where possible, but AI workflows need JSON manipulation that native nodes can’t handle. Community reports show Code nodes processing large arrays can balloon memory usage, triggering the “heap out of memory” crash. Fix: chunk data before it enters the Code node, or rewrite the logic using native nodes (Set, Merge, Split In Batches).
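The timestamp comparison mentioned in trap #2 might look like this – the execution still fires, but downstream nodes can bail out when nothing is new. The feed shape and cutoff are made up for illustration; `last_seen` would come from workflow static data or an external store:

```python
# Filter a polled feed down to items newer than the last processed timestamp.
from datetime import datetime, timezone

def new_items_only(items, last_seen: datetime):
    """Keep only items published after the last processed timestamp."""
    return [i for i in items
            if datetime.fromisoformat(i["published"]) > last_seen]

feed = [
    {"title": "old post", "published": "2025-01-01T08:00:00+00:00"},
    {"title": "new post", "published": "2025-01-02T09:30:00+00:00"},
]
cutoff = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
print([i["title"] for i in new_items_only(feed, cutoff)])  # ['new post']
```

This doesn't refund the execution, but it stops stale items from triggering paid LLM calls further down the workflow.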

Self-Hosting: What It Actually Takes

“Free and open-source” – the operational reality involves more than docker run.

Docker command to run n8n: docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n. Works for local testing. Production needs:

  • Persistent storage. The n8n_data volume holds workflows, credentials, execution history. Lose it – lose everything. Set up automated backups.
  • HTTPS + domain. Webhook triggers need public URLs. Use Caddy or nginx with Let’s Encrypt. Cloudflare Tunnel works if you don’t want to expose ports.
  • Database. Default SQLite works for single-user setups. Multi-user or high-concurrency? PostgreSQL. Configure connection pooling or face lock timeouts.
  • Monitoring. n8n doesn’t alert you when workflows break. External monitoring (Uptime Kuma, Healthchecks.io) is mandatory.

Estimated maintenance time: 2-4 hours monthly for updates, backups, and incident response. Workflow fails at 2 AM because the VPS ran out of disk space? That’s your problem.

Worth it? Execution volumes above 10,000/month – absolutely. Side project running 500 workflows monthly? Cloud’s €20 buys you back those 4 hours.
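A rough break-even sketch for that call, assuming you price your maintenance time at an hourly rate – every input here is a placeholder to swap for your own:

```python
# Cloud subscription vs self-hosting, valuing maintenance time in euros.
# All defaults are assumptions, not n8n's actual pricing beyond the €20 Starter.
def monthly_cost_cloud(subscription_eur: float = 20.0) -> float:
    return subscription_eur

def monthly_cost_selfhost(vps_eur: float = 10.0,
                          maintenance_hours: float = 3.0,
                          hourly_rate_eur: float = 50.0) -> float:
    return vps_eur + maintenance_hours * hourly_rate_eur

print(f"Cloud: EUR {monthly_cost_cloud():.0f} / "
      f"Self-host: EUR {monthly_cost_selfhost():.0f}")
```

With these assumptions self-hosting "costs" €160/month once time is priced in – it only wins when execution volume forces Cloud onto a higher tier, or when your maintenance hours are effectively free.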

Actually – before we get to tool limitations, here’s the uncomfortable truth about free-tier AI automation: you’re always trading one constraint for another. Execution limits vs maintenance burden vs API costs. No option is truly “free.”

When n8n Isn’t the Right Tool

n8n works best for complex, multi-step automations where you need custom logic. It struggles with:

Non-technical teams. n8n presents the most technical interface of the major platforms, with a node-based approach similar to development tools like Node-RED – exceptional flexibility, but it requires a deeper understanding of automation concepts. If “JSON” and “API authentication” are foreign terms, Zapier’s linear interface is faster.

Niche SaaS integrations. Zapier has 8,000+ integrations; n8n has around 1,500 including community nodes. Your workflow depends on a specific HR tool or obscure CRM? Check n8n’s integration list first. Building custom API connections works, but adds dev time.

Guaranteed uptime. Zapier handles infrastructure, monitoring, and error recovery – for business-critical automations where downtime costs money, the managed service has value. Self-hosted n8n puts availability risk on you.

Start Here

Testing AI automation ideas? Use n8n Cloud’s 14-day trial. Build 3-4 workflows, measure actual execution counts, then decide if 2,500/month fits or if you need self-hosting.

Processing high volumes or want data control? Self-host on a $5-10/month VPS (Hetzner, DigitalOcean, Linode). Follow n8n’s official Docker setup guide, configure HTTPS, and set calendar reminders for monthly backups.

Either way: start with webhook triggers, not polling. Add error handling before you add features. And track execution counts weekly – the quota sneaks up faster than you expect.

Frequently Asked Questions

Does n8n’s free tier actually support AI workflows or is it just for testing?

2,500 executions supports production if frequency is low. Customer support bot answering 80 queries daily? Fits (80 × 30 = 2,400). Sentiment analysis workflow polling Twitter every 5 minutes? Burns through the quota in 9 days.

How do I know if I should self-host n8n or use the Cloud version?

Calculate monthly executions first. Under 2,500? Cloud is simpler. Between 2,500-10,000? Cloud Pro (€50/month) vs self-hosting ($10-20 VPS) comes down to whether you value time over money – self-hosting saves cash but costs 2-4 hours monthly in maintenance. Above 10,000 executions, self-hosting wins financially unless you need enterprise features (SSO, audit logs, SLA). Also: if your workflows handle sensitive data or you’re in a regulated industry, self-hosting gives you control over data residency that a managed cloud can’t match. One thing nobody mentions though – self-hosting maintenance spikes during the first month (SSL setup, backup testing, monitoring config) and drops to ~2 hours after that. Cloud looks expensive until you calculate what 4 hours of your time costs.

Can I use n8n with OpenAI without paying for both n8n AND OpenAI API access?

You’re paying for one or the other. Self-hosted n8n: free (unlimited executions), but you pay OpenAI directly for API usage – typically $0.002-0.03 per 1K tokens depending on the model. n8n Cloud: €20-24/month for 2,500 executions, and you still pay OpenAI separately for LLM calls. The “n8n free OpenAI credits” in Cloud trial are temporary and undocumented in quota size – assume they’ll run out within days of testing. Budget for both: n8n infrastructure (Cloud subscription or VPS hosting) plus OpenAI API costs based on your expected token volume. A workflow generating 100 AI responses daily at ~1,000 tokens each costs roughly $6-90/month in OpenAI fees alone, separate from n8n.
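The quoted range checks out; a quick sketch reproducing it, using the volumes and per-1K-token rates stated above as inputs:

```python
# Monthly LLM spend: responses/day x tokens each x days, at a per-1K-token rate.
def monthly_llm_cost(responses_per_day, tokens_per_response,
                     usd_per_1k_tokens, days=30):
    tokens = responses_per_day * tokens_per_response * days
    return tokens / 1000 * usd_per_1k_tokens

low = monthly_llm_cost(100, 1000, 0.002)   # cheapest-model rate -> $6
high = monthly_llm_cost(100, 1000, 0.03)   # priciest-model rate -> $90
print(f"${low:.0f}-{high:.0f}/month in model fees alone")
```

Run your own expected volumes through this before committing to a plan – the model choice swings the bill by an order of magnitude.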