
Create Telegram Bots with AI Help: The Honest Reality

AI coding assistants won't build your Telegram bot for you, but they'll handle the boilerplate you hate. Here's what actually works when combining Claude or ChatGPT with BotFather.

9 min read · Intermediate

Here’s the uncomfortable truth: telling ChatGPT or Claude “build me a Telegram bot” will get you code that runs for exactly 30 seconds before breaking. Not because AI is bad at coding – it’s actually pretty good – but because Telegram bot development is a minefield of version conflicts, undocumented rate limits, and API changes that happened after most models were trained.

That doesn’t mean AI is useless. You just need to know what to ask for.

You Already Know You Want a Bot. Here’s What Goes Wrong.

You have a use case in mind – maybe a notification system, a chat moderator, or an interface for your app. You find a tutorial. It tells you to use python-telegram-bot. You paste the example into ChatGPT and ask it to customize. It spits out code. You run it. Nothing happens.

The problem? Since v20.0, python-telegram-bot is built on asyncio. Every tutorial written before 2023 uses the old synchronous API. ChatGPT’s training data is full of that old code. Claude does slightly better because its training cutoff is more recent, but it still mixes patterns.

Before you write a single line, check the library version you’re installing. If it’s v20 or higher (as of March 2026, the current version is 22.7), you need async/await everywhere. If the AI gives you updater.start_polling(), it’s wrong. You want application.run_polling().

Setting Up: BotFather, Tokens, and the Part AI Can’t Do for You

Creating the bot account is manual. No API can automate this – Telegram requires you to interact with @BotFather directly.

  1. Open Telegram and search for @BotFather. Look for the blue verification checkmark.
  2. Send /newbot and follow the prompts. You’ll pick a display name (can have spaces) and a username (must end in “bot”).
  3. BotFather will give you a token – a string that authenticates your bot on the API. Copy it immediately.

Treat that token like a password. Anyone with your token has full control of your bot. Don’t commit it to GitHub. Don’t paste it into ChatGPT’s web interface if you’re on a shared account. Use environment variables or a .env file.

AI can generate the code to load the token from an environment variable. Ask: “Show me how to load TELEGRAM_BOT_TOKEN from a .env file in Python.” It’ll give you python-dotenv and os.getenv(). That part works every time.
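That prompt's output looks roughly like the sketch below. The `get_bot_token` helper name is this sketch's own; `python-dotenv` is treated as optional so the code still runs with plain environment variables:

```python
import os

# python-dotenv is optional; fall back to plain environment variables
try:
    from dotenv import load_dotenv
    load_dotenv()  # reads a .env file in the current directory, if present
except ImportError:
    pass

def get_bot_token() -> str:
    """Read TELEGRAM_BOT_TOKEN, failing loudly if it is missing."""
    token = os.getenv("TELEGRAM_BOT_TOKEN")
    if not token:
        raise RuntimeError("TELEGRAM_BOT_TOKEN is not set")
    return token
```

Failing loudly at startup beats a silent `None` token that only errors once the bot tries to connect.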

The First Bot: What to Ask AI and What to Fix Yourself

Start with something trivial. Not because you need a trivial bot, but because you need to verify the setup works before adding complexity.

Prompt to use with ChatGPT or Claude:

“Write a Python async Telegram bot using python-telegram-bot v22+ that responds to /start with a welcome message. Use environment variables for the token. Include requirements.txt.”

The AI will give you something close. Here’s what you’ll need to fix manually:

  • Async syntax: If it generates def instead of async def for handlers, rewrite them. Handlers in v20+ must be async.
  • Application object: Look for Application.builder().token(token).build(). That’s correct. If you see Updater or Bot instantiated directly, it’s using pre-v20 patterns.
  • Handler registration: Should be application.add_handler(CommandHandler("start", start_function)). If it’s using decorators, that’s from a different library (probably aiogram or pyTelegramBotAPI).

Run the bot. Send it /start in Telegram. If it replies, you’re good. If not, check the console for errors – usually a malformed token or a missing await.

File Structure AI Gets Right

AI is actually good at boilerplate. Ask it to structure a multi-file bot with separate handlers, config, and a main entry point. It’ll give you:

my_bot/
├── bot.py              # Main application
├── handlers/
│   ├── start.py        # /start command
│   └── messages.py     # Text message handlers
├── config.py           # Load env vars
├── requirements.txt
└── .env.example

This is one of AI’s strengths – it knows conventional project layouts. Use it.

Where AI Breaks: Rate Limits, File Sizes, and Error Handling

Now for the edge cases that aren’t in the tutorials.

Rate Limit Reality

You ask ChatGPT to write a broadcast function that sends a message to 1,000 users. It gives you a loop. You run it. After 30 messages, the API responds with HTTP 429 and a retry_after field. Your bot is now frozen for 35 seconds.

The issue: Starting with Bot API 7.0 (January 2025), Telegram replaced the legacy soft guideline with a token-bucket algorithm. The old advice of “30 messages per second” is outdated. Limits are now dynamic.

AI-generated code will add a time.sleep(1) between messages. That’s not enough. When you hit 429, your bot is completely blocked for retry_after seconds – not just for that user, but for ALL users.

Fix: Add exponential backoff. Ask AI: “Add retry logic for Telegram 429 errors with exponential backoff using the retry_after value.” It’ll give you a try/except block that reads error.retry_after and sleeps. Verify the generated code actually reads the error payload – early AI attempts often hard-code the sleep duration instead.

File Upload Traps

User sends your bot a 60 MB video. AI-generated upload code fails silently. Why? Bots can send files up to 50 MB via the standard API. Anything larger requires running your own local Bot API server, which allows uploads up to 2000 MB.

Most AI-generated code ignores the 50 MB limit entirely: it never checks file size before attempting the upload. You’ll get a cryptic error hours into debugging.

Ask AI: “Add file size validation before upload. Reject files over 45 MB with an error message.” That 45 MB buffer accounts for encoding overhead.
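The validation itself is a few lines. In this sketch, the 45 MB cap and the `validate_upload` helper name are this article's suggestion, not anything the API enforces for you:

```python
import os

# 45 MB cap: headroom under the standard Bot API's 50 MB upload limit,
# to account for encoding overhead.
MAX_UPLOAD_BYTES = 45 * 1024 * 1024

def validate_upload(path: str) -> None:
    """Raise ValueError for files too large for the standard Bot API."""
    size = os.path.getsize(path)
    if size > MAX_UPLOAD_BYTES:
        raise ValueError(
            f"{path} is {size / 1_048_576:.1f} MB; files over 45 MB "
            "need a local Bot API server (up to 2000 MB)"
        )
```

Call it before every upload attempt and surface the error message to the user instead of letting the API call fail.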

Using Claude Code with Telegram (the Actually Interesting Part)

Anthropic released an official Telegram plugin for Claude Code. This is different from asking ChatGPT to write bot code – it lets you interact with Claude Code itself through a Telegram bot.

Setup requires Bun specifically – running the plugin under Node.js or Deno may cause unexpected errors. Install it with:

curl -fsSL https://bun.sh/install | bash

Then install the plugin via Claude Code:

/plugin install telegram

DM your bot on Telegram – it replies with a 6-character pairing code. Approve it in your Claude Code session. Now you can message Claude through Telegram and get code suggestions, file edits, or terminal commands executed.

This is useful when you’re away from your desk but need to tweak a bot’s behavior. You describe the change in Telegram; Claude Code edits the file and restarts the service.

| Approach | Best for | Limitation |
| --- | --- | --- |
| ChatGPT/Claude prompts | Generating initial boilerplate, fixing syntax errors | Outdated patterns, no runtime access |
| Claude Code Telegram plugin | Remote coding, on-the-go fixes | Requires Bun, pairing setup, only works with Claude Code sessions |
| GitHub Copilot in editor | Inline autocomplete, refactoring | No Telegram-specific troubleshooting, subscription required |

Hosting and Deployment: Where AI Gives Outdated Advice

Ask ChatGPT how to deploy a Telegram bot. It’ll say Heroku. Heroku ended its free tier in November 2022. AI models trained before that still recommend it.

Current options as of March 2026:

  • Railway.app: Free tier with 500 hours/month, $5/month after. Good for small bots.
  • Fly.io: Free tier covers most hobby bots. More complex than Railway but scales better.
  • VPS (DigitalOcean, Linode): $5-6/month. Full control, manual setup. Use systemd or Docker.
  • AWS Lambda: Free tier covers 1M requests/month. Requires webhook setup (not polling). AI will give you outdated Lambda runtime code – double-check Python version support.

AI can generate Dockerfiles and deployment configs. The Docker setup it produces usually works. The AWS Lambda handler it gives you probably needs the runtime version updated to Python 3.12 or 3.13.

What AI Actually Saves You Time On

After fighting with rate limits and async syntax, here’s what AI tools genuinely accelerate:

  • Inline keyboards and button layouts: Describing the structure in plain language gets you the InlineKeyboardMarkup code instantly. This is tedious to write manually.
  • Webhook setup: Configuring Flask/FastAPI to receive Telegram updates is boilerplate. AI nails it.
  • Regex and text parsing: “Extract URLs from messages” or “validate phone numbers” prompts give you working regex 90% of the time.
  • Database schema: “Design a SQLite schema to track user subscriptions with expiration dates” produces a usable starting point.
  • Error messages: Paste a Telegram API error; ask for an explanation. Often faster than searching docs.

Where it fails: anything involving recent API changes, undocumented behavior, or library version conflicts. That’s still human debugging territory.

Realistic Limits: What You Still Do Manually

Telegram’s API has quirks AI doesn’t know about because they’re not in the official docs – they’re in GitHub issues and Stack Overflow threads from the last six months.

Example: privacy mode in groups. By default, bots with privacy mode enabled only receive messages explicitly meant for them. If your bot doesn’t respond in a group chat, it’s probably privacy mode. You disable it via @BotFather with /setprivacy.

AI won’t suggest this. You’ll spend an hour debugging before finding the answer in a Reddit comment.

Another: Telegram’s Bot API exposes neither message history nor search – bots only see messages as they arrive. If you ask AI to “fetch the last 10 messages from a chat,” it’ll generate code that doesn’t work. The API doesn’t support it.

FAQ

Can I use ChatGPT to write a Telegram bot in Node.js instead of Python?

Yes. The node-telegram-bot-api library is mature and well-documented. AI handles it fine, but watch for the same async/await issues – newer versions are Promise-based. Ask specifically for “async Node.js Telegram bot” to avoid callback hell in generated code. JavaScript bots deploy easily to Vercel or Netlify serverless functions.

What happens if my bot token leaks?

Anyone with the token controls your bot – they can send messages as it, access user data it stores, or hijack its behavior entirely. Revoke the token immediately via @BotFather (/mybots → select bot → API Token → Revoke), generate a new one, and update your code. If the bot was public, tell your users about the incident. There’s no way to tell who used the old token before it was revoked.

My bot works locally but crashes on the server with ‘RuntimeError: Event loop is closed’. Why?

You’re mixing sync and async code, or the hosting platform’s Python environment doesn’t play well with asyncio. First, verify all handlers are async def and you’re using await on API calls. If that’s correct, the issue is often the event loop being closed prematurely – wrap your application.run_polling() in a try/finally block and explicitly close resources. On AWS Lambda, use application.run_webhook() instead of polling; Lambda’s execution model conflicts with long-running event loops. AI rarely catches this because it’s deployment-environment-specific.