
How to Use AI to Write Business Proposals (Without the Junk)

A practical guide to using AI to write business proposals that actually win - discovery briefs, two-pass prompting, and the parts AI gets wrong.

8 min read · Beginner

Here’s something every AI-proposal tutorial skips: in 2025, Deloitte refunded the Australian government over $60,000 after delivering an AI-generated report stuffed with fabricated citations and an invented quote attributed to a federal court judge. Public reporting on the incident notes Deloitte admitted using GPT-4o without proper oversight. If a Big Four firm can ship that, your sales proposal can too.

Most guides on how to use AI to write business proposals pretend this risk doesn’t exist. They’ll show you a one-line prompt, a pretty template, and call it done. This article goes the other direction – the prompting technique that actually produces a proposal worth sending, and the three places AI quietly sabotages you if you don’t watch it.

Why the standard “prompt and pray” approach fails

Open any vendor blog and you’ll see the same demo: a single-sentence prompt (“Write a proposal for a web design project”) and a glossy output. It looks great in a screenshot. It loses deals in real life.

The reason is structural. LLMs don’t actually “know” facts – they predict the next word based on patterns learned from massive text data. When you give a model almost no context, it fills the void with the statistical average of every proposal it’s ever seen. You get something that reads like every other proposal – because it is.

Worse, the model invents specifics to sound credible. Recent 2025-2026 studies confirm that even advanced models like GPT-4o and Claude 3.7 still exhibit 15-20% hallucination rates on factual citation tasks, rising sharply (to 35-55%) on niche or recent topics. Drop a fake “case study” into a sales proposal and you’ve got the corporate version of the Schwartz case (2023) – a lawyer fined $5,000 for submitting six ChatGPT-invented court cases – except your prospect just quietly stops returning calls.

The discovery brief: the only prompt that matters

Stop asking the AI to “write a proposal.” Ask it to write your proposal. The difference is a discovery brief – a structured block of context you paste before any drafting prompt.

A solid brief has seven inputs. Bookipi’s docs recommend nailing down company name, location, type of business, core services, unique selling proposition, mission, and how customers benefit before you start. That’s the floor, not the ceiling. For a real proposal, you also want the prospect’s pain points (in their words), the scope you actually agreed to, your real rates, and any objections raised on the discovery call.

You are drafting a sales proposal. Use ONLY the facts below.
If a fact is missing, write [TBD] - never invent.

CLIENT: Northwind Logistics, 140-person 3PL in Rotterdam
PAIN: Manual SKU reconciliation eating 22 hrs/week per warehouse
QUOTED ON CALL: Pilot at 2 warehouses, 6 weeks, fixed €18,400
OBJECTION RAISED: "Last vendor over-promised on integration time"
MY DIFFERENTIATOR: SAP Business One connector already built
DESIRED TONE: Direct, no jargon, German business formality
SECTIONS NEEDED: Executive summary, scope, timeline, pricing,
 risk addressed, next step

Draft the proposal in plain prose, max 800 words.

Two things to notice. The “if a fact is missing, write [TBD]” instruction is doing heavy lifting – it’s the cheapest hallucination guardrail available. And the objection from the call gets a dedicated section, which forces the AI to address what’s actually blocking the deal instead of generic value-prop fluff.
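
If you write briefs often, it’s worth scripting the template so the guardrail never gets forgotten at 11pm. Here’s a minimal Python sketch: the field names mirror the Northwind example above and are purely illustrative, not a fixed schema. Any field you skip renders as [TBD] instead of tempting the model to invent.

# Minimal sketch: assemble the discovery brief from structured notes.
# Field names are illustrative; anything you don't supply becomes [TBD].

BRIEF_TEMPLATE = """You are drafting a sales proposal. Use ONLY the facts below.
If a fact is missing, write [TBD] - never invent.

CLIENT: {client}
PAIN: {pain}
QUOTED ON CALL: {quote}
OBJECTION RAISED: {objection}
MY DIFFERENTIATOR: {differentiator}
DESIRED TONE: {tone}
SECTIONS NEEDED: {sections}

Draft the proposal in plain prose, max 800 words."""

def build_brief(facts: dict) -> str:
    """Fill the template; missing fields render as [TBD], never as inventions."""
    fields = ["client", "pain", "quote", "objection", "differentiator", "tone", "sections"]
    safe = {f: facts.get(f, "[TBD]") for f in fields}
    return BRIEF_TEMPLATE.format(**safe)

if __name__ == "__main__":
    print(build_brief({
        "client": "Northwind Logistics, 140-person 3PL in Rotterdam",
        "pain": "Manual SKU reconciliation eating 22 hrs/week per warehouse",
        "quote": "Pilot at 2 warehouses, 6 weeks, fixed €18,400",
        # "objection" left out on purpose -> rendered as [TBD]
        "differentiator": "SAP Business One connector already built",
        "tone": "Direct, no jargon, German business formality",
        "sections": "Executive summary, scope, timeline, pricing, risk addressed, next step",
    }))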

The two-pass workflow

One prompt won’t get you there. Here’s the loop that does.

  1. Pass 1 – Draft. Run the discovery brief above. Ask for plain prose, no bullet salad. Klariti’s guide adds a useful nudge: instruct the model to be concise, avoid repetition, write in paragraph format rather than lists, and use formal business English.
  2. Pass 2 – Adversarial critique. Open a fresh chat. Paste the draft. Prompt: “You are the prospect’s procurement lead. List every claim in this proposal that isn’t backed by a source, every place the scope is ambiguous, and every sentence that sounds like marketing filler. Be ruthless.”
  3. Rewrite. Feed the critique back into the drafting chat and ask for a tightened version. Stop there – running another critique-and-rewrite round starts homogenizing the tone.

The two-pass method works because LLMs are noticeably better at criticizing text than generating it. You’re using that asymmetry on purpose.
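
If you’d rather run the loop from a script than from three browser tabs, here’s a rough sketch. It assumes the OpenAI Python SDK (openai>=1.0) with an API key in your environment, and “gpt-4o” as a stand-in model name; any chat client works the same way, since each pass is just a prompt.

# Draft -> critique -> rewrite, each as a stateless call (so the critique
# pass sees no chat history, mirroring the "fresh chat" advice above).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative; use whichever model you trust for each pass

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def draft_critique_rewrite(brief: str) -> str:
    # Pass 1 - draft strictly from the discovery brief
    draft = ask(brief)

    # Pass 2 - adversarial critique, deliberately without shared context
    critique = ask(
        "You are the prospect's procurement lead. List every claim in this "
        "proposal that isn't backed by a source, every place the scope is "
        "ambiguous, and every sentence that sounds like marketing filler. "
        "Be ruthless.\n\nPROPOSAL:\n" + draft
    )

    # Rewrite - fold the critique back in, then stop; more rounds flatten the tone
    return ask(
        "Rewrite the proposal below so every point in the critique is fixed. "
        "Keep it under 800 words. Do not add any fact that is not already in it."
        "\n\nPROPOSAL:\n" + draft + "\n\nCRITIQUE:\n" + critique
    )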

A worked example: turning meeting notes into a draft

Here’s the part competitors skip – actual messy input. Imagine these are the notes you scribbled after a discovery call:

- Client: Brava Coffee, 14 cafés in Lisbon
- Owner Marta wants loyalty app, frustrated with current punch-card
- Budget hint: "under €15k for v1, more if it works"
- Timeline: wants soft launch by Q3
- Tech: POS is Square, must integrate
- Concern: "don't want another app nobody downloads"
- My team: 2 devs + designer, 8-week build
- Comparable I shipped: similar app for Padaria Lis, 31% repeat-visit lift in 4 months

Paste that block straight into the brief template, add the “never invent” guardrail, and you’ll get a draft that mentions Square integration, addresses the adoption fear directly, and uses the Padaria Lis result as a credibility anchor. That last bit only works because you supplied the number – if you’d let the model invent a stat, you’d be one screenshot away from a fabricated case study going out the door.

Pro tip: If you ever see a percentage, client name, or testimonial in the AI’s draft that you didn’t put in the brief, delete it on sight. Don’t “verify later” – verification gets skipped at 11pm. The rule: a number goes into the proposal only if you put it into the brief first.
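
If you want a mechanical backstop for that rule, a few lines of Python can diff the figures: anything numeric in the draft that never appeared in your notes gets flagged. This is my own blunt regex heuristic, not fact-checking; it catches leaks, not lies.

# Flag figures in the AI draft that were never in your brief or call notes.
import re

NUMBER = re.compile(r"\d[\d.,]*\s*%?")

def leaked_figures(brief: str, draft: str) -> list[str]:
    """Return numbers present in the draft but absent from the brief."""
    known = {m.group().strip() for m in NUMBER.finditer(brief)}
    return [m.group().strip() for m in NUMBER.finditer(draft)
            if m.group().strip() not in known]

notes = "Comparable I shipped: Padaria Lis app, 31% repeat-visit lift in 4 months"
draft = "Clients typically see a 47% lift in repeat visits, as with Padaria Lis (31% in 4 months)."
print(leaked_figures(notes, draft))  # ['47%'] - the invented stat, delete on sight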

Three things AI gets wrong on proposals

Even with a great brief, watch for these:

What goes wrong | Why it happens | What to do
Fake case studies and statistics | Model fills credibility gaps with plausible-sounding inventions | Strip every number/name not in your brief; require [TBD] tags
Underpriced scope | Model averages “typical” market rates from training data, often years old | Lock pricing in the brief; never let AI suggest a number
Soft, hedged language on commitments | Safety training pushes models toward “may,” “could,” “approximately” | Pass 2 prompt: “flag every hedged commitment and rewrite as concrete”

The pricing one is sneakiest. The model has no idea what your rates are, so it pattern-matches to whatever was common in its training corpus – which can quietly bake in a discount you never agreed to.

What about Proposify, PandaDoc, Bookipi, and friends?

Dedicated proposal platforms have a real edge: branded templates, e-signature, analytics on which sections prospects actually read. Proposify, for example, starts at $19/month with a 14-day free trial (as of early 2026). Better Proposals’ tracking data shows clients spend 67% of their proposal reading time on the introduction and pricing sections – which is the kind of insight a raw ChatGPT session can’t give you (figure from Better Proposals’ most recent published data; check their site for updates).

That said, the writing inside those tools is still LLM output. Same hallucination risks, same need for a discovery brief, same need for the critique pass. Picking a platform doesn’t replace the workflow above – it just gives the workflow a nicer wrapper. As of early 2026, no consumer-grade proposal tool I’ve seen guarantees zero fabrication.

Data security is a separate issue worth flagging here. Free-tier consumer LLMs commonly store the prompts you give them – paste a real RFP and you may be training tomorrow’s model on your client’s confidential scope. Better Proposals explicitly warns about this and recommends modifying real names before pasting. Default to dummy names in free-tier chats; use enterprise tiers (with data-retention turned off) when you need to paste real RFP language or contract values.
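
A tiny redaction pass makes that habit easy to keep: swap real names for aliases and blur contract values before anything hits a free-tier chat. The alias table below is obviously made up; maintain your own and reverse the swaps in the final document.

# Minimal redaction before pasting notes into a free-tier chat.
import re

ALIASES = {
    "Northwind Logistics": "Client A",
    "Brava Coffee": "Client B",
    "Marta": "the owner",
}

def redact(text: str) -> str:
    for real, alias in ALIASES.items():
        text = text.replace(real, alias)
    # blur contract values too; exact figures are rarely needed for drafting
    return re.sub(r"€\s?[\d.,]+k?", "[AMOUNT]", text)

print(redact("Brava Coffee wants a loyalty app; Marta's budget is under €15k for v1"))
# -> "Client B wants a loyalty app; the owner's budget is under [AMOUNT] for v1"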

FAQ

Can AI write a full business proposal end-to-end with no editing?

No. Treat the output as a draft your future self has to defend in a meeting.

Which model should I use – ChatGPT, Claude, or Gemini?

For most sales proposals, Claude is the safest starting point – it follows long structured prompts most faithfully. ChatGPT edges ahead on the adversarial critique pass (Pass 2); it’s more willing to be brutal. Gemini is worth considering only if your source material already lives in Google Docs and you want native integration. Honestly, the brief matters more than the model. Run your last winning proposal through each as a Pass 2 critique and see which catches more weak spots – that tells you more than any published benchmark.

Is it safe to paste an RFP into ChatGPT?

Depends on the tier. Free and Plus consumer accounts may use your inputs to improve models unless you turn off data sharing in settings. Enterprise and Team plans contractually exclude that. If the RFP contains anything covered by an NDA, default to enterprise tiers or a self-hosted model – and if you’re not sure, redact client names and contract values before pasting.

Now go pull up your last lost proposal. Run it through a Pass 2 critique with the prompt above. Whatever the AI flags is probably what the prospect saw too.