There are two ways beginners try to control ChatGPT’s behavior. The first is what most people do: type instructions like “act as a senior copywriter, keep replies short” into the chat window every time. The second is to save those instructions once, in the right place, and forget about them. The second approach wins – and not by a small margin.
Why? Because every new chat starts as a blank slate. Re-typing your preferences burns tokens, introduces drift between sessions, and trains you to do work the tool was built to skip. ChatGPT system prompts exist exactly so you don’t have to repeat yourself. This guide is the beginner version – practical, no 85-prompt copy-paste list at the end.
The Scenario: Why You’re Even Reading This
You’ve used ChatGPT for a few weeks. It forgets context between chats, defaults to wordy replies you didn’t ask for, and keeps explaining what an LLM is when you already know. You’ve heard the term system prompt thrown around but the docs talk about API roles and the tutorials all say “you are a helpful assistant” without telling you where you actually put that.
Short version: a system prompt is a set of instructions that sits above your normal messages – invisible, persistent, and weighted more heavily by the model. OpenAI introduced the Custom Instructions feature in July 2023, specifically because users kept complaining about the friction of starting each conversation from scratch. If you’ve ever wished ChatGPT just knew you write Python and hate emojis, you want a system prompt.
The Layered Prompt Chain (the Part Most Tutorials Skip)
Before writing anything, understand what you’re plugging into. Your message isn’t the only thing the model sees. There’s a stack, and it runs in this order (as of mid-2025, per community analysis of ChatGPT’s architecture):
- OpenAI’s hidden pre-prompt – the baseline “you are ChatGPT…” instructions OpenAI applies globally.
- Custom GPT instructions (only if you’re chatting with a Custom GPT).
- Your Custom Instructions – the personal system prompt you set once.
- Your actual message.
In the ChatGPT UI, your instructions land at layer 3. That matters: anything higher in the chain can override yours. If a Custom GPT says “always answer in JSON” and your Custom Instructions say “use plain English,” expect a mess. When using the API directly, you own the whole stack – more control, but also more responsibility for defining the system prompt from scratch.
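The layering above can be sketched as a plain ordered list. This is a hypothetical illustration of the precedence order, not OpenAI’s actual internal prompts – the layer names and contents here are made up for the example:

```python
# Hypothetical sketch of the four-layer stack described above; layer names
# and prompt texts are illustrative, not OpenAI's real internals.
LAYERS = [
    ("pre_prompt", "You are ChatGPT..."),           # OpenAI's hidden baseline
    ("custom_gpt", "Always answer in JSON."),       # present only in a Custom GPT
    ("custom_instructions", "Use plain English."),  # your layer 3
    ("user_message", "Summarize this article."),
]

def effective_prompt(layers):
    """Join the layers top-down; earlier entries take precedence over later ones."""
    return "\n".join(f"[{name}] {text}" for name, text in layers)

stack = effective_prompt(LAYERS)
print(stack.splitlines()[0])  # the hidden pre-prompt always sits on top
```

The point of the sketch is the ordering: your Custom Instructions enter the stack below a Custom GPT’s instructions, which is exactly why the JSON-versus-plain-English conflict resolves against you.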
Where ChatGPT System Prompts Actually Live
Three real places. Different scope, different character budgets, different management overhead (specs as of mid-2025):
| Location | Scope | Char limit | Best for |
|---|---|---|---|
| Custom Instructions | All chats, all models | 1,500 per field × 2 | Personal defaults |
| Custom GPT instructions | One specific GPT | ~8,000 | Specialized tools |
| API system message | One API call | Model context window | Developers |
Custom Instructions is the right answer for most beginners. Free tier included – available on all plans across Web, Desktop, iOS, and Android, applied immediately to all chats (per OpenAI’s Help Center). To find them: Settings → Personalization → Custom Instructions, then toggle Enable customization on.
Two fields appear. The first asks what ChatGPT should know about you – your role, what you’re working on, how you think. The second asks how it should respond: tone, length, format. Each field caps at 1,500 characters. That’s roughly 250 words. Treat it as a constraint you have to work within, not a target to fill.
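If you draft your instructions in a text editor first, a trivial length check saves a failed paste. A minimal sketch, assuming the 1,500-character-per-field cap described above (`fits_field` and the draft text are just illustrative names):

```python
# Quick sanity check before pasting a draft into a Custom Instructions field.
# The 1,500-character cap is per field, as described in the table above.
def fits_field(text: str, limit: int = 1500) -> bool:
    return len(text) <= limit

draft = (
    "Default to <=120 words. Skip preambles like 'Great question!'. "
    "No bullet lists unless I ask. If unsure, ask one clarifying question."
)
print(fits_field(draft), len(draft))
```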
Writing One That Actually Works
Most beginner system prompts fail the same way: they’re too polite and too vague. “Please be helpful and clear” tells ChatGPT nothing it doesn’t already attempt to do.
Use behavior, not adjectives. Compare:
BAD

```
Be professional and concise.
```

GOOD

```
Default to ≤120 words. Skip preambles like "Great question!".
No bullet lists unless I ask. If you're unsure, ask one
clarifying question instead of guessing.
```
The difference shows up immediately. Paste both into a fresh chat, then ask the same question – the “BAD” version produces the same wordy, hedge-filled response ChatGPT gives by default. The “GOOD” version fails occasionally (models drift on long sessions) but the first reply is almost always shorter and more direct. The good version is testable. The bad version is a vibe. Models can’t follow vibes consistently.
One rule that matters: write instructions in the imperative and put your hardest constraint at the top. ChatGPT’s attention drops off in long instruction blocks – the same way yours does on a long email. If you only have one rule, make it line one.
Splitting Across Layers
You’ll hit the ceiling on Custom Instructions quickly once you want different behavior for coding versus writing versus research. Stuffing all three personas into 1,500 characters produces a confused mash.
Custom GPTs solve this. Each one holds roughly 8,000 characters of instructions (per OpenAI’s GPT authoring guidelines), plus uploaded files for reference material. Build one GPT for code reviews with strict formatting rules, another for marketing drafts. They don’t share state, so the instructions stay clean.
Paid users have a third option. Projects let you group conversations under their own instructions – and those instructions overrule the global Custom Instructions for anything inside that project. Useful for compartmentalizing work. But if you ever wonder “why is ChatGPT acting strange in this one chat,” check whether you’re inside a Project with a forgotten instruction that contradicts your globals.
For developers, there’s the original mechanism: in the Chat Completions API, you set {"role": "system", "content": "..."} as the first message. There’s no API endpoint for Custom Instructions – the API system message is the equivalent, and you control it fully per call.
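A minimal sketch of that per-call stack in Python. Only the list construction is shown as runnable; the model name and the network call (which needs an API key) are illustrative and commented out:

```python
def build_messages(system_prompt: str, user_msg: str) -> list:
    """Assemble the per-call stack: the system message must come first."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_msg},
    ]

messages = build_messages(
    "Default to <=120 words. Skip preambles.",
    "Explain what a context window is.",
)

# With the official `openai` package you would then send it like this
# (commented out because it requires an API key and network access):
#   from openai import OpenAI
#   client = OpenAI()
#   client.chat.completions.create(model="gpt-4o", messages=messages)
print(messages[0]["role"])  # -> system
```

Because you rebuild this list on every call, the API gives you per-request control that the UI’s global Custom Instructions can’t: two calls in the same script can carry entirely different system prompts.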
Honest Limitations
Four things worth knowing before you trust system prompts with anything important.
They eat your context. Every character in your Custom Instructions counts against the conversation’s token budget. That’s not a minor footnote – long instructions quietly shrink the space left for your actual conversation. The 1,500-character-per-field limit isn’t just a UX decision; it’s also roughly the point where the token cost starts to sting.
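A back-of-envelope estimate makes the cost concrete. The ~4 characters-per-token figure below is a rough heuristic for English text, not an exact count – use a real tokenizer (e.g. tiktoken) for precision:

```python
# Rough token cost of a maxed-out Custom Instructions field.
# ~4 characters/token is a crude heuristic for English text;
# a real tokenizer (e.g. tiktoken) gives exact counts.
def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)

field = "x" * 1500          # one field filled to the cap
print(rough_tokens(field))  # -> 375
```

Two maxed-out fields therefore cost on the order of 750 tokens, paid on every single message in every chat.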
Models apply them inconsistently. Turns out not every model in ChatGPT’s lineup treats Custom Instructions the same way. Community testing on the OpenAI Developer Forum (as of mid-2025) found that some models – GPT-4o, o1, o3-mini, and others – incorporate the settings differently; some ignore them outright or apply them inconsistently. If you switch models mid-session and behavior changes sharply, your Custom Instructions may be the cause.
OpenAI’s own announcement admits imperfection. From the original July 2023 blog post: ChatGPT won’t always interpret custom instructions perfectly – at times it might overlook them, or apply them when not intended. Plan for occasional drift. If a reply ignores your rules, regenerate rather than writing a longer prompt to fix the model.
Edits don’t apply retroactively. Per OpenAI’s Help Center, updates to your instructions only show up in future conversations. Old chats run on the version that was active when they started. To fully remove old behavior from a previous chat, you’d need to clear it from your history entirely.
One Thought Before You Go
The more interesting question isn’t “what should my system prompt say.” It’s whether you should write one at all yet. If you’ve used ChatGPT for less than a month, you probably don’t know what you actually want it to do differently – and writing instructions before you’ve felt the real friction just bakes in wrong assumptions.
Use it raw for two weeks. Note every reply that frustrated you. Then write the prompt. The instructions will be sharper because they’re solving real problems instead of imagined ones.
FAQ
Are system prompts and Custom Instructions the same thing?
Technically no, functionally yes. “System prompt” is the API concept; “Custom Instructions” is OpenAI’s UI wrapper for chatgpt.com users.
Can ChatGPT ignore my Custom Instructions entirely?
Yes – and OpenAI has said so directly. The original Custom Instructions announcement explicitly noted that ChatGPT might overlook instructions or apply them at the wrong moment. This isn’t a bug people discovered later; it was disclosed upfront. In practice, it happens most often when you switch to a different model mid-session, or when your instruction block is long enough that the model’s attention drifts before it reaches the important rules. Putting your hardest constraint at the top of the second field helps, but it doesn’t eliminate the issue entirely.
Should I bother if I’m a casual user?
Only if one specific thing keeps annoying you. Fix that one thing and stop. Stuffing the field with every preference you’ve ever had backfires – the model loses focus across long instruction blocks, and you end up with something that half-follows six rules rather than reliably following two. One sharp rule beats ten fuzzy ones.
Your next move: open Settings → Personalization right now, write a single line in the second field – “Default to under 120 words unless I ask for detail. Skip preambles.” – and save. Use ChatGPT for the rest of the day. You’ll know within three replies whether you want more rules or fewer.