I spent two hours trying to recreate a character I’d generated. Same prompt, tweaked parameters, different seeds – nothing worked. The face kept changing.
Then I tested two approaches side by side: seed numbers (the old method everyone used to recommend) versus character reference (--cref). One produced four completely different people. The other nailed the same face three times in a row.
Why Seed Numbers Don’t Actually Work
What actually happens with seed numbers: thematic consistency. Same vibe, same color palette, similar composition. But the character’s facial features, proportions, details? They drift every time.
Seed numbers control the initial noise pattern – the random starting point for image generation. Useful for recreating a style or mood. Terrible for locking down a character’s identity across multiple scenes.
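A concrete illustration (the seed value is arbitrary, the prompt hypothetical): run this twice and you get the same image back, but change the scene description and the face changes with it.

portrait of a woman with wavy auburn hair, dramatic lighting --seed 4021 --v 6.1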
The --cref parameter (introduced by Midjourney in March 2024) works differently: it tells Midjourney to analyze this character's features and reproduce them. Face structure, hair, clothing. Not just the aesthetic feel.
You need the same protagonist in five different scenes? The difference matters immediately.
The Moment I Realized Base Image Quality Beats Prompt Precision
I kept writing longer prompts. “Sharp jawline, green eyes, wavy auburn hair to shoulders, freckles across nose, wearing navy blazer.” Still inconsistent.
The problem wasn’t my prompt – it was my reference image. Dramatic lighting. Head turned 45 degrees. Part of the face in shadow. I was asking Midjourney to guess what the rest of my character looked like.
I regenerated the base character: simple, flat studio lighting. Neutral pose. Clean white background. Front-facing. Used that as my --cref reference.
Consistency jumped. Three generations, three recognizable versions of the same person.
Pro tip: Your reference image quality determines everything. Use “editorial photo, [character description], studio lighting, white background” for your base shot. No dramatic angles, no complex backgrounds, no shadows obscuring features. Midjourney needs a clean read of the face.
How --cref Actually Works (The 3-Step Method)
Step 1: Generate your base character
Prompt structure that works:
editorial photo, [age] [gender] [ethnicity], [hair description], [key facial features], [simple clothing], studio lighting, neutral expression, white background --ar 2:3 --v 6.1
Portrait aspect ratio (2:3 or 4:5) captures facial features better than square. Generate four options – pick the one that matches your vision. That’s your anchor.
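Here's what that template looks like filled in (a hypothetical character, not a magic formula):

editorial photo, 30 year old woman, wavy auburn shoulder-length hair, green eyes, light freckles across the nose, navy blazer, studio lighting, neutral expression, white background --ar 2:3 --v 6.1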
Step 2: Copy the image URL
Right-click your chosen image → Copy Image Address. Or use Midjourney’s “Copy > Image Address” option in the web interface.
Step 3: Use --cref in new scenes
[new scene description] --cref [your-image-URL] --cw 100
The --cw parameter sets character weight (per the Midjourney V6 docs). The default, 100, gives maximum consistency: face, hair, and clothing all match. Set it to 0 if you only want the face to match while changing the outfit.
Example:
woman walking through Tokyo street at night, neon signs, rain --cref https://s.mj.run/yourURL --cw 100
Midjourney takes your base character and drops her into that scene. Same face, same hair, same general clothing style (intricate details will drift – more on that next).
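Want the same face in a different outfit? Drop the weight to 0. Same placeholder URL, hypothetical scene:

woman in a red evening gown at a gala, chandelier light --cref https://s.mj.run/yourURL --cw 0

At --cw 0, Midjourney keeps the facial features and follows your prompt for clothing and hair.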
The Three Things Midjourney’s Docs Don’t Tell You
1. V7 changes the workflow and costs 2x more
Character reference (--cref) works on Midjourney V6 and Niji V6. If you're using V7, the parameter changes to --oref (Omni Reference).
Omni Reference costs 2x GPU time compared to regular V7 generations (per Midjourney’s official Omni Reference documentation as of April 2026). It’s also incompatible with Vary Region, Pan, Zoom Out, and Fast Mode.
If you're on the Basic plan ($10/mo as of April 2026, ~200 images), V7's 2x GPU cost effectively halves how many character-consistency images you get from your subscription. Stick with V6 unless you need V7's other improvements.
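If you do switch to V7, the prompt shape stays familiar. A sketch of the earlier Tokyo example, assuming the documented --oref and --ow (omni weight, default 100) parameters and a placeholder URL:

woman walking through Tokyo street at night, neon signs, rain --oref https://s.mj.run/yourURL --ow 100 --v 7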
2. Clothing details fail even at --cw 100
The official docs warn that “intricate details like specific freckles or logos on clothing might not come out exactly right.” Community testing (documented by Christy Tucker) shows patterned fabrics, textured materials, and jewelry fail just as often.
I tested a character in a green blouse with a subtle stripe pattern. The pattern changed every generation – sometimes solid, sometimes textured, sometimes a different shade entirely.
Solid colors work. Complex patterns don’t. If your character’s outfit matters for brand consistency, keep it simple or plan for Photoshop cleanup.
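One practical adjustment: strip the outfit description down to solids in both the base prompt and every scene prompt. Illustrative wording, not a tested recipe:

wearing a plain forest-green blouse, solid color, no pattern, no jewelry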
3. Multi-character scenes break the standard workflow
You can't use two --cref parameters in one prompt. Midjourney will merge the faces into a weird hybrid.
The workaround (not documented in the official character reference guide): generate your scene with one character first. Then use the Vary Region tool, select the area where you want the second character, and add the second character's --cref tag in that region's prompt box.
Clunky. But it’s the only way to get two distinct consistent characters in the same image.
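In practice it's two prompts. A sketch with hypothetical URLs for each character's reference:

First pass: two friends talking at a cafe table, window light --cref https://s.mj.run/characterA --cw 100
Vary Region pass (second character's area selected): man in a gray coat across the table --cref https://s.mj.run/characterB --cw 100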
The Character Sheet Method (When You Need More Than 3 Shots)
A pattern from the community that works for longer projects:
You can reliably get 3-5 consistent shots from a single --cref reference (based on testing documented in SimilarLabs' 2026 Midjourney review). Beyond that, features drift. A comic artist's assessment: "I can get 3-4 consistent shots, then it starts drifting."
If you need 10-15 scenes for a children’s book or marketing campaign, build a character sheet first.
- Generate 3 base angles: front view, side profile, three-quarter view. Same lighting, background, neutral expression.
- Use all three URLs as blended references: --cref URL1 URL2 URL3. Midjourney averages the information from all three.
- Generate your scenes using that blended reference. Multi-angle input gives Midjourney more data, reducing drift over longer sequences.
This doesn’t eliminate drift entirely – but it stretches the consistency window from 3-5 shots to 8-10 before you need to course-correct.
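Put together, a scene prompt against the full sheet looks like this (the three URLs are placeholders for your front, profile, and three-quarter shots):

woman reading under a tree, golden hour --cref https://s.mj.run/front https://s.mj.run/profile https://s.mj.run/quarter --cw 100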
When --cref Isn't the Right Tool
Some projects don’t need character reference.
Generating one-off concept art or exploring visual ideas? Seed numbers and style references (--sref) are faster. You're not maintaining identity – you're iterating on a vibe.
Need frame-by-frame animation consistency? Midjourney isn't the right platform. Even with --cref, you get recognizable characters, not pixel-identical ones. Animation needs tools with structural conditioning (ControlNet, IP adapters) or fine-tuned models trained on your specific character.
But for visual narratives – comics, storyboards, brand mascots across multiple marketing assets – character reference is the first Midjourney feature that actually solves the problem.
A Note on What This Doesn’t Give You
Consistency has a ceiling.
I tested this with a children’s book illustrator who needed the same rabbit character across 12 scenes. We got 9 usable shots before the character’s proportions shifted noticeably. Ears grew longer. Body got rounder. Still recognizably the same rabbit, but not identical.
She used Midjourney for concept and composition, then traced over the outputs in Procreate to lock down the final design. That’s a valid workflow. AI gives you the 80%, you handle the last 20%.
The question isn't whether --cref is perfect. It's whether it's better than starting from scratch every time. And it is.
FAQ
Can I use --cref with photos of real people?
No. According to Midjourney’s official announcement, character reference “works best when using characters made from Midjourney images” and is “not designed for real people/photos (and will likely distort them).”
What's the difference between --cw 0 and --cw 100?
--cw 100 (the default) matches face, hair, and clothing from your reference image. --cw 0 focuses only on facial features, letting you change the character's outfit or hairstyle while keeping the face consistent. Generating the same character in different costumes? Use --cw 0 and describe the new outfit in your prompt.
How many consistent images can I generate before the character drifts?
Community testing shows you can reliably get 3-5 consistent shots from a single reference image before features start drifting. Using a character sheet with multiple angles (front, side, three-quarter) extends this to 8-10 shots. Beyond that, you’ll need to manually correct drift or use one of your better generations as a new reference point. For projects requiring dozens of images, plan for periodic cleanup work – like the rabbit example earlier, where 9/12 worked before proportions shifted.
Next step: Generate your base character right now. Use the studio lighting prompt structure from Step 1. Get a clean reference image. Then test --cref on three different scenes. You'll know within 10 minutes whether this workflow solves your consistency problem.