Here’s the mistake: you find three gorgeous reference images, stack them in your prompt with --sref urlA urlB urlC, crank the style weight to 800, and expect Midjourney to blend them into something coherent.
It doesn’t.
What you get instead is a visual tug-of-war where the AI pulls composition from image A, color palette from image B, and texture from image C – and although it weights them equally by default, equal parameter weight doesn’t mean equal visual influence: one reference always dominates in ways you didn’t predict. The result looks muddled. Not “artfully blended.” Just confused.
The correct approach? Start with one reference. Test it. Then add a second only if you know exactly which visual element you need from it – and assign explicit weights.
Why the Standard Advice Falls Short
Every tutorial tells you the mechanics: add --sref followed by an image URL, adjust --sw between 0 and 1000, done. That’s technically true but practically useless.
What they don’t tell you is that Midjourney rewrote the entire style reference system on June 16, 2025. If you’re using sref codes you found in a library curated before that date, they now produce completely different results unless you add --sv 4 to revert to the legacy algorithm.
Most libraries haven’t updated their examples. You copy a code expecting moody cyberpunk, you get pastel watercolor instead. The code didn’t “break” – it just points to a different internal style now.
Then there’s the weight problem. According to Midjourney’s official documentation, --sw has “more impact when used with style codes than with images” as of V7. Translation: --sw 200 applied to a numeric sref code will warp your output far more aggressively than the same value applied to an image URL. They’re not equivalent operations.
The Workflow That Actually Works
Forget the shotgun approach. Here’s the process I use after testing this feature across 200+ generations since the V7 update.
Step 1: Choose Your Reference Type
You have two options: image URLs or numeric sref codes. Not interchangeable.
Image URLs (--sref https://...) pull style from a specific visual – your own photo, a movie still, someone else’s artwork. Midjourney analyzes colors, lighting, texture, composition. Per the official docs, this works by dragging an image into the Style Reference section on the web UI, or adding --sref plus the URL on Discord.
Numeric codes (--sref 2847361) tap into Midjourney’s internal style library. These are discovered, not created – you can’t generate a code from your own image. Use --sref random to explore, then save codes you like.
Pick one method per prompt initially. Mixing images and codes in your first attempt adds variables you can’t control.
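Side by side, the two forms look like this – same subject, two different style sources (the URL here is a placeholder, and 2847361 is just an example code):

/imagine prompt a fox in an autumn forest --sref https://example.com/your-reference.jpg

/imagine prompt a fox in an autumn forest --sref 2847361

Run them separately so you can tell which mechanism is doing what before you start tuning anything else.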
Step 2: Handle the Version Parameter
This is where everyone stumbles. As of March 2026, the default is --sv 6, with --sv 7 available as the latest algorithm. But if you’re working with sref codes discovered before June 2025, you need --sv 4 to get the original style.
/imagine prompt a cyberpunk street vendor --sref 4729183 --sv 4
Without --sv 4, that code now routes to a different aesthetic. The V7 system is “smarter” – it reduces subject leakage (more on that next) – but it fundamentally changed what each code represents.
New to sref codes entirely? Stick with the default. The issue only affects legacy codes.
Step 3: Start with Default Weight, Then Adjust
Default style weight is --sw 100. Don’t touch it until you see what 100 gives you.
Run your prompt. Evaluate. Ask: does the reference style overpower my subject description, or is it too faint?
- Style too weak? Bump to 150-200 for images, 120-150 for codes (remember, codes react more aggressively).
- Style too strong? Drop to 50-70 for images, 60-80 for codes.
- Want the reference to dominate? 300-500 for images. Codes above 200 get unpredictable fast.
According to community testing documented by Midlibrary’s sref guide, the “sweet spot” for most use cases sits between 65 and 175. Beyond that, you’re in experimental territory.
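One way to run this step as a controlled test is to change only the weight while holding everything else fixed. A sketch of that sequence, using a placeholder URL:

/imagine prompt a golden retriever in a meadow --sref https://example.com/ref.jpg --sw 100

/imagine prompt a golden retriever in a meadow --sref https://example.com/ref.jpg --sw 150

/imagine prompt a golden retriever in a meadow --sref https://example.com/ref.jpg --sw 65

Compare the three outputs before touching any other parameter – if you change the weight and the prompt at the same time, you can’t attribute the difference to either one.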
Pro tip: If you’re using Moodboards, don’t add --sw at all – it’s silently ignored. The official docs confirm --sw is incompatible with Moodboards, but Midjourney won’t error out. It just won’t work, and you’ll waste time wondering why your weight adjustments do nothing.
Step 4: Write Prompts for Content, Not Instructions
The reference handles style. Your text prompt should describe what you want, not how it should look.
Bad: “a dog in the style of this image but make it more painterly”
Good: “a golden retriever sitting in a meadow at sunset”
The style reference already contains “painterly.” Repeating it in text creates conflicting signals. Midjourney’s documentation explicitly advises: “Use your text prompt to describe what you want to see, not how Midjourney should modify the reference image.”
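Putting that together with a reference, a full prompt might look like this (the URL is a placeholder):

/imagine prompt a golden retriever sitting in a meadow at sunset --sref https://example.com/painterly-ref.jpg

The text carries the content; the reference carries the style. No “painterly,” no “in the style of.”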
A Real-World Example: Fixing Subject Leakage
Let’s say you’re using a portrait photograph as your style reference – maybe a cinematic headshot with dramatic lighting and rich color grading. You want that look applied to a fantasy character.
Pre-V7, this often caused “subject leakage.” The AI would pull not just the lighting and color, but facial features, clothing details, even hairstyle from the reference photo into your fantasy character. Your elf wizard ends up wearing a leather jacket because the reference subject wore one.
The June 2025 update claims to fix this. Per the official announcement, the new system is “much less likely to get undesired subject leakage.” In practice? It’s better, not perfect.
Here’s how I work around it when it still happens:
- Lower the style weight to 60-80. Less influence = less leakage.
- Be hyper-specific in your text prompt about the subject. Don’t say “a wizard.” Say “an elderly wizard with a long white beard, wearing flowing purple robes.”
- If the reference is a close-up portrait, switch to a reference with more environmental context. Portraits leak subjects more than environmental shots.
One photographer who tested this extensively noted that “close-ups bring better results than photographs featuring more complex scenes” for style consistency, but the tradeoff is higher leakage risk. You’re choosing between style accuracy and subject independence.
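Combining those workarounds, an anti-leakage version of the elf-wizard prompt might look like this – placeholder URL, weight taken from the low end of the range above:

/imagine prompt an elderly wizard with a long white beard, wearing flowing purple robes, standing in a stone tower --sref https://example.com/cinematic-portrait.jpg --sw 70

The hyper-specific subject description gives the text prompt enough authority that the reference contributes mood and grading rather than facial features.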
What Happens When You Combine Multiple References
You can add multiple image URLs: --sref urlA urlB urlC. By default, Midjourney weights them equally.
This rarely produces what you expect. Image A might have a color palette you love. Image B might have a texture you want. Image C might have a composition style. Midjourney doesn’t know which elements you care about from each – it mashes them together and averages.
The fix: explicit weighting using the multi-prompt syntax.
/imagine prompt mountain landscape --sref urlA::2 urlB::1 urlC::0.5
Now image A has twice the influence of B, and C is just a hint. You’re telling the AI which reference matters most.
Better yet: combine references strategically by role. One community user documented using grain textures, light leaks, and color gradients as separate references – each contributing a distinct technical effect rather than competing aesthetics. That works because the references aren’t fighting over the same visual territory.
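With role-based references, explicit weights map each image to its job. A sketch of that setup (all three URLs are placeholders standing in for a grain texture, a light leak, and a color gradient):

/imagine prompt mountain landscape at dawn --sref https://example.com/grain.jpg::2 https://example.com/leak.jpg::1 https://example.com/gradient.jpg::0.5

Because each reference contributes a distinct technical effect, the weights express priority rather than settling a fight over the same visual territory.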
The Hidden Versioning System Nobody Explains
Here’s something most tutorials skip entirely: V7 has seven different style reference algorithms, selectable via --sv 1 through --sv 7.
| Version | Behavior | When to Use |
|---|---|---|
| --sv 4 | Old V7 model (pre-June 2025) | Legacy sref codes, vintage aesthetics |
| --sv 6 | Current default | General use, balanced interpretation |
| --sv 7 | Latest (as of March 2026) | Moodboards, least subject leakage |
| --sv 1, 2, 3, 5 | Experimental variations | When default results feel wrong |
According to one in-depth comparison test, versions 1 and 3 were originally described as “vibey,” offering looser artistic interpretation. Version 6 is conservative and predictable. Version 7 is the most recent and works best when you want the style without any subject contamination.
The practical takeaway: if your reference isn’t translating the way you expect, try a different --sv value. This is especially true for illustrations versus photographs – different versions treat each input type with varying levels of creative liberty.
Note that --sref random and numeric sref codes only work with --sv 4 and --sv 6. The other versions require image URLs.
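A quick troubleshooting pass on a misbehaving reference might cycle through versions like this (placeholder URL; note that if you were using a numeric code instead, only --sv 4 and --sv 6 would accept it):

/imagine prompt a rainy harbor at night --sref https://example.com/ref.jpg --sv 6

/imagine prompt a rainy harbor at night --sref https://example.com/ref.jpg --sv 7

/imagine prompt a rainy harbor at night --sref https://example.com/ref.jpg --sv 3

Keep everything else identical so the only variable is the algorithm interpreting your reference.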
What I Wish I’d Known from the Start
Style reference isn’t intuitive. The feature looks simple – add a parameter, done – but the underlying behavior has more gotchas than any other Midjourney feature I’ve tested.
You can’t upload an image and extract its sref code. Codes exist in Midjourney’s internal library; you discover them via --sref random, you don’t create them. This confused me for weeks.
Style weight behaves inconsistently between images and codes. A --sw 150 applied to an image might be subtle. The same value on a code might obliterate your prompt. Test every time.
The V7 update in June 2025 wasn’t just a tweak – it fundamentally changed how the system interprets style. Any advice or code library from before that date is suspect unless explicitly updated.
And here’s the part nobody talks about: sometimes the reference just doesn’t transfer. Midjourney’s style analysis pulls what it considers salient – colors, broad compositional elements, lighting mood. If the aspect you love about a reference image is a subtle detail (a specific brushstroke texture, a nuanced gradient), the AI might not prioritize it. You can’t force granular control. That’s the tradeoff.
Your Next Steps
Pick one reference image or code. Run it at --sw 100. See what transfers. Then adjust one variable – weight, version parameter, or text prompt specificity. Repeat.
Don’t stack three references on your first attempt. Don’t assume old sref codes work the same in V7. Don’t ignore the version parameter if you’re troubleshooting unexpected results.
The feature is powerful when you understand its constraints. Frustrating when you don’t.
Can I use style references with other Midjourney features?
Yes. Style references work alongside personalization, image prompts, and most parameters. The exception: --sw doesn’t work with Moodboards (it’s silently ignored), and Omni Reference costs 2x GPU time when combined with sref. You can layer them, but GPU cost and parameter conflicts are real considerations.
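As an illustration of layering, this hypothetical prompt combines an image prompt with a style reference – both URLs are placeholders, and the image prompt goes before the text as usual:

/imagine prompt https://example.com/subject.jpg a knight standing in a castle courtyard --sref https://example.com/style.jpg --sw 100

Here the image prompt influences what appears, while the style reference shapes how it’s rendered.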
Why does my sref code from a library produce completely different results now?
The V7 style reference system changed on June 16, 2025. Old codes now map to different internal styles. Add --sv 4 to your prompt to use the legacy algorithm and get the original style. If that still doesn’t match, the library’s example might have been generated on V6 – switch your version to --v 6 instead.
What’s the actual difference between style weight and the stylize parameter?
Style weight (--sw) controls how strongly your reference influences the output – only relevant when you’ve added a sref image or code. Stylize (--s) controls how much of Midjourney’s own aesthetic training gets applied, regardless of references. Low stylize = literal prompt adherence. High stylize = more artistic interpretation. They’re independent dials affecting different parts of the generation process.
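To see them as independent dials, compare these two prompts (2847361 is just an example code): same reference, same style weight, but the first keeps Midjourney’s own aesthetics out of the way while the second invites heavy artistic interpretation on top of the reference.

/imagine prompt a lighthouse at dusk --sref 2847361 --sw 150 --s 50

/imagine prompt a lighthouse at dusk --sref 2847361 --sw 150 --s 750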