The end goal of this tutorial is a short alphanumeric string – something like --p a1b2c3d – that you can paste at the end of any prompt and get back images that look like your work. Same colors. Same lighting mood. Same compositional instincts. Reusable for a year. Shareable with a collaborator. That string is the entire payoff of Midjourney personalization, and most tutorials bury it under setup steps.
So we’ll work backwards. First the result, then the setup, then the parts no one talks about – like why your second profile usually beats your first.
What you actually end up with
A trained Midjourney personalization profile produces a code that acts like a style fingerprint. Attach it to any prompt with --p and Midjourney nudges the output toward the aesthetic it learned from you – color palette, lighting, framing, level of realism, the works. As you select images, Midjourney learns what kinds of aesthetics you prefer and uses that to generate images tailored to your taste.
The practical version: a freelance illustrator can train one profile for moody editorial portraits and another for clean product mockups, then swap between them by changing one code in the prompt. You can create multiple Personalization profiles, each with a different style and each producing its own unique code to use in a prompt.
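That swap can be as mechanical as it sounds. A minimal Python sketch (the profile names and codes here are invented for illustration, not real Midjourney codes) that keeps named profiles in one place and switches by name:

```python
# Hypothetical profile codes -- in practice, copy these from the Personalize page.
PROFILES = {
    "editorial": "a1b2c3d",   # moody editorial portraits (example code)
    "product": "e4f5g6h",     # clean product mockups (example code)
}

def with_profile(prompt: str, name: str) -> str:
    """Append the named profile's personalization code via --p."""
    return f"{prompt} --p {PROFILES[name]}"

# Same subject, two different fingerprints:
#   with_profile("a ceramic mug on linen", "editorial")
#   with_profile("a ceramic mug on linen", "product")
```

Only the code at the end changes; the prompt text stays identical.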
The two training paths (pick one deliberately)
There are two ways to train a profile, and they produce different kinds of styles. Most guides treat them as interchangeable. They’re not.
| Path | What you do | Style it produces |
|---|---|---|
| Ranked Profile | Select favorites from grids of Midjourney-generated images | Broad, taste-driven, slow to shift |
| Moodboard | Upload your own reference images | Narrow, intentional, brand-friendly |
If you want a personal aesthetic that follows you across topics, rank. If you want to lock in a client’s visual identity, build a moodboard. Per the official Moodboards docs, moodboards let you select specific images to express your vision across a wider aesthetic range than Style References, which are more specific to a single look.
How to actually train it (the setup the keyword sent you here for)
Here’s the workflow for Midjourney personalization on the web app, as of mid-2025. Per the official docs, the old image-pair ranking has been replaced – rating image pairs is out; selecting images from a grid is what the feature uses now.
- Open the Personalize page from the left sidebar.
- Click the Global Profile for the version you actually use (V6, V7, or V8.1). In the thumbnail area you'll see a grid of images; click the ones you like best and keep scrolling.
- Select roughly 200 images. This number isn’t a hard requirement in the docs, but it’s the community-converged floor for the feature to work effectively.
- Once personalization unlocks, click New Profile to create a specialized profile (e.g. “editorial-portraits”) so your Global Profile stays general.
- Grab the code: hover over the profile, copy the ID, and paste it into a prompt as --p yourID.
One detail that trips people up: when you submit a prompt using a Personalization profile ID, it automatically converts from --p pID to --p code at submit time. So the code in your prompt history won’t match the ID on the Personalize page. That’s expected, not a bug.
Selecting images is not clicking – it’s curating
The single biggest mistake is treating the 200 selections like a checkbox. Click pretty thing, click pretty thing, done. The profile that comes out will be muddy because your selections were muddy.
Before you start, decide what the profile is for. Write it down in one sentence: “warm, grain-heavy 35mm portraits with shallow depth” or “flat vector illustration with two-color palettes.” Then select against that sentence, not against general appeal. If an image is gorgeous but doesn’t match your sentence, skip it.
Pro tip: Build a throwaway profile first. Spend 50 selections deliberately wrong – pick a style you’d never use. When you generate with that profile’s code, you’ll see exactly how much signal Midjourney is extracting from your clicks. It’s a calibration exercise, and it makes your real profile sharper.
Stacking, weighting, and combining codes
Once you have two or more profiles, the interesting part starts. You can stack codes in a single prompt:
a quiet cafe interior at dawn --p code1 code2 --stylize 250
Multiple profile codes stack when placed one after the other with a space – for example, --p 6odeoas 7enzken. Useful when one profile carries your color sense and another carries your composition habits. They blend.
Intensity control goes through --stylize (alias --s), which accepts values from 0 to 1000 with a default of 100. Push it to 400+ if your profile feels too subtle. Drop to 50 if it’s overpowering the subject.
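Putting the stacking and intensity rules together: a small builder (a sketch, using the 0–1000 --stylize range stated above) that joins multiple codes after one --p and rejects out-of-range stylize values instead of passing them through:

```python
def build_prompt(subject: str, codes: list[str], stylize: int = 100) -> str:
    """Compose a prompt with stacked profile codes and a --stylize value.

    --stylize accepts 0-1000 (default 100); anything else raises here
    rather than being silently submitted.
    """
    if not 0 <= stylize <= 1000:
        raise ValueError("--stylize must be between 0 and 1000")
    return " ".join([subject, "--p", *codes, "--stylize", str(stylize)])
```

Calling it with two codes and stylize 250 reproduces the cafe example shown earlier.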
Personalization also stacks with a Style Reference. Combining --sref and --p in the same prompt blends a one-off mood with your ambient fingerprint – though there’s a parameter catch worth knowing about, covered next.
The gotchas competitors leave out
The official docs get terse here. Community knowledge fills the gap.
Version splits are real. Personalization is compatible with V6 and V7, and each Midjourney version has its own Global Profile – a V6 profile won’t carry its learned taste into V7. Moving to V8.1 adds another wrinkle: your V7 profile technically works there, but only once your V8.1 Global Profile is separately unlocked. That compatibility is conditional, not automatic.
Explore likes silently train you. Liking images on the Explore page influences your Global Profiles. The version of the liked image determines which profile gets updated – like a V6 image and you’re training your V6 Global Profile, not your V7 one. If you’ve been liking everything in sight, your Global Profile already has opinions you didn’t intend.
Moodboards reject --sw. Most guides recommend --sw to fine-tune style intensity. With moodboards, it’s silently ignored – moodboards cannot be used with Style Reference Version (--sv) or Style Weight (--sw). Use --stylize instead.
Deleting doesn’t delete. When you delete a Personalization profile it disappears from the list and you can no longer train it – but any codes already generated from it keep working. So if you shared a code with a client before deleting the profile, they can keep generating with it. Whether that’s a feature or a privacy problem depends on the situation.
The code list isn’t labeled. Older codes are visible via /list_personalize_codes in Discord, but the command doesn’t show which code belongs to which Personalized Profile. You’ll need to match them by the date the code was created. Rename profiles aggressively and keep a separate doc mapping codes to dates.
How do you know your profile is actually good?
This is the uncomfortable question. You spent an hour selecting images. The code works. The images look… fine? Are they actually your style, or are they just Midjourney’s averaged taste with a slight tint?
Honestly – there’s no benchmark. Midjourney’s own announcement described personalization as a feature “constantly in flux” that changes as you do more selections, with the possibility of algorithm updates at any time. That post is older, but the spirit holds: the algorithm under the hood is not a fixed contract.
The practical test I use: generate the same prompt three times with seed locked – once without --p, once with your profile, once with a profile you know is wrong (your throwaway from earlier). If the middle output is closer to the wrong one than the default, your profile isn’t doing much. If it diverges meaningfully in a direction that feels intentional, you have something worth keeping.
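The three runs are easy to script as prompt variants (a sketch; --seed is Midjourney's parameter for locking the noise seed, and the codes here are placeholders):

```python
def seed_test_prompts(subject: str, profile: str, throwaway: str, seed: int = 1234) -> list[str]:
    """Return the baseline, real-profile, and wrong-profile prompts, seed locked."""
    base = f"{subject} --seed {seed}"
    return [base, f"{base} --p {profile}", f"{base} --p {throwaway}"]
```

If the second and third outputs look alike, the profile is adding little beyond noise.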
FAQ
Can I use someone else’s personalization code?
Yes – paste their code after --p in your prompt. You just can’t keep training their profile; only the original owner can add new selections to it.
Does personalization work across different model versions?
Each Midjourney model version maintains its own Global Profile. Switching between major versions means your existing ranked profile won’t carry over – you’d need to enable and train the new version’s profile separately. This catches people who assume one round of 200 selections covers every model they use going forward.
What’s the difference between --p and --sref?
A Style Reference borrows the look of a specific image or code for one prompt. Personalization applies a learned profile across all your prompts. --sref is targeted; --p is ambient. They stack well together when you want both a one-off mood and your usual fingerprint underneath.
Next step: open the Personalize page right now, name a throwaway profile “calibration-test,” pick 50 images deliberately against your taste, and generate one prompt with its code. Five minutes of doing this teaches you more about how the feature behaves than any tutorial – including this one.