Okay, so yesterday I was messing around with Stable Diffusion, trying to get it to generate some consistent characters. You know, the kind of thing where you can feed it a prompt and get the same person showing up in different scenes. Turns out, it’s a bit trickier than just typing “same person” into the prompt box.

First things first: I hopped onto my usual SD setup, Automatic1111, cranked it up, and started with a basic prompt: “lost little boy, dark hair, scared, in a dense forest”. Hit generate. Got a decent image, but it was just a random kid.
Then I started digging around online, trying to figure out how people were actually doing this. Someone mentioned using “character LoRAs”. I was like, “LoRA what now?”. Turns out, a LoRA (Low-Rank Adaptation) is basically a small add-on model that teaches SD what a specific character looks like. I didn’t have one for “lost little boy,” so that was a no-go.
So I backtracked and tried a different approach: seed control. Every image SD generates starts from a “seed” number. Reuse the same seed with the exact same prompt and settings and you get the identical image; reuse it with a slightly tweaked prompt and you usually get something very similar.
Here’s what I did:
- Generated my initial “lost little boy” image.
- Checked the image metadata for the seed number (it’s usually displayed in the UI).
- Wrote that seed number down.
- Changed the prompt slightly: “lost little boy, dark hair, scared, in a dense forest, close-up”. Kept the same seed number.
- Hit generate again.
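The steps above can be sketched as a tiny script against Automatic1111’s web API (available when the UI is launched with the `--api` flag). The endpoint and field names follow that API, but the local port, the seed value, and the helper function are my own placeholders:

```python
# Sketch of the seed-reuse trick via Automatic1111's txt2img API.
# Assumes the web UI is running locally with --api; adjust the URL for your setup.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # default local address


def build_payload(prompt, seed, steps=20):
    """Build a txt2img request; reusing `seed` keeps the composition stable."""
    return {
        "prompt": prompt,
        "seed": seed,  # -1 means random; a fixed value makes runs repeatable
        "steps": steps,
        "width": 512,
        "height": 512,
    }


# Placeholder seed -- use whatever number your first render reported.
base = build_payload("lost little boy, dark hair, scared, in a dense forest", seed=1234567)
closeup = build_payload(
    "lost little boy, dark hair, scared, in a dense forest, close-up", seed=1234567
)

# Same seed, slightly different prompt -> similar (not identical) kid.
assert base["seed"] == closeup["seed"]

# To actually generate, POST the payload while the UI is running:
# import requests
# r = requests.post(API_URL, json=closeup)
```

The POST itself is commented out since it needs the UI running; the point is just that the seed field travels with every request.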
Result: The new image was a close-up, but it still wasn’t exactly the same kid. Close, but not quite. The nose was different, the eyes were a slightly different shape. Frustrating.

Next, I tried adding more detail to the prompt. Things like “wearing a blue jacket,” “small nose,” “large brown eyes.” Tried the seed thing again. Still not perfect, but getting closer. I think the specific wording really matters here.
Then I thought, “Okay, let’s add a second character to the mix.” I added “a woman with red hair, worried expression” to the prompt. This is where things got really interesting (and messy).
I wanted to create a narrative where the woman was searching for the boy. So, the next few prompts were variations on:
- “lost little boy, dark hair, scared, in a dense forest, blue jacket, small nose, large brown eyes” (seed kept the same)
- “woman with red hair, worried expression, searching for a lost boy in a dense forest” (new seed)
- “woman with red hair hugging lost little boy, reunited, dense forest” (new seed)
The big problem was that the “woman” kept changing! One image she’d have a long nose, the next she’d have a round face. Makes sense in hindsight, since I was giving her a brand-new seed every time. It was a complete mess. I realized I needed a way to keep her consistent too.
Enter: Image-to-Image (img2img). This is where you feed SD an existing image and tell it to create variations on that image.

Here’s what I ended up doing:
- Generated a “base” image of the woman with red hair. I kept the seed number.
- Used that image as the input for img2img.
- Added the “searching for a lost boy” part to the prompt.
- Played around with the “denoising strength” slider. This controls how much the output image is changed from the input image. Lower denoising strength means it sticks closer to the original.
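That img2img step looks roughly like this, again borrowing Automatic1111’s API field names (`init_images`, `denoising_strength`); the base64 string and seed below are stand-ins for a real base render, not actual values:

```python
# Sketch of the img2img step: reuse a "base" portrait and vary the scene.
# Field names follow Automatic1111's /sdapi/v1/img2img API; the "image"
# here is fake bytes standing in for the real base render of the woman.
import base64


def build_img2img_payload(init_image_b64, prompt, denoising_strength=0.45):
    """Lower denoising_strength keeps the output closer to the input image."""
    assert 0.0 <= denoising_strength <= 1.0
    return {
        "init_images": [init_image_b64],  # base64-encoded input image(s)
        "prompt": prompt,
        "denoising_strength": denoising_strength,
        "seed": 424242,  # placeholder fixed seed for repeatability
    }


# Stand-in for the PNG bytes of the woman's base image.
fake_png = base64.b64encode(b"\x89PNG...base image bytes...").decode()

payload = build_img2img_payload(
    fake_png,
    "woman with red hair, worried expression, searching for a lost boy in a dense forest",
    denoising_strength=0.4,  # low enough to keep her face from drifting
)
```

In practice I just nudged the strength slider up or down per scene; the payload makes it obvious that it’s one knob riding alongside the prompt.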
The result: Much better! The woman was now consistently the same person across multiple images. I could then tweak the prompts and the denoising strength to create different scenes of her searching for the boy.
It’s still not perfect. Getting the lighting and the poses exactly right is still a challenge. And sometimes the characters just look… off. But it’s a huge step up from just randomly generating images.
My final “workflow” (if you can call it that) was something like this:
- Generate a “base” image for each character (boy and woman) using text prompts and consistent seeds.
- Use img2img with the base images to create variations and add context to the scene.
- Tweak prompts and denoising strength to fine-tune the results.
- Accept that some images will just be bad and delete them.
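Written out as code, the workflow is basically a loop over scenes. `generate_img2img` here is a pure placeholder (not a real API call), and the file names and scene list are made up for illustration:

```python
# Hypothetical batch loop over the workflow above: one base image per
# character, then img2img variations per scene. generate_img2img is a
# placeholder for whatever client call you use (e.g. a POST to the A1111 API).
def generate_img2img(base_image, prompt, denoising_strength):
    # Placeholder: return a fake "result" record instead of rendering anything.
    return {"prompt": prompt, "strength": denoising_strength, "base": base_image}


# Step 1: one base image per character, rendered once with a fixed seed.
characters = {
    "boy": "base_boy.png",
    "woman": "base_woman.png",
}

# Steps 2-3: scene prompts plus a per-scene denoising strength to tweak.
scenes = [
    ("woman", "searching for a lost boy in a dense forest", 0.4),
    ("woman", "calling out between the trees, dusk light", 0.45),
    ("boy", "huddled under a pine tree, scared", 0.4),
]

results = []
for character, scene_prompt, strength in scenes:
    results.append(generate_img2img(characters[character], scene_prompt, strength))
    # Step 4: eyeball each result and delete the bad ones by hand.
```

Nothing clever, but seeing it as a loop made it obvious that the base images are the only things worth fussing over; everything downstream is cheap to regenerate.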
Final Thoughts: This whole thing is way more time-consuming than I thought it would be. But it’s also incredibly powerful. The ability to create consistent characters and tell a visual story is pretty amazing. I’m definitely going to keep experimenting with this and see what else I can come up with.

Oh, and one last thing: I ended up naming the woman Rose, and the boy Bernard. Just felt right.