A few more shots from our trip to the islands. The first round is available here if you missed it.
The setting for these images was generated from my second trained Stable Diffusion model, based on Chapter 5 of Red Dead Redemption 2. For those interested, you can grab a copy at CivitAI: https://civitai.com/models/169799.
As always, images link to full-size PNGs that contain prompt metadata.
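If you'd rather pull the prompt out of a PNG yourself instead of using a website, the metadata lives in standard PNG tEXt chunks. This is a minimal stdlib-only sketch that assumes the generator wrote Automatic1111-style metadata under the `parameters` keyword; some tools use iTXt or EXIF instead, which this doesn't handle:

```python
import struct

def read_png_text_chunks(data: bytes) -> dict:
    """Return a dict of keyword -> value for every tEXt chunk in a PNG.

    Automatic1111-style generators store the prompt under the
    'parameters' keyword; other tools may use iTXt or EXIF instead.
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    text, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            body = data[pos + 8:pos + 8 + length]
            key, _, value = body.partition(b"\x00")
            # tEXt payloads are Latin-1 per the PNG spec
            text[key.decode("latin-1")] = value.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 (length) + 4 (type) + body + 4 (CRC)
    return text

# Usage: read_png_text_chunks(open("image.png", "rb").read()).get("parameters")
```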
No ControlNet or inpainting. Everything was generated in one go with a single prompt. I'll sometimes use regional prompts to set zones for the head and torso (usually the top 40% is where the head goes, the bottom 60% is for the torso/outfit). But even when I have regional prompting turned off, it will still generate a 3/4 ("cowboy") shot.
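For anyone curious how that 40/60 vertical split maps to pixels, here's a quick sketch. The ratio-to-pixel math is illustrative only; the Regional Prompter extension does the real region masking internally:

```python
def row_boundaries(height: int, ratios):
    """Map top-to-bottom region ratios to pixel row spans."""
    spans, top = [], 0
    for r in ratios:
        bottom = top + round(height * r)
        spans.append((top, bottom))
        top = bottom
    return spans

# A 40% head / 60% torso split on a 768px-tall image:
print(row_boundaries(768, [0.4, 0.6]))  # [(0, 307), (307, 768)]
```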
I assume you pulled the prompt out of one of my images? If not, you can feed them into pngchunk.com. Here’s the general format I use with regional prompting:
*scene setting stuff* ADDCOMM *head / hair description* ADDROW *torso/body/pose*
The LoRAs in the top (common) section are weighted pretty low, 0.2 - 0.3, because they get repeated/multiplied in each of the two regional rows. So I think in the end they're effectively around 0.6 - 0.8.
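As a rough sketch of that arithmetic: if the common (ADDCOMM) section is duplicated into each regional row, a LoRA tagged there is applied once per row, so its effective strength scales with the row count. This is a simplification; the exact stacking behavior depends on the Regional Prompter implementation and version:

```python
def effective_lora_weight(common_weight: float, num_applications: int) -> float:
    """Rough effective strength when a common-section LoRA tag is
    repeated across regional rows. Illustrative only: the real
    behavior depends on the Regional Prompter extension."""
    return common_weight * num_applications

# A 0.3 common weight applied in two rows lands around 0.6:
print(effective_lora_weight(0.3, 2))  # 0.6
```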
Prompt example:
photo of a young 21yo (Barbadian Barbados dark skin:1.2) woman confident pose, arms folded behind back, poised and assured outside (place cav_rdrguarma:1.1), (Photograph with film grain, 8K, RAW DSLR photo, f1.2, shallow depth of field, 85mm lens), masterwork, best quality, soft shadow (soft light, color grading:0.4) ADDCOMM sunset beach with ocean and mountains and cliff ruin in the background , (amethyst with violet undertones hair color in a curly layers style:1.2), perfect eyes, perfect skin, detailed skin ADDROW choker , (pea green whimsical unicorn print bikini set:1.1) (topless:1.3) cameltoe (undressing, panty pull:1.4) (flat breast, normal_nipples :1.4), (tan lines, beauty marks:0.6) (SkinHairDetail:0.8)
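The ADDCOMM/ADDROW structure above can be sketched as a simple splitter, assuming the Regional Prompter convention of one common section followed by one section per row (a minimal sketch; the extension itself handles the weighting and region masks):

```python
def split_regional_prompt(prompt: str):
    """Split an ADDCOMM/ADDROW prompt into (common, [row1, row2, ...])."""
    common, _, rest = prompt.partition(" ADDCOMM ")
    if not rest:  # no common section present
        common, rest = "", prompt
    rows = [r.strip() for r in rest.split(" ADDROW ")]
    return common.strip(), rows

common, rows = split_regional_prompt(
    "scene setting stuff ADDCOMM head / hair description ADDROW torso/body/pose"
)
# common == "scene setting stuff"
# rows == ["head / hair description", "torso/body/pose"]
```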
It may be that you’re not describing the clothing / body enough? My outfit prompts are pretty detailed, so I think that goes a long way for Stable Diffusion to determine how to frame things.
I hope you’re not saying “reverse engineer” like it’s a negative or shady practice. I freely share all of my prompts to help people see what’s working for me, and I like to explore what’s working for everyone else. I’ve had good success with simpler prompts too, like the one for this parrot: https://civitai.com/images/3050333.