Prompts are variations on:
An 8k real photo of a woman viewing a statue. ADDBASE
On the left, a (blonde:1.1) college tour guide standing in a museum. ADDCOL
An empty museum. ADDCOL
On the right, a corroded statue in a fountain with a beatific expression. screaming and moaning in pleasure, suddenly (overcome by the most powerful orgasm ever:1.2). Trapped in the instant of climax. Water emerges (from her vagina:1.2), between her spread legs, to fill a basin. Clitoris, spread gushing pussy.
Negative prompt: child, childish, penis, ugly, weird face, distorted face, weird face, dour, bad-hands-5, anime, drawing, CGI render, pointy fingers, tiptoes, high heels, screen, poster, tongue licking, pink tongue, backlit
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2423140287, Size: 768x768, Model hash: c4625c7cd3, Model: qgoPromptingreal_qgoPromptingrealV1, RP Active: True, RP Divide mode: Matrix, RP Matrix submode: Horizontal, RP Mask submode: Mask, RP Prompt submode: Prompt, RP Calc Mode: Attention, RP Ratios: "3,1,3", RP Base Ratios: 0.2, RP Use Base: True, RP Use Common: False, RP Use Ncommon: False, RP Change AND: False, RP LoRA Neg Te Ratios: 0, RP LoRA Neg U Ratios: 0, RP threshold: 0.4, Version: v1.3.0
Again I got good results from the A1111 regional prompter extension, which keeps the statue and the human straight. You need it for the ADDCOL stuff to work.
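As I understand the Matrix/Horizontal mode, the "RP Ratios: 3,1,3" string splits the image width proportionally into the three columns (guide / empty museum / statue). A quick sketch of that arithmetic, assuming a simple proportional split (I haven't checked the extension's exact rounding):

```python
# Rough sketch of how "RP Ratios: 3,1,3" might split a 768px-wide image
# into three horizontal regions. Proportional split is an assumption.
def column_widths(ratios_str: str, width: int) -> list[int]:
    ratios = [float(r) for r in ratios_str.split(",")]
    total = sum(ratios)
    # Give each region its proportional share, rounded to whole pixels.
    widths = [round(width * r / total) for r in ratios]
    # Nudge the last column so the widths sum exactly to the image width.
    widths[-1] += width - sum(widths)
    return widths

print(column_widths("3,1,3", 768))  # → [329, 110, 329]
```

So the two side regions each get roughly 3/7 of the canvas and the empty middle gets the remaining sliver.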
Exact prompts are embedded in the PNGs.
I tried it with LoRAs; it works OK, but the LoRA applies over the whole image, and the LoRAs seem to make the model worse at reading comprehension, which is exactly what I am fighting with the regional prompter in the first place.
Really what I want is to apply the LoRA during some generation steps but not others, but I haven't worked out how to do that yet.
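The behavior I'm after is basically a step-gated weight schedule: full LoRA strength inside a chosen window of the denoising steps, zero outside it. A minimal sketch of the idea (the names here are made up for illustration, not any extension's real API):

```python
# Hypothetical step-gated LoRA weight: strength inside [start, stop],
# zero elsewhere. Illustrates the desired behavior, not a real API.
def lora_weight(step: int, start: int, stop: int, strength: float = 1.0) -> float:
    return strength if start <= step <= stop else 0.0

# e.g. apply the LoRA only for the first 10 steps of a 20-step run
schedule = [lora_weight(s, 1, 10) for s in range(1, 21)]
print(schedule)  # 1.0 for steps 1-10, then 0.0 for steps 11-20
```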
Have you looked at this LoRA-managing extension? I saw it mentioned on the regional prompter GitHub readme.
Hm. I’ve never really figured out AND, but this might be useful. Thanks!

Let me know what you find and if it’s useful! Cheers.
OK, I tried it via the maintained fork here, and it does something, but it doesn’t really let you zero out a LoRA. Here is the model not understanding my prompt at all and drawing garbage, and then here it is when I increase the LoRA weight in the prompt from zero and it draws something different. In both cases I am using the extension to tell the LoRA not to come in until step 21 of my 20-step run. In both cases I told the extension to plot the LoRA weight it thinks it is using, and it was 0 at all steps. But clearly having the LoRA in there did something even when it was supposed to be at weight 0.
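For what it's worth, the arithmetic says weight 0 should be a perfect no-op: if the LoRA is applied as W' = W + weight · ΔW, then weight = 0 reproduces W exactly. A tiny sanity check of that claim (plain lists standing in for weight tensors):

```python
# If a LoRA is merged as W' = W + weight * delta, weight = 0 must
# reproduce W exactly. Any output change at weight 0 therefore means
# the extension is patching the model some other way.
def apply_lora(w, delta, weight):
    return [wi + weight * di for wi, di in zip(w, delta)]

w = [0.5, -1.25, 3.0]
delta = [0.1, 0.2, -0.3]
assert apply_lora(w, delta, 0.0) == w  # weight 0 is a true no-op
```

So whatever changed the output here, it wasn't the advertised weight multiplier.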
Some mysterious things are happening here. I’m curious, though: why did you set it to come in on step 21 when you only have 20 steps? Wouldn’t that prevent it from running? I’m missing something.
Yeah, that was the point. Not having it come in until a step that never happens should give output identical to not having it in the prompt at all, but it doesn’t seem to.
I see. Strange. I don’t know how it works, clearly.
If you figure this out, I would like to know how! I think it’s possible but I’m not quite there yet.