Still haven’t found out how to use the LoRA. The best I get is this, and it’s just copy-pasta :/

Positive prompt: masterpiece, best quality, (photorealistic:1.4), office background, beautiful woman, skinny, (ginger:1.2) hair, drill hair, detailed face, perfect face, medium breasts, wearing bra, missionary anal, from above, spreading legs, “lora:PovMissionaryAnal-v6:0.7”, cum in pussy, cumdrip, “lora:Creampie_v11:0.5”,

Negative prompt: pubic hair, text, cartoon, painting, illustration, (worst quality, low quality, normal quality:2), FastNegativeV2
Steps: 60, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 512x768, Model hash: b265e017da, Model: perfectdeliberate_v40, Lora hashes: “PovMissionaryAnal-v6: 939d1957d339, Creampie_v11: 9d5a344a5367”, TI hashes: “FastNegativeV2: a7465e7cc2a2”, Version: v1.5.1
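(Note: the pasted prompt shows the LoRA tags in quotes because the angle brackets got eaten by the page’s text formatting. In A1111 the syntax is `<lora:name:weight>`. As a minimal sketch, here is how such tags could be picked out of a prompt with a regex — this is a simplified assumption modeled on A1111’s extra-networks syntax, not its actual source code:)

```python
import re

# Simplified assumption modeled on A1111's <lora:NAME:WEIGHT> tag syntax;
# not A1111's real parser.
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def extract_loras(prompt: str):
    """Return (name, weight) pairs for every LoRA tag found in a prompt."""
    return [(name, float(weight)) for name, weight in LORA_TAG.findall(prompt)]

prompt = "beautiful woman, <lora:PovMissionaryAnal-v6:0.7>, <lora:Creampie_v11:0.5>"
print(extract_loras(prompt))
```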

  • @DudeWTF
    11 months ago

    It isn’t too hard to read how the scripts parse prompts. I haven’t gone into much detail when it comes to Stable Diffusion. The GUIs written in Gradio, like Oobabooga for text or Automatic1111, are fairly simple Python scripts. If you know the basics of code, like variables, functions, and branching, you can likely figure out how the text is parsed. This is the most technically correct way to figure this stuff out. Users tend to share a lot of bad information, especially in the visual arts space, and even more so if they use Windows.
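As a rough illustration of what that parsing looks like, here is a deliberately simplified sketch of how `(text:1.4)` emphasis syntax can be split out of a prompt. A1111’s real parser (`parse_prompt_attention` in `prompt_parser.py`) also handles nesting, bare `(text)` as ×1.1 and `[text]` as ÷1.1; this only covers the explicit-weight case:

```python
import re

# Matches the explicit-weight form (text:1.4) only; a simplified sketch,
# not A1111's actual prompt_parser logic.
WEIGHTED = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt: str):
    """Split a prompt into (chunk, weight) pairs; unweighted text gets 1.0."""
    result, pos = [], 0
    for m in WEIGHTED.finditer(prompt):
        if m.start() > pos:                      # plain text before the match
            result.append((prompt[pos:m.start()], 1.0))
        result.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):                        # trailing plain text
        result.append((prompt[pos:], 1.0))
    return result

print(parse_weights("masterpiece, (photorealistic:1.4), office background"))
```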

    Because the prompt parsing method is part of the script, if we don’t know what software you are using, it is hard to tell you what to do with certainty. I think most are compatible, but I don’t know for sure. In the LLM text space, things like characters are parsed differently across various systems.

    With Automatic1111, on the txt2img page, there is a small red icon under the image that opens a menu in the GUI and lists all the LoRAs you have placed in the appropriate LoRAs folder on the host system where you installed A1111. Most of the LoRAs you download that show up on the txt2img page will have a small circled “i” icon in one corner; this will usually contain a list of the text data that was used to train the LoRA. This text data was associated with each training image. These are the keywords that will trigger certain LoRA attributes.

    When you have this LoRA menu open, clicking on any of the entries automatically adds the tag that sets the strength of the LoRA’s influence on the prompt. This defaults to 1.0, but that is almost always too high; most of the time 0.2–0.7 works okay. You also need the main keyword that triggers the LoRA added somewhere in the prompt. This can be difficult to find unless you keep this information from the place you downloaded the LoRA from. Personally, I rename all of my LoRAs to whatever the keyword is.

    Also, you’re likely going to get a lot of LoRAs eventually. Get in the habit of putting an image representative of what each LoRA does in the LoRAs folder, named the same as the LoRA itself. A1111 will automatically add this image to each entry you see in the GUI menu. LoRAs are not hard to train, either. Try it some time. If you can generate images, you can train LoRAs.
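Following the preview-image tip above, here is a small hypothetical helper that scans a LoRA folder and reports which `.safetensors` files have no same-named preview image next to them. The folder path and accepted extensions are assumptions; adjust them to your install:

```python
from pathlib import Path

def missing_previews(lora_dir: str):
    """List LoRA files with no same-named image (.png/.jpg/etc.) beside them."""
    folder = Path(lora_dir)
    missing = []
    for model in folder.glob("*.safetensors"):
        # A1111 picks up a preview when an image shares the model's filename stem.
        if not any((folder / (model.stem + ext)).exists()
                   for ext in (".png", ".jpg", ".jpeg", ".webp")):
            missing.append(model.name)
    return sorted(missing)

# Example (path is an assumption for a typical A1111 install):
# print(missing_previews("models/Lora"))
```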

    • @hornydufusOP
      11 months ago

      Still learning about A1111, but slowly getting the hang of it, learning something new every day.

      When trying a new LoRA I often copy-paste a prompt from their example to see if everything works as it should. Then I read it in detail to find out how it works and combine it with some of “my” prompts. And you’re right, fewer prompt terms can be better. The page I used to start this new addiction used A TON of prompt terms; first I needed to find out how it all works.

      I’m not quite sure what you mean about the LoRA. Yes, I’m using A1111, and I know about the button that shows the LoRAs. The LoRAs worked great here. Though I must say the LoRA tags in my prompt above don’t include the “<” that is required in the prompt; that’s because the content disappears, it has something to do with the text formatting.