Stumbled upon a trippy effect with animations, where you can load in a wildcard file to change elements randomly for every frame while keeping other elements in the prompt steady. I'm posting these as MP4 links; hopefully that works better than the previous webm experiment. I'd recommend right-clicking the video and looping it if you have that option.

This animation alters each frame's prompt, varying the background, arm pose, hair color and style, and outfit. https://files.catbox.moe/fby09r.mp4

This one kept the pose prompt the same between frames, but fed in a new background, hair, and outfit for each frame. https://files.catbox.moe/mmmskp.mp4

This one just cycled through different hair styles and colors for each frame. https://files.catbox.moe/yfx27h.mp4

And this final one only changes the hair color and fabric pattern. https://files.catbox.moe/smijys.mp4
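The per-frame wildcard swap described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual extension code; the names `WILDCARDS`, `BASE_PROMPT`, and `frame_prompts` are mine, and in practice the pools would come from a wildcard `.txt` file with one entry per line.

```python
import random

# Hypothetical wildcard pools -- stand-ins for the entries in a wildcard file.
WILDCARDS = {
    "background": ["beach", "forest", "city street", "mountain lake"],
    "hair": ["long blonde hair", "short black hair", "red braided hair"],
    "outfit": ["summer dress", "denim jacket", "knit sweater"],
}

# Elements held steady across every frame.
BASE_PROMPT = "woman standing, looking at viewer"

def frame_prompts(n_frames, vary=("background", "hair", "outfit"), seed=0):
    """Build one prompt per frame: the fixed base plus freshly sampled wildcards."""
    rng = random.Random(seed)
    prompts = []
    for _ in range(n_frames):
        picks = [rng.choice(WILDCARDS[key]) for key in vary]
        prompts.append(", ".join([BASE_PROMPT, *picks]))
    return prompts

for i, prompt in enumerate(frame_prompts(4)):
    print(f"frame {i}: {prompt}")
```

Because the base stays identical while only the sampled elements change, the subject holds roughly steady from frame to frame while the randomized parts jump around, which is the effect in the clips above.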

  • @[email protected]
    link
    fedilink
    English
    311 months ago

    It changes more than what you tried limiting it to, though. In the last one, for example, the eye color changes to blue once and the size of the eyes "jiggles" slightly. Still very cool!

    • CavendishOP • 5 points • 11 months ago
      That’s just the nature of Stable Diffusion. I didn’t prompt anything about eye color, so the model falls back on its internal biases: on average, blonde hair = blue eyes and brown hair = brown eyes.

    • CavendishOP • 6 points • 11 months ago

      I’ve tried using fewer than 75 tokens (literally just “woman on beach wearing dress”) and the results weren’t coming out much more stable than my 300+ token monstrosity prompts that let me play OCD with fabric patterns and hair length and everything else. So I’m not sure why my experience differs so much from the conventional advice. I think the majority of the jumping comes from the dynamic prompts. Here’s one that didn’t change the prompt per-frame (warning: hands!) and it’s much more stable: https://files.catbox.moe/rgjbem.mp4. There are definitely a million knobs to fiddle with in these settings, and it’s all changing every day anyway, so it’s hard to keep up!
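      As an aside, dynamic-prompt tooling often also supports inline variants rather than a separate wildcard file. The sketch below assumes the `{opt1|opt2|...}` variant syntax used by dynamic-prompt extensions; the template string and `expand` helper are mine, for illustration only.

      ```python
      import random
      import re

      # Matches one non-nested {a|b|c} variant group.
      VARIANT = re.compile(r"\{([^{}]*)\}")

      def expand(template, rng):
          """Replace each {a|b|c} group with one randomly chosen option."""
          return VARIANT.sub(lambda m: rng.choice(m.group(1).split("|")), template)

      rng = random.Random(42)
      template = "woman on {beach|street|rooftop} wearing a {dress|jacket}, {blonde|black} hair"
      print(expand(template, rng))
      ```

      Calling `expand` once per frame with the same template is what makes the randomized elements jump between frames; reusing a single expanded result for every frame gives the more stable clip linked above.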

        • CavendishOP • 3 points • 11 months ago

          I’m running an Intel 12900K and an RTX 3090 with 24 GB of VRAM. Part of the hand issue may be that I’m pushing the resolution beyond spec, up to 768x960. At that resolution I can do 32 frames, plus it interpolates an additional 2 between each generated frame, for a total of 124 frames in the final output. I can go up to 48 frames before hitting out-of-memory errors, but at 48 I start getting two completely different scenes per clip.

          Haven’t tried adding control net into the mix yet. That’s a whole new bag of options that I’m not mentally prepared for.

            • CavendishOP • 2 points • 11 months ago

              I have the prompt padding on; without it I get two scenes with just 8 frames. Are you using the v1.5-2 motion model? That one seems to need the additional camera-movement LoRAs, otherwise you get very little movement. I went back to the v1.4 motion model, but it kind of stinks for realism. So far, I’ve only been happy with the text2image workflow. I haven’t gotten anything good from img2img.