I use gen AI tools to create and post images that I think are beautiful. 100% solar power. 🌤️
💃 Follow me on Pixelfed
Here’s the full-res starting image I made with Flux some months back.
I turned this one into a short video clip: https://lemmynsfw.com/post/22606879
Thanks! I can already tell that we have a mutually compatible aesthetic.
Beautiful! This whole community is looking really nice.
The image generators seem to have a hard time with continuity when something passes in front, like her arm in front of the bikini straps.
This looks straight out of an 80s B-movie. I dig it.
Definitely. Full nudes are okay, but wacky outfits are fun to experiment with. I like the clear plastic infill panels on this one.
Thank you, I’m glad you like them.
Sometimes the AI just really wants to make nipples.
I agree about the color. I keep refining and iterating as I go, so the images towards the end of a series always seem to be better than the earlier ones.
I really appreciate you noticing. I spent a ton of time, and a lot of trial and error, fine-tuning settings to get that volcano effect.
Thank you. Flux is surprisingly good at doing specific car models.
Would love to do this locally, but my machine would probably choke. This is from an online provider, KlingAI. When I experimented with Stable Diffusion AnimateDiff models, it took nearly 20 minutes for a low res, pretty cruddy looking 4 seconds, and 90% were straight up garbage.
I’ve only used the fp16 Flux models; they’ll do a 2-megapixel image in 2-ish minutes. I haven’t tried the compressed models, or the GGUF models geared towards lower-VRAM cards.
Thanks, the speed at which things are improving continues to amaze me. This isn’t even the full-resolution version; the one I have is a full 1080x1920, 60fps, uncompressed. It’s stunning.
This is a neat idea. I’ll have to look at segmentation nodes for Comfy.
I’ve been meaning to do this anyway, thanks for the reminder!
Yeah, I couldn’t bring myself to fuzz the pictures up too much more. We’ll just ignore the occasional flat screen too. Hah!
I’m using a Flux 1 Dev variant called New Reality. https://civitai.com/models/161068?modelVersionId=979329
Thank you! I had to step away from the image ai stuff for a while, but hope to post more soon.
Yes, this is all local and self-hosted.
On the hardware front, I’m running an Intel 12900k, an Nvidia 3090, and 64GB of RAM. The workflow is all done with ComfyUI: I start with a single image from the Flux 1 Dev model (text-to-image), then pass that to a new video model called WAN 2.1 for the image-to-video step.
The Flux image takes about 2 minutes to generate but I make them big, about 2.5 megapixels with an upscale/finetune pass. The video takes about 3-4 hours to output 10 seconds of 720p at 15fps. Any larger res or more frames and I get out of memory errors. After that, I run it through a script I wrote using a frame interpolator called RIFE to get it up to 30fps.
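For anyone curious what that last interpolation step looks like, here’s a rough sketch of the 15fps-to-30fps RIFE pass as shell commands built in Python. I’m not posting my actual script; the `rife-ncnn-vulkan` binary name and flags here are assumptions based on the common CLI build, and your paths will differ.

```python
def build_pipeline(src="wan_720p_15fps.mp4", out="wan_720p_30fps.mp4",
                   src_fps=15, target_fps=30):
    """Return shell commands for one RIFE 2x frame-interpolation pass.

    Hypothetical filenames; RIFE doubles the frame count per pass,
    so 15fps -> 30fps is exactly one pass.
    """
    assert target_fps == src_fps * 2, "one RIFE pass only doubles the fps"
    return [
        # 1. Dump the source video to numbered PNG frames.
        f"ffmpeg -i {src} frames/%06d.png",
        # 2. Interpolate: RIFE writes twice as many frames (assumed CLI flags).
        "rife-ncnn-vulkan -i frames -o interp",
        # 3. Reassemble the interpolated frames at the doubled frame rate.
        f"ffmpeg -framerate {target_fps} -i interp/%06d.png "
        f"-c:v libx264 -pix_fmt yuv420p {out}",
    ]

for cmd in build_pipeline():
    print(cmd)
```

Going higher than 30fps would just mean running step 2 again before reassembling.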
If you don’t need NSFW and don’t mind a Chinese company doing the video, I’ve gotten great image-to-video results from KlingAI in just a few minutes. The company behind Flux is also working on a local i2v model, and there’s another new one called HunyuanVideo that just came out a few days ago but I haven’t tried it.