My Reference Latent Node including Auto Masking and Timesteps per image is out tomorrow
https://redd.it/1t5s662
@rStableDiffusion
Clippy Reloaded - a really sarky, useful Clipboard node with no clicks.
https://redd.it/1t5ujla
@rStableDiffusion
LTX 2.3 is pretty much all I use for video gen at this point. Now I'm going to post stuff about it.
https://redd.it/1t61x0m
@rStableDiffusion
Testing LTX 2.3 v1.1 distilled on my GPU. Pretty decent for creating UGC content or short TikTok vlogs.
https://redd.it/1t643be
@rStableDiffusion
LTX 2.3 Slow Motion
Does anyone know how to stop LTX 2.3 image-to-video from being slow motion? I am using the default workflow in ComfyUI and have tried both the dev and distilled checkpoints and LoRAs, and experimented with CFG, LoRA weight, prompts, etc. More often than not, the video is in slow motion, for both 5- and 10-second clips at 25 FPS.
https://redd.it/1t66h1d
@rStableDiffusion
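One common cause of apparent slow motion in image-to-video workflows is a mismatch between the frame count the model generates and the FPS the clip is encoded at. This is a hedged sanity-check sketch (plain arithmetic, not LTX- or ComfyUI-specific API code; the function names are illustrative):

```python
def apparent_speed(native_fps: float, encode_fps: float) -> float:
    """Apparent playback speed when frames sampled at native_fps
    are encoded at encode_fps. Values below 1.0 mean slow motion."""
    return encode_fps / native_fps


def frames_needed(duration_s: float, fps: float) -> int:
    """Frames required for a real-time clip of duration_s seconds at fps."""
    return round(duration_s * fps)


# A 5-second clip at 25 FPS needs 125 frames; if those same frames are
# encoded at 12.5 FPS instead, motion plays back at half speed.
print(frames_needed(5, 25))        # 125
print(apparent_speed(25, 12.5))    # 0.5 -> slow motion
print(apparent_speed(25, 25))      # 1.0 -> real time
```

If the workflow's video-combine step defaults to a lower frame rate than the sampler's frame count implies, every clip will look slowed down regardless of CFG or LoRA settings, so checking the encode FPS first is a cheap diagnostic.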
I built a tool to mix two artists on one image with region masks — Van Gogh + Picasso, no training, arbitrary refs
https://redd.it/1t65bi8
@rStableDiffusion
SenseNova U1 Interleaved Output: From Single Prompt to Consistent Visual Set
https://redd.it/1t6aamt
@rStableDiffusion
Open-sourcing Banodoco Hivemind: 1M+ Discord messages from artists and engineers working deeply with open image/video models, packaged as an agent skill
https://redd.it/1t6amma
@rStableDiffusion
Qwen 3.5 in ComfyUI + Align Tool & Pixaroma Nodes Updates (Ep16)
https://www.youtube.com/watch?v=IEH02ZQy0zY
https://redd.it/1t6dd3h
@rStableDiffusion
In this ComfyUI tutorial, I show how to use the Qwen 3.5 vision-language model to generate prompts from text or images directly inside ComfyUI, plus major updates for Pixaroma Nodes including the new Align Tool, improved Preview Image node, updated Note node…
Removed by Reddit
Removed by Reddit on account of violating the content policy.
https://redd.it/1t6dc3h
@rStableDiffusion
From the sdforall community on Reddit: [ Removed by Reddit ]
Posted by Successful_Rip3025 - 4 votes and 0 comments
LTX 2.3 Sulphur Uncensored Model - no need for external LoRAs
https://redd.it/1t6dh78
@rStableDiffusion
Working on a technique to produce style LoRAs from a single image. Post yours and I'll train it for Klein 9b!
https://redd.it/1t6gmqn
@rStableDiffusion