I fine-tuned Qwen3-1.7B to imitate the original Z-Image text encoder. 21% less VRAM
https://redd.it/1t71hvm
@rStableDiffusion
LTX 2.3 ID-LoRA with First-Last Frame
The official ComfyUI ID-LoRA workflow for LTX-Video 2.3 only supports first-frame conditioning out of the box, which limits how much control you have over character motion and pose. I wanted to add last-frame support with minimal changes to the original — no restructuring, no new samplers, just surgical node edits. You can grab the modified workflow here.
What was changed:

The default workflow uses LTXVImgToVideoInplace (comfy-core) for image conditioning in both the low-res and high-res sampling passes. This node only handles a single frame at a fixed position. The fix was to swap both instances out for LTXVImgToVideoInplaceKJ from KJNodes, which supports multiple images at arbitrary frame positions in a single call.

Concretely:

1. Added last-frame preprocessing — two new nodes mirror the existing first-frame preprocessing pipeline: a ResizeImagesByLongerEdge (1536px) followed by LTXVPreprocess. These feed the last-frame image into both sampling passes.
2. Low-res pass — the LTXVImgToVideoInplace node was replaced with LTXVImgToVideoInplaceKJ configured for 2 images: first frame at position 0, last frame at position -1, both at strength 0.7. One node, both frames conditioned simultaneously.
3. High-res pass — the same conversion was applied to the conditioning node after LTXVLatentUpsampler. Both frames are re-conditioned at strength 1.0 so the last frame gets sharpened in the upscale pass just like the first frame. Without this step the last frame came out noticeably blurrier.
4. New subgraph input — a last_frame image input was added to the workflow's subgraph, wired to a LoadImage node on the canvas.

That's it — 2 node type swaps, 2 preprocessing nodes, 1 new input. Everything else (sampler, audio conditioning, LoRA stacking, the upscale pipeline) is untouched from the official Comfy Cloud release. Let me know if you have any questions. Cheers!
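The position arguments described above follow Python-style negative indexing, where position -1 resolves to the last frame. A minimal sketch of that resolution logic, making no assumptions about how KJNodes actually implements it internally:

```python
def resolve_frame_positions(positions, num_frames):
    """Map possibly-negative frame positions to absolute frame indices.

    Mirrors Python sequence indexing: 0 is the first frame,
    -1 is the last frame, -2 the second-to-last, and so on.
    """
    resolved = []
    for pos in positions:
        idx = pos if pos >= 0 else num_frames + pos
        if not 0 <= idx < num_frames:
            raise ValueError(f"position {pos} out of range for {num_frames} frames")
        resolved.append(idx)
    return resolved

# First frame at position 0, last frame at position -1, for a 97-frame clip:
print(resolve_frame_positions([0, -1], 97))  # [0, 96]
```

This is why a single -1 works regardless of the video length you set: the last-frame conditioning always lands on the final latent frame without hardcoding its index.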
https://redd.it/1t71x0r
@rStableDiffusion
Why is it that the 3-year-old SDXL is still the best base for porn checkpoints, with the best ones on Civitai producing materially better images than the Z-Image or Flux porn checkpoints in terms of realism and skin texture?
https://redd.it/1t71cs5
@rStableDiffusion
Continuous-Time Distribution Matching: A new SOTA method for step distillation.
https://redd.it/1t76p1t
@rStableDiffusion