LTX2.3 (Distilled) - Updated sigmas for better results (?)
Hey y'all,
I was playing around with the LTX2.3 distilled sigmas for the first KSampler and tried tweaking them for a bit of fun, and I think I've stumbled upon updated sigmas that give me better quality, detail, and prompt adherence.
I've been using LTX2.3 since it came out and never really questioned the "official" sigmas that come with the original workflow, but today I tried tweaking them and I'm really liking the results I'm getting.
This is all T2V; I have not tried it with I2V, so I'm not sure how it would affect the results there.
The original sigmas for the first KSampler (8 steps) are: 1.0, 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875, 0.0
After a bit of testing, I've settled on these new sigmas: 1.0, 0.995, 0.99, 0.9875, 0.975, 0.65, 0.28, 0.07, 0.0
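To see what actually changed, it helps to look at the per-step sigma drops: the old schedule saves its biggest jump for the very last step, while the new one puts the big drops in the middle and finishes gently. A quick sketch in plain Python (just arithmetic on the values above, nothing ComfyUI-specific):

```python
# Per-step sigma drops for both schedules. A larger delta means the
# sampler must remove more noise in that single step.
old = [1.0, 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875, 0.0]
new = [1.0, 0.995, 0.99, 0.9875, 0.975, 0.65, 0.28, 0.07, 0.0]

for name, sigmas in (("old", old), ("new", new)):
    deltas = [a - b for a, b in zip(sigmas, sigmas[1:])]
    print(name, [round(d, 4) for d in deltas])

# The old schedule's largest drop is its final step (~0.42 straight to 0),
# while the new one peaks mid-schedule (~0.33 and ~0.37) and ends on a
# gentle 0.07 step, which plausibly explains the extra detail.
```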
I have made some comparisons that showcase the difference between the old and new sigmas, and I like how things turn out with the new ones.
All results are 1280 × 704 at 24 FPS, 5 seconds, Euler A sampler (16 GB of VRAM, so excuse the lower quality; Reddit compression also hurts a lot).
Left is with old sigmas, right is with new sigmas.
Sound is from the video with the new sigmas.
https://reddit.com/link/1sk8vhq/video/7gsjvdn15yug1/player
a muscular man with rolled-up sleeves and a leather apron leans over a metal workbench in a dimly lit industrial workshop, he presses an angle grinder against a large piece of steel, a cascade of bright orange and white sparks erupts and scatters across the floor, his forearms flex with the effort, face partially lit by the sparks and harsh overhead workshop lamp, sawdust and metal shavings on the floor, dark gritty background with shelving and hanging tools slightly out of focus, cinematic, shallow depth of field, photorealistic
Streamable link: *https://streamable.com/rwt3vl*
https://reddit.com/link/1sk8vhq/video/yn1qv1g55yug1/player
a heavily muscular man with short cropped hair and scarred knuckles wraps his hands in a dimly lit boxing gym, then steps up to a heavy bag and throws a hard combination of punches, the bag swings violently, sweat flying off his arms with each impact, harsh overhead fluorescent light, cinematic, photorealistic
Streamable link: *https://streamable.com/36b5nx*
https://reddit.com/link/1sk8vhq/video/a4ougyv17yug1/player
In a dark theater room, a ballerina wearing a typical ballerina outfit is dancing, moving gracefully on the stage. A spotlight is focused on her.
Streamable link: *https://streamable.com/jwey0a*
https://reddit.com/link/1sk8vhq/video/p8ip8l5d5yug1/player
a tall dark-haired muscular man in a fitted black shirt behind a moody speakeasy bar grabs a shaker, tosses it spinning in the air, catches it smoothly and slams it on the bar, then leans forward on both hands looking directly into camera, neon backlit bottles, dark atmospheric lighting, cinematic, photorealistic
Streamable link: *https://streamable.com/qhycpa*
https://reddit.com/link/1sk8vhq/video/belte2og5yug1/player
A beautiful woman with long blonde hair, wearing a long white dress flowing in the wind is walking by a cliff, looking ethereal, looking in the distance. The sound of waves crashing down below can be heard. She is barefoot, walking through tall grass. The sun is casting beautiful lights and shadows on the scene.
Streamable link: *https://streamable.com/hz2fu5*
These are just a few short examples that weren't cherry-picked.
Not sure what this is worth, but thought I would share.
https://redd.it/1sk8vhq
@rStableDiffusion
Inpaint workflows for z-image, qwen and flux fill onereward
Hi!
A couple of days ago I uploaded and shared here my 2 edit/inpaint workflows plus a Z-Image txt/img2img one.
Well, today I have uploaded the rest of the workflows I personally use; everything else I use is simple enough that it's not worth sharing.
I also updated the Z-Image txt/img2img workflow, as it was missing the ability to use a denoise lower than 1.
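For context, the usual way denoise below 1 works in ComfyUI-style schedulers is by computing a longer sigma schedule and keeping only its low-noise tail, so the input latent is only partially noised before sampling. A rough illustration of the idea, not the workflow's actual node graph:

```python
# Rough sketch of how denoise < 1 is commonly handled: stretch the
# schedule, then keep only the tail. The first kept sigma controls how
# much noise is added to the input latent, so lower denoise values stay
# closer to the original image. `make_sigmas(steps)` is a hypothetical
# stand-in for any scheduler returning steps + 1 sigma values.
def sigmas_for_denoise(make_sigmas, steps, denoise):
    if denoise >= 1.0:
        return make_sigmas(steps)          # full schedule, full noise
    total_steps = int(steps / denoise)     # e.g. 8 steps at 0.5 -> 16
    sigmas = make_sigmas(total_steps)      # len == total_steps + 1
    return sigmas[-(steps + 1):]           # low-noise tail only
```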
What I have newly uploaded are 3 traditional inpaint workflows:
- An update to my previous Flux Fill workflow; this one is what I use nowadays. I only use the onereward fine-tune and I also trimmed some methods I don't use anymore, so I left the previous version uploaded in case someone wants the full version.
- A Z-Image based version of the same inpaint logic as the Flux one, with the proper changes for the specific model.
- Same with Qwen Image. Note that there are now two: the Qwen Image Edit based edit/inpaint workflow I previously uploaded, and this one, based on Qwen Image and the InstantX ControlNet for traditional inpaint. In some cases it is better to use an editing/inpaint approach and in others pure inpaint, so you have both options.
What makes my inpaint workflows different? Well, I haven't checked all inpaint workflows out there, but so far I have seen that:
- Default templates and example workflows use the full image directly, without crop&stitch logic, which is practically useless.
- Most users' inpaint workflows follow the crop-resize-sample-resize-paste logic, as do mine. But most people use the inpaint crop&stitch custom nodes, which are great, as they are an easy-to-use all-in-one solution for that pipeline.
I use the old Masquerade nodes, which separate each function into individual nodes, allowing greater control over how the cropping is done. Of course it is a bit of a pain to set up, but I have already done that for you. The cropped region's aspect ratio is calculated based on the mask shape, and it is scaled to a total pixel count, which makes it easier to match the model's optimal resolution; you also get some extra outputs to work with. It is all packed into a subgraph, with the advantage of being able to dive in and debug should the behavior need to be tweaked for some special case. Add to that a centralized control panel and a selection of model-specific sampler/scheduler nodes, everything ordered in groups, with comments and usage notes.
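To make the pipeline concrete, here is a minimal numpy/PIL sketch of the crop-resize-sample-resize-paste idea. The `sample()` callback and the exact padding/scaling choices are hypothetical stand-ins; the actual workflows do all of this with Masquerade nodes inside the subgraph:

```python
import numpy as np
from PIL import Image

def inpaint_cropped(image, mask, sample, target_pixels=1024 * 1024, pad=32):
    """Crop to the masked region, sample at the model's optimal size,
    then stitch the result back. Assumes uint8 RGB `image` (H, W, 3)
    and a binary `mask` (H, W); `sample(crop, crop_mask, size)` is a
    hypothetical stand-in for the actual model/sampler call."""
    # Bounding box of the mask, padded so the model sees some context.
    ys, xs = np.nonzero(mask)
    y0, y1 = max(int(ys.min()) - pad, 0), min(int(ys.max()) + pad, image.shape[0])
    x0, x1 = max(int(xs.min()) - pad, 0), min(int(xs.max()) + pad, image.shape[1])
    crop, crop_mask = image[y0:y1, x0:x1], mask[y0:y1, x0:x1]

    # Scale so the total pixel count matches the model's optimal
    # resolution while keeping the mask-derived aspect ratio.
    h, w = crop.shape[:2]
    scale = (target_pixels / (h * w)) ** 0.5
    size = (int(w * scale), int(h * scale))  # PIL wants (width, height)

    result = sample(crop, crop_mask, size)   # model pass at optimal size

    # Resize back to the crop's native size and paste only masked pixels.
    result = np.asarray(Image.fromarray(result).resize((w, h), Image.LANCZOS))
    out = image.copy()
    out[y0:y1, x0:x1] = np.where(crop_mask[..., None] > 0, result, crop)
    return out
```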
As always, my hope is that they serve you as well as they serve me.
They are all here: https://ko-fi.com/botoni/shop/workflow
Free and no login necessary; it asks for an e-mail, but it won't check whether it exists. If you enter your real one, it should notify you of workflow updates and new posts.
https://redd.it/1ska9uv
@rStableDiffusion
Z Image Turbo + GrainScape UltraReal + American Consistent Character
https://redd.it/1skbvr4
@rStableDiffusion
Update: Distilled v1.1 is live
We've pushed an LTX-2.3 update today. The Distilled model has been retrained (now v1.1) with improvements to audio quality and a slightly refined visual aesthetic. It's available on HuggingFace alongside the previous Distilled version.
Along with the new checkpoint, we've also retrained the distilled LoRA, updated all four ComfyUI example workflows, and refreshed the union control and motion tracking IC-LoRA checkpoints to work with the new base model (these replace the previous versions in place).
No major architecture changes, just refinement across the board. Files are live now. Would love to hear your impressions, especially on the audio side.
And stay tuned, more updates are on the way.
https://redd.it/1skds12
@rStableDiffusion
Free AI Voice Cloning with Qwen3 TTS — Google Colab Notebook (works on free tier, no GPU needed)
I've been using Qwen3 TTS for a couple of months now and figured I'd share a Colab notebook I put together for it. I know most of you have probably seen the model already, but setting it up locally can be a hassle if you don't have the right GPU, so this might save someone some time.
The notebook runs on the free Colab tier, no API keys or anything like that — just open and run.
Colab notebook: https://colab.research.google.com/drive/1JOebp3hwtw8BVeosUwtRj4kpP67sBx35
GitHub: https://github.com/QwenLM/Qwen3-TTS
For a local install without using the terminal, Pinokio works well too: https://pinokio.computer
___________________
Also recorded a walkthrough if anyone needs it: https://www.youtube.com/watch?v=QmfiU8V5xq4
https://redd.it/1skeqk5
@rStableDiffusion
Turning Anime into Real and testing Klein9b vs Qwen Edit 2511 (Workflow Included)
https://redd.it/1skaqt5
@rStableDiffusion
AnimaYume - Anima finetune.
AnimaYume is a text-to-image model fine-tuned from [Anima](https://huggingface.co/circlestone-labs/Anima), a high-quality anime-style image generation model developed by [CircleStone Labs](https://huggingface.co/circlestone-labs). It builds upon [Cosmos 2](https://research.nvidia.com/labs/dir/cosmos-predict2/), a model developed by NVIDIA’s research team.
"For version 0.4:
* This version was trained on Anima Preview 3 using a custom dataset. In this release, I improved prompt understanding and artist style. Based on my testing, some artist styles match my expectations, although I haven’t tested everything in detail since I’m currently quite busy :<. Additionally, I fixed several issues from Anima Preview 3 that also appeared in Preview 2." [AnimaYume - v0.4 | Anima Checkpoint | Civitai](https://civitai.com/models/2385278/animayume?modelVersionId=2851312)
https://preview.redd.it/gf5sg4htezug1.png?width=2048&format=png&auto=webp&s=c749b214b11a6aefffedfe0c2751dfe4baa96953
[AnimaYume HF](https://huggingface.co/duongve/AnimaYume)
https://redd.it/1skfebq
@rStableDiffusion
New WAN 2.2 Lightx2v speed lora 260412
Barely tested, hoping to get some feedback.
Official full model: lightx2v/Wan2.2-Distill-Models at main
Scaled fp8 and extracted LoRA: obsxrver/wan2.2-i2v-lightx2v-260412 at main
https://redd.it/1skkotf
@rStableDiffusion
I made a playable ping pong game where every frame is AI-generated. This is an interactive diffusion model I made from scratch.
https://redd.it/1skmmnp
@rStableDiffusion
Does anyone recognize what artist tags the user @Magnus_waifu on Twitter/X might be using for their images?
https://redd.it/1skox6h
@rStableDiffusion