Inpaint workflows for Z-Image, Qwen and Flux Fill OneReward

Hi!

A couple of days ago I uploaded and shared here my two edit/inpaint workflows plus a Z-Image txt/img2img one.

Well, today I have uploaded the rest of the workflows I personally use; everything else I use is simple enough that it isn't worth sharing.

I also updated the Z-Image txt/img2img workflow, as it was missing the functionality to use a denoise lower than 1.

What I have newly uploaded are 3 traditional inpaint workflows:

- An update to my previous Flux Fill workflow; this one is what I use nowadays. I only use the OneReward fine-tune, and I also trimmed some methods I don't use anymore, so I left the previous version uploaded in case someone wants the full version.

- A Z-Image based version of the same inpaint logic as the Flux one, with the proper changes for the specific model.

- Same with Qwen Image. Note that there are two options: the Qwen Image Edit based edit/inpaint workflow I previously uploaded, and this one, based on Qwen Image and the InstantX controlnet for traditional inpaint. In some cases it is better to use an editing/inpaint approach and in others pure inpaint, so you have both options.

What makes my inpaint workflows different? Well, I haven't checked all inpaint workflows out there, but so far I have seen that:

- Default templates or example workflows use the full image directly, without crop&stitch logic, which makes them practically useless.

- Most users' inpaint workflows follow the crop-resize-sample-resize-paste logic, as do mine. But most people use the inpaint crop&stitch custom nodes, which are great, as they are an easy-to-use, all-in-one solution for the aforementioned pipeline.

I use the old masquerade nodes instead, which separate all the functions into individual nodes, allowing for greater control over how the cropping is done. It is a bit of a pain to set up, of course, but I have already done that for you. The cropped region's aspect ratio is calculated from the mask shape and then scaled to a total pixel count, which makes it easier to match the model's optimal resolution, and you get some extra outputs to work with. It is all packed into a sub-graph, with the advantage that you can dive in and debug it should the behavior need to be tweaked for some special case. Add to that a centralized control panel, specific sampler/scheduler nodes for each model, everything ordered into groups, and comments with usage notes.
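To make that concrete, here is a minimal Python/PIL sketch of the same crop-resize-sample-resize-paste idea. It is only an illustration of the logic, not the actual node graph: `sample_fn` stands in for whatever model call does the inpainting, and `target_pixels`, `padding` and the rounding `multiple` are hypothetical defaults you would tune per model.

```python
import numpy as np
from PIL import Image


def crop_region_from_mask(mask, target_pixels=1024 * 1024, padding=32, multiple=8):
    """Bounding box around the mask plus a working size whose aspect ratio
    follows the mask and whose total pixel count is close to target_pixels."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("mask is empty")
    # Grow the mask's bounding box by a context margin, clamped to the image.
    x0 = max(int(xs.min()) - padding, 0)
    x1 = min(int(xs.max()) + padding, mask.shape[1])
    y0 = max(int(ys.min()) - padding, 0)
    y1 = min(int(ys.max()) + padding, mask.shape[0])
    w, h = x1 - x0, y1 - y0
    # Scale the crop so its total pixel count matches the model's sweet spot,
    # rounding each side to a multiple the VAE accepts.
    scale = (target_pixels / (w * h)) ** 0.5
    work_w = max(multiple, round(w * scale / multiple) * multiple)
    work_h = max(multiple, round(h * scale / multiple) * multiple)
    return (x0, y0, x1, y1), (work_w, work_h)


def inpaint_crop_and_stitch(image, mask, sample_fn):
    """Crop -> resize -> sample -> resize back -> paste."""
    mask_l = mask.convert("L")
    mask_arr = np.array(mask_l) > 127
    (x0, y0, x1, y1), work_size = crop_region_from_mask(mask_arr)
    crop = image.crop((x0, y0, x1, y1)).resize(work_size, Image.LANCZOS)
    crop_mask = mask_l.crop((x0, y0, x1, y1)).resize(work_size, Image.LANCZOS)
    # sample_fn is a placeholder for the actual inpainting model call.
    result = sample_fn(crop, crop_mask).resize((x1 - x0, y1 - y0), Image.LANCZOS)
    # Paste back through the original mask so untouched pixels stay identical.
    out = image.copy()
    out.paste(result, (x0, y0), mask_l.crop((x0, y0, x1, y1)))
    return out
```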

As always, my hope is that they serve you as well as they serve me.

They are all here: https://ko-fi.com/botoni/shop/workflow

Free and no login necessary; it asks for an e-mail, but it won't check whether it exists. If you put your real one, it should notify you of workflow updates and new posts.


https://redd.it/1ska9uv
@rStableDiffusion
Update: Distilled v1.1 is live

We've pushed an LTX-2.3 update today. The Distilled model has been retrained (now v1.1) with improvements to audio quality and a slightly refined visual aesthetic. It's available on HuggingFace alongside the previous Distilled version.

Along with the new checkpoint, we've also retrained the distilled LoRA, updated all four ComfyUI example workflows, and refreshed the union control and motion tracking IC-LoRA checkpoints to work with the new base model (these replace the previous versions in place).

No major architecture changes, just refinement across the board. Files are live now. Would love to hear your impressions, especially on the audio side.

And stay tuned, more updates are on the way.

https://redd.it/1skds12
@rStableDiffusion
Free AI Voice Cloning with Qwen3 TTS — Google Colab Notebook (works on free tier, no GPU needed)

I've been using Qwen3 TTS for a couple of months now and figured I'd share a Colab notebook I put together for it. I know most of you have probably seen the model already, but setting it up locally can be a hassle if you don't have the right GPU, so this might save someone some time.

The notebook runs on the free Colab tier, no API keys or anything like that — just open and run.

Colab notebook: https://colab.research.google.com/drive/1JOebp3hwtw8BVeosUwtRj4kpP67sBx35
GitHub: https://github.com/QwenLM/Qwen3-TTS
For a local install without the terminal, Pinokio works well too: https://pinokio.computer

___________________

Also recorded a walkthrough if anyone needs it: https://www.youtube.com/watch?v=QmfiU8V5xq4

https://redd.it/1skeqk5
@rStableDiffusion
AnimaYume - Anima finetune.

AnimaYume is a text-to-image model fine-tuned from [Anima](https://huggingface.co/circlestone-labs/Anima), a high-quality anime-style image generation model developed by [CircleStone Labs](https://huggingface.co/circlestone-labs). It builds upon [Cosmos 2](https://research.nvidia.com/labs/dir/cosmos-predict2/), a model developed by NVIDIA’s research team.

"For version 0.4:

* This version was trained on Anima Preview 3 using a custom dataset. In this release, I improved prompt understanding and artist style. Based on my testing, some artist styles match my expectations, although I haven’t tested everything in detail since I’m currently quite busy :<. Additionally, I fixed several issues from Anima Preview 3 that also appeared in Preview 2." [AnimaYume - v0.4 | Anima Checkpoint | Civitai](https://civitai.com/models/2385278/animayume?modelVersionId=2851312)

https://preview.redd.it/gf5sg4htezug1.png?width=2048&format=png&auto=webp&s=c749b214b11a6aefffedfe0c2751dfe4baa96953

[AnimaYume HF](https://huggingface.co/duongve/AnimaYume)

https://redd.it/1skfebq
@rStableDiffusion