I built a local GUI + AI builder for creating ComfyUI custom node packs
I've been working on ComfyUI Node Builder, a local app for building custom ComfyUI nodes without hand-writing all the boilerplate every time.
The demo shows:
1. user describes a node idea
2. AI creates the node contract and Python
3. dependencies/files are updated
4. the pack is deployed and tested in ComfyUI
It is open-source and runs locally. The AI Builder can create nodes, edit generated files, explain validation errors, and run checks; it only requests a deploy when deploy permission is enabled.
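For anyone who hasn't written a node pack before, this is roughly the boilerplate the builder is meant to spare you. A minimal hand-written sketch of a ComfyUI node (the `BrightnessAdjust` name and behavior are just an illustration, not output from the tool):

```python
# Minimal ComfyUI custom node sketch (illustrative only, not generated by the builder).
# A pack defines node classes plus the mapping tables ComfyUI scans on startup.
import torch


class BrightnessAdjust:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 2.0, "step": 0.05}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "image/adjust"

    def apply(self, image: torch.Tensor, strength: float):
        # ComfyUI images are float tensors in [0, 1] with shape [batch, height, width, channels].
        return (torch.clamp(image * strength, 0.0, 1.0),)


# Registration tables read from the pack's __init__.py.
NODE_CLASS_MAPPINGS = {"BrightnessAdjust": BrightnessAdjust}
NODE_DISPLAY_NAME_MAPPINGS = {"BrightnessAdjust": "Brightness Adjust"}
```

As I read the demo, the "node contract" step corresponds to pinning down INPUT_TYPES and RETURN_TYPES before the function body is filled in.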
GitHub:
https://github.com/caoool/comfyui-node-canvas
Landing page:
https://caoool.github.io/comfyui-node-canvas/
Node ideas and feedback:
https://github.com/caoool/comfyui-node-canvas/issues/2
I'd especially like feedback from people who build custom nodes: what node authoring workflow should this support next?
https://redd.it/1tbk8zv
@rStableDiffusion
OmniNFT: Modality-wise Omni Diffusion Reinforcement for Joint Audio-Video Generation
https://github.com/zghhui/OmniNFT
https://redd.it/1tbmfzm
@rStableDiffusion
LTX 2.3 INT8 Benchmarks (2x Faster on Ampere)
Saw some interest in INT8 for LTX 2.3 after my last post, so here are the resources.
>Quick Warning: INT8 acceleration is specifically effective for Ampere GPUs (e.g., RTX 3080 Ti). If you’re already rocking an RTX 5090, you can safely ignore this.
The setup is easy—only the model loading part of the workflow changes. Everything else stays the same.
https://preview.redd.it/p1kqwomsgu0h1.png?width=931&format=png&auto=webp&s=626a72c691107d452a492acb4e1f3c169c7490e1
Performance Gain:
Stock: 118.77s
INT8: 66.45s
Result: ~2x speedup 🚀
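I haven't looked inside the linked custom node, so purely for intuition: INT8 on Ampere means routing the linear layers through INT8 tensor cores instead of FP16/BF16. A generic, hedged sketch of what that looks like with torchao (the exact API varies by version, and this is not the linked node's code):

```python
# Generic W8A8 quantization sketch with torchao (illustrative; not the linked node's code).
# Ampere GPUs (RTX 30xx / A100) have INT8 tensor cores, which is where the speedup comes from.
import torch
from torchao.quantization import quantize_, int8_dynamic_activation_int8_weight

# Stand-in for the video model's transformer blocks.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 4096),
).to("cuda", dtype=torch.bfloat16)

# Swap Linear weights to INT8 and quantize activations on the fly.
quantize_(model, int8_dynamic_activation_int8_weight())

with torch.no_grad():
    out = model(torch.randn(1, 4096, device="cuda", dtype=torch.bfloat16))
```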
Links:
Weights & ComfyUI workflow
custom node
https://redd.it/1tbqxb5
@rStableDiffusion
LTX2.3 I2V Messing up the text details, anyone facing the same??
https://redd.it/1tbpd7h
@rStableDiffusion
LTX 2.3 adding unwanted subtitles in generated videos even when not mentioned in prompt
https://redd.it/1tbrsf7
@rStableDiffusion
Scenema Audio: Zero-shot expressive voice cloning and speech generation
https://redd.it/1tbzgi3
@rStableDiffusion
ComfyUI Pixaroma Nodes: New Load Image, Notify & Utility Nodes (Ep17)
https://www.youtube.com/watch?v=dXH7Qx9pzyc
https://redd.it/1tc2fuz
@rStableDiffusion
LTX 2.3 video generation notes after testing H100, RTX 5090, A100, L40, FP8, BF16, and CPU offload
This community helped me a lot in my last post so here's my contribution back. If you're looking to generate LTX 2.3 videos, these notes might save you a few hundred dollars on wasted cloud rentals.
H100:
- 5s distilled FP8, 704x1280, 121f: 48s
- 5s distilled no-quant, 704x1280, 121f: 45s
- 5s HQ/no-quant, 704x1280, 121f, 20 steps: 121s
- 20s HQ/no-quant, 704x1280, 481f, 20 steps: 321s
- 20s HQ/no-quant, 704x1280, 481f, 28 steps: 380-390s
RTX 5090:
- 5s distilled FP8, 704x1280, 121f: 43s
- 5s HQ FP8, 704x1280, 121f, 20 steps: 151s
- 20s distilled FP8, 704x1280, 481f: failed/OOM after 55s
- 20s distilled FP8, 576x1024, 481f: 104s
- 20s distilled, no quantization, CPU offload, 704x1280, 481f: 299s
A100:
- 5s image-conditioned, 704x1280: 401-425s
- 20s HQ/no-quant, 704x1280, 481f, 20 steps, serverless render step: 608s
- 20s HQ/no-quant, 704x1280, 481f, 20 steps, serverless remote total: 713s
- 20s HQ/no-quant, 704x1280, 481f, 20 steps, serverless local wall time: 797s
L40:
(I left a note about this in the lessons paragraph below.)
- 5s distilled, no quantization, CPU offload, 704x1280, 121f: 1199s
- 5s distilled FP8, 704x1280, 121f: 197s
- 20s distilled FP8, 704x1280, 481f, max batch 4: failed/OOM after 189s
- 20s distilled FP8 low-memory, 704x1280, 481f, max batch 1: 365s
- 20s distilled FP8 low-memory, 704x1280, 481f, repeated runs: 433-453s
Some lessons:
- For some reason, A100 output was worse than H100 output for the exact same setup. I generated around 20 videos on each GPU from the same cloud host, and the A100 results were always worse: the scenes were less realistic than on the H100.
- I did not like the 5090 results on distilled + FP8. Distilled with offloading to CPU RAM looks better.
- The L40 cloud I rented could generate 20s 704x1280 clips, but only with a lower-memory FP8 setup for some reason. I am guessing the cloud rental device was not in the best state.
- For spoken words, target around 45-52 words per 20 seconds (see the small helper after this list).
- Avoid ending on important words: the model sometimes cuts off the final syllable, so a short final sentence helps.
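To make the pacing rule concrete, here is a tiny helper for any clip length (my own arithmetic from the 45-52 words per 20 s guideline, not an official limit):

```python
# Word budget from the ~45-52 words per 20 seconds pacing guideline (~2.25-2.6 words/sec).
def word_budget(clip_seconds: float) -> tuple[int, int]:
    low, high = 45 / 20, 52 / 20  # words per second
    return round(low * clip_seconds), round(high * clip_seconds)


print(word_budget(5))   # (11, 13) words for a 5 s clip
print(word_budget(20))  # (45, 52) words for a 20 s clip
```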
I am still exploring this, so feel free to let me know if there's anything else I can test. Happy to contribute generated samples or examples if anyone wants them.
https://redd.it/1tc5s73
@rStableDiffusion
DramaBox - Most Expressive Voice model ever based on LTX 2.3
https://redd.it/1tc6i8w
@rStableDiffusion
SenseNova-U1 Technical Report: VAE-free Pixel-level Flow Matching with 32x Compression
https://redd.it/1tc2anx
@rStableDiffusion
PyTorch 2.12.0+cu132 (CUDA 13.2) — SA2/SA3 Attention Stability Benchmarks
With the release of PyTorch 2.12.0+cu132, I ran a full benchmark suite to verify that SA2 and SA3 attention backends are stable and working correctly in the new environment.
Tests were conducted on the following models:
* **flux1-krea-dev_fp8_scaled** — 20 steps, CFG 1, 1024×1024
* **flux-2-klein-base-9b-fp8** — 20 steps, CFG 5, 1280×1280
* **wan2.2_t2v_high/low_noise_14B_fp16 + lightx2v_4steps_lora** — 2+2 steps, CFG 1, 640×640
All backends (fp8_cuda, fp8pp_cuda, triton, SA3 standard, SA3 per_block_mean) are confirmed stable. Results in the charts below.
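For readers who haven't played with attention backends: in stock PyTorch the equivalent knob is the SDPA backend context manager, which is roughly what these dispatcher nodes automate per model. A plain-PyTorch illustration (PyTorch 2.3+; this is not the SA2/SA3 code itself):

```python
# Plain-PyTorch attention backend selection (illustrative; not the SA2/SA3 dispatcher code).
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# [batch, heads, sequence, head_dim]
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Force the Flash Attention kernel; swap in EFFICIENT_ATTENTION or MATH to compare backends.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
```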
The Krea model shows the biggest differences when switching between the SA2 and SA3 modes, but the quality is almost the same everywhere.
https://preview.redd.it/8v3quwkfyy0h1.png?width=3840&format=png&auto=webp&s=a38dcff0c402d1102425ababcf7e7ec7693eee09
https://preview.redd.it/b6lkjbfz0z0h1.jpg?width=6000&format=pjpg&auto=webp&s=d047b2fffe7ff4b444dc795f1d638ed8ce972678
The Klein model looks almost the same when switching from SA2 to SA3, but the plastic skin remains, which is down to the model itself. Speed is also nearly identical in all operating modes.
https://preview.redd.it/0ve393uoyy0h1.png?width=3840&format=png&auto=webp&s=107733601b7f0fe184b94d12d4677904df5273a5
https://preview.redd.it/21bfjzyv0z0h1.jpg?width=6000&format=pjpg&auto=webp&s=c4774218bd8b91e04ad4d04c2c1f27708f7213f7
The WAN 2.2 model behaved almost identically except in the sa3=standard and sa3=per_block_mean modes, where the video lost a little quality and changed slightly. The triton+standard mode slowed down unexpectedly.
https://preview.redd.it/p5dr6dv8zy0h1.png?width=3840&format=png&auto=webp&s=3600b2892299c8b84b7258dc9cb1608da5d64495
https://reddit.com/link/1tcd718/video/vzevp45kzy0h1/player
The main goal was achieved: everything works with the new PyTorch 2.12.0. I did not test other nodes for compatibility, but the ones I created work.
Download the latest SA2/SA3 (Windows): [https://github.com/Rogala/AI_Attention](https://github.com/Rogala/AI_Attention)
The ComfyUI node used for testing: [https://github.com/Rogala/ComfyUI-rogala](https://github.com/Rogala/ComfyUI-rogala)
Original node discussion thread: [https://www.reddit.com/r/StableDiffusion/comments/1ta0ewm/smartattentiondispatcher_comfyui_node_that/](https://www.reddit.com/r/StableDiffusion/comments/1ta0ewm/smartattentiondispatcher_comfyui_node_that/)
https://redd.it/1tcd718
@rStableDiffusion
Is it possible to FEEL real acting with Open Source AI Tools? ( A little experiment)
I spent two weeks working on this at my company for learning and research purposes, trying to see whether you can create compelling shots. In my opinion you can, and better than Seedance (emotion, not action). But you be the judge. I'll wait and see, and if anyone wants, I'll share my workflow.
Spaghetti Shortfilm by Arturo Pola
https://redd.it/1tcem8c
@rStableDiffusion
ComfyUI Node: Unified Image + Mask Resize (LTX 2.3 ready, keeps BOTH sides divisible by 32, replaces Image Resize + Image Resize V2 and fixes mask mismatch issues)
https://redd.it/1tci23f
@rStableDiffusion
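The title-only post still describes a concrete constraint, so here is a rough sketch of the divisible-by-32 idea: snap both target sides to multiples of 32 and resize image and mask with the same snapped size, which is what avoids the mask mismatch (my own illustration, not the node's actual code):

```python
# Rough sketch of a "both sides divisible by 32" resize (illustrative; not the node's code).
import torch
import torch.nn.functional as F


def snap32(x: int) -> int:
    # Round to the nearest multiple of 32, never below 32.
    return max(32, round(x / 32) * 32)


def resize_image_and_mask(image: torch.Tensor, mask: torch.Tensor, target_w: int, target_h: int):
    # image: [B, H, W, C] (ComfyUI layout), mask: [B, H, W]; both get the same snapped size.
    w, h = snap32(target_w), snap32(target_h)
    img = F.interpolate(image.permute(0, 3, 1, 2), size=(h, w), mode="bilinear", align_corners=False)
    msk = F.interpolate(mask.unsqueeze(1), size=(h, w), mode="nearest")
    return img.permute(0, 2, 3, 1), msk.squeeze(1)
```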