Thanks to the sub, my silly node and workflow got 3k downloads overnight, so I fixed some bugs, unified some features, and uploaded the latest and greatest version to HF.
What started as an internal tool for my secret project and custom workflow somehow went from ~160 downloads to 3000+ overnight after being shared here.
The ComfyUI Character Composer node is basically a structured procedural prompt system for Qwen workflows, focused on:
character consistency
scene composition
controllable generation
SFW JSON library, sketched below (but who the hell is JSON??)
unified txt2img + img2img workflow (just bypass the input "image1")
(you will rarely type or copy-paste from an LLM again)
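To make the JSON-library idea concrete, here is a hand-rolled sketch of the concept only; this is not the node's actual schema or code, and every key and trait below is made up:

```python
import random

# Hypothetical character/scene library; the real node ships its own SFW JSON schema.
LIBRARY = {
    "characters": {
        "aria": {
            "base": "young woman, silver hair, emerald eyes",
            "outfit": "weathered leather travel cloak",
        }
    },
    "scenes": ["misty forest at dawn", "neon-lit alley at night", "sunlit library interior"],
    "cameras": ["wide shot", "close-up portrait", "low-angle shot"],
}

def compose_prompt(character: str, seed: int | None = None) -> str:
    """Combine fixed character traits (for consistency) with sampled scene/camera
    terms (for controllable variety) into a single structured prompt string."""
    rng = random.Random(seed)
    char = LIBRARY["characters"][character]
    parts = [
        char["base"],
        f"wearing {char['outfit']}",
        rng.choice(LIBRARY["scenes"]),
        rng.choice(LIBRARY["cameras"]),
    ]
    return ", ".join(parts)

print(compose_prompt("aria", seed=42))
```

The point is that fixed traits keep the character stable across generations while the sampled terms vary the scene, so you stop hand-typing or pasting prompts from an LLM.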
https://preview.redd.it/71qoqvo28jzg1.png?width=1540&format=png&auto=webp&s=6f016a56bdbe5745129ba7eb105df1d7bffaf258
Built on top of the amazing Qwen ecosystem work by Phr00t:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO
Project:
https://huggingface.co/datasets/unh1nge/comfyui-character-composer
Currently improving the UX, simplifying the node, and preparing better docs/tutorials.
Really appreciate all the feedback and testing so far. I'm a newbie in the scene, so I'm still learning the best approaches and trying to keep up with the latest and greatest models, which isn't easy. Expect more in the future.
https://redd.it/1t5fu9l
@rStableDiffusion
3 hours of lora training completely wasted on Runpod. Any alternatives?
Decided to use RunPod to train a character LoRA. Uploaded the dataset, configured AI Toolkit, and selected an RTX 5090. Time to complete was 3 hours, which seems okay since it's training at 1024 pixels on 75 images for 7,500 steps.
Training completed, but when I went to download the LoRA files, the download speed was 50-60 kbps. A 300 MB file is not going to come down at 50-60 kbps.
Ran a speed test and my gigabit connection is perfectly fine. Tried various methods - runpodctl, SSH, hf_transfer - all topped out at no more than 60 kbps.
Will try again with a smaller dataset and fewer steps to see if it's a persistent issue.
In the meantime, is there any alternative to RunPod where I can run AI Toolkit?
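Since hf_transfer was also slow, the pod's outbound link may simply be throttled, but before re-training it can be worth pushing the finished LoRA to a Hugging Face repo from inside the pod and downloading it from the repo page instead. A minimal sketch with huggingface_hub; the repo id, token, and file path are placeholders:

```python
# Run inside the RunPod instance once training finishes (pip install huggingface_hub).
from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # write-scoped token; placeholder

# Create a private model repo if it does not already exist (repo id is a placeholder).
api.create_repo(
    repo_id="your-username/character-lora",
    repo_type="model",
    private=True,
    exist_ok=True,
)

# Upload the trained LoRA, then fetch it from the repo page or with hf_hub_download.
api.upload_file(
    path_or_fileobj="/workspace/output/character_lora.safetensors",  # placeholder path
    path_in_repo="character_lora.safetensors",
    repo_id="your-username/character-lora",
    repo_type="model",
)
```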
https://redd.it/1t5hw7p
@rStableDiffusion
LTX 2.3 is pretty much all I use for video gen at this point -- Scene from my current story-driven fantasy project -- Info on process/workflow in comments.
https://redd.it/1t5p0ae
@rStableDiffusion
How can I get Anima to work in StabilityMatrix?
I put the files where the instructions say, but I still get an error when generating images. Any solutions?
https://redd.it/1t5p1nj
@rStableDiffusion
My Reference Latent Node including Auto Masking and Timesteps per image is out tomorrow
https://redd.it/1t5s662
@rStableDiffusion
Clippy Reloaded - a really sarky but useful clipboard node with no clicking required.
https://redd.it/1t5ujla
@rStableDiffusion
LTX 2.3 is pretty much all I use for video gen at this point. Now I'm going to post stuff about it.
https://redd.it/1t61x0m
@rStableDiffusion
Testing LTX 2.3 v1.1 distilled on my GPU. Pretty decent for creating UGC content or short TikTok vlogs.
https://redd.it/1t643be
@rStableDiffusion
LTX 2.3 Slow Motion
Does anyone know how to stop LTX 2.3 image-to-video from coming out in slow motion? I'm using the default workflow in ComfyUI and have tried both the dev and distilled checkpoints and LoRAs. Experimented with CFG, LoRA weight, prompts, etc. More often than not the video is in slow motion, for both 5- and 10-second clips at 25 FPS.
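Not a model-side fix, but if the clips keep coming out slow, a blunt post-processing workaround is to retime them with ffmpeg's setpts filter. A minimal sketch calling ffmpeg from Python; the paths and the 2x factor are placeholders, and audio is dropped for brevity:

```python
import subprocess

def speed_up(src: str, dst: str, factor: float = 2.0, fps: int = 25) -> None:
    """Play the clip `factor` times faster: setpts=PTS/factor compresses timestamps,
    and -r keeps the output at the desired frame rate (requires ffmpeg on PATH)."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-filter:v", f"setpts=PTS/{factor}",
            "-r", str(fps),
            "-an",  # drop audio to keep the example short
            dst,
        ],
        check=True,
    )

speed_up("ltx_clip.mp4", "ltx_clip_2x.mp4", factor=2.0)
```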
https://redd.it/1t66h1d
@rStableDiffusion
I built a tool to mix two artists on one image with region masks — Van Gogh + Picasso, no training, arbitrary refs
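The post itself is just the announcement, but the core region-mask idea (blend two stylized renders of the same composition through a grayscale mask) can be sketched independently of the author's tool. A rough Pillow/NumPy illustration with placeholder file names, assuming all three images share the same resolution:

```python
# pip install pillow numpy
import numpy as np
from PIL import Image

# Two stylized renders of the same composition plus a grayscale region mask (placeholders).
style_a = np.asarray(Image.open("render_van_gogh.png").convert("RGB"), dtype=np.float32)
style_b = np.asarray(Image.open("render_picasso.png").convert("RGB"), dtype=np.float32)
mask = np.asarray(Image.open("region_mask.png").convert("L"), dtype=np.float32) / 255.0

# White mask regions take style_a, black regions take style_b; grays blend smoothly.
blended = style_a * mask[..., None] + style_b * (1.0 - mask[..., None])
Image.fromarray(blended.astype(np.uint8)).save("mixed_styles.png")
```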
https://redd.it/1t65bi8
@rStableDiffusion