LTX 2.3 - Image + Audio + Video ControlNet (IC-LoRA) to Video
https://redd.it/1shxv8n
@rStableDiffusion
Ace Step 1.5 XL ComfyUI automation workflow, no Ollama needed: generate random tags using Qwen, generate a song, and then rate it using waveform analysis
The idea came to me after sorting through a lot of Ace Step 1.5 XL outputs, trying to find the best styles and tags for songs. Why not automate the generation process AND the review process, or at least make it easier? So, as usual, I used Qwen LM and Qwen VL (unlike something like Ollama, these run directly in ComfyUI and don't require a separate server) to randomize the tags on each run and, more importantly, to try to rate the output. How? By converting the audio output into waveform images for four segments of the song, which I feed into Qwen VL as images, asking it to look at the waveforms subjectively and give feedback and a rating; that rating is then also used to name the output file. I am not sure it works perfectly, but the A+ rated songs were indeed better than the B rated ones.
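As a rough illustration of the waveform step, here is a standalone sketch (hypothetical function names, plain NumPy instead of the actual ComfyUI audio nodes) that splits a clip into four segments, renders each as a simple peak-amplitude bitmap of the kind you could feed a VLM, and bakes a rating into the output filename:

```python
# Standalone sketch of the waveform-rating idea (hypothetical names;
# the real workflow does this inside ComfyUI nodes).
import numpy as np

def waveform_image(seg: np.ndarray, width: int = 256, height: int = 64) -> np.ndarray:
    """Render one audio segment as a crude black/white waveform bitmap."""
    img = np.zeros((height, width), dtype=np.uint8)
    cols = np.array_split(seg, width)                    # one column per pixel
    peaks = np.array([np.abs(c).max() if len(c) else 0.0 for c in cols])
    peaks = peaks / (peaks.max() + 1e-9)                 # normalize to [0, 1]
    mid = height // 2
    for x, p in enumerate(peaks):
        h = int(p * (mid - 1))
        img[mid - h:mid + h + 1, x] = 255                # bar around the midline
    return img

def segment_waveforms(audio: np.ndarray, n_segments: int = 4):
    """Split the song into n_segments and render each as an image for the VLM."""
    return [waveform_image(seg) for seg in np.array_split(audio, n_segments)]

# Demo with a synthetic 4-second clip standing in for a generated song.
sr = 22050
t = np.linspace(0, 4, 4 * sr)
audio = 0.5 * np.sin(2 * np.pi * 220 * t) * np.exp(-t)  # decaying tone
images = segment_waveforms(audio)                        # 4 images to show Qwen VL
rating = "A+"                                            # in the workflow, Qwen VL returns this
filename = f"song_{rating}.flac"                         # rating baked into the filename
```

The real workflow renders proper waveform plots and gets the rating string back from Qwen VL; here the rating is just a placeholder.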
Workflow is here. Install the missing extensions and add the qwen models.
Here is part of the working flow, including output folder.
https://preview.redd.it/kpar4blijfug1.jpg?width=1280&format=pjpg&auto=webp&s=cf2b4e5491c8b237d29e9649d90d40c6172090a9
https://preview.redd.it/oxtxaf8kjfug1.jpg?width=1400&format=pjpg&auto=webp&s=643c100c7fe05bb5184551edd0b7a34d99476ddf
https://preview.redd.it/3old46smjfug1.jpg?width=1592&format=pjpg&auto=webp&s=07b366afe5ae259b11fbd86cf2332c56ab9192ea
https://redd.it/1shzm63
@rStableDiffusion
Just installed ForgeNeo and I'm facing this issue *failed to recognize model type*
https://redd.it/1si419g
@rStableDiffusion
[Release] ComfyUI Image Conveyor — sequential drag-and-drop image queue node
https://redd.it/1sibmrf
@rStableDiffusion
New nodes to handle/visualize bboxes
Hello community, I'd like to introduce some ComfyUI nodes I recently created, which I hope you find useful. They are designed to work with bounding boxes (bboxes) coming from face/pose detectors, but not only those. I searched but couldn't find any custom nodes that let you select particular bboxes per frame when processing videos with multiple people in them. The problem: a face detector detects people's face bboxes perfectly well, but when you want to use them for Wan 2.2 animation or other purposes, there is no way to choose which person's face to crop when multiple characters are present in the video/image. The detectors do their job just fine, but downstream processing of the bboxes they produce sometimes jumps from one person to another, causing inconsistency. My nodes let you pick a particular bbox per frame, so you can crop the chosen person's face precisely for Wan 2.2 animation even when several people are in the frame.
I haven't found any nodes that allow that so I created these for this purpose.
Please let me know if they would be helpful for your creations.
https://registry.comfy.org/publishers/masternc80/nodes/bboxnodes
Description of the nodes is in repository:
https://github.com/masternc80/ComfyUI-BBoxNodes
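For readers curious how per-frame selection can stay locked on one person, here is a minimal tracking sketch (not the actual node code; `pick_bbox_per_frame` is a hypothetical name) that follows the detection whose center is nearest to the previous frame's pick:

```python
import numpy as np

def pick_bbox_per_frame(frames, seed_box):
    """Follow one person's bbox through a video by nearest box center.

    frames:   list (one entry per frame) of lists of (x1, y1, x2, y2) detections
    seed_box: the chosen person's bbox in the first frame
    """
    def center(b):
        return np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])

    chosen, prev = [], seed_box
    for boxes in frames:
        if not boxes:                         # detector dropped out this frame
            chosen.append(prev)
            continue
        dists = [np.linalg.norm(center(b) - center(prev)) for b in boxes]
        prev = boxes[int(np.argmin(dists))]   # stick with the nearest detection
        chosen.append(prev)
    return chosen

# Two people; the detector returns them in a different order each frame,
# which is exactly the "jumping" problem described above.
frames = [
    [(0, 0, 10, 10), (100, 0, 110, 10)],
    [(101, 0, 111, 10), (1, 0, 11, 10)],
    [(2, 0, 12, 10), (102, 0, 112, 10)],
]
track = pick_bbox_per_frame(frames, seed_box=(0, 0, 10, 10))
```

The selected boxes can then drive a per-frame crop for the animation input.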
https://redd.it/1sidcv5
@rStableDiffusion
ComfyUI Tutorial: Create Mind Blowing Video With LTX 2.3 Transition LORA
https://youtu.be/egQb_iHc05Q
https://redd.it/1sidsdf
@rStableDiffusion
Trying to inpaint using Z-image Turbo BF16; what am I doing wrong?
https://preview.redd.it/3krmmy345jug1.png?width=1787&format=png&auto=webp&s=359dfa4e2515bd33e40090f986e4a597a00d06d6
Fairly new to the SD scene. I've been trying to do inpainting for an hour or so with no luck. The model, CLIP, and VAE are in the screenshot. The output image always looks incredibly similar to the input image, as if denoise were zero, and the prompt also seems to do nothing. Here, I tried to make LeBron scream by masking just his face. The node connections all seem correct, too. Is there another explanation? The sampler? The model itself?
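One common cause of exactly this symptom is a denoise value near zero. As a toy numerical sketch (not ComfyUI's actual sampler code, and only an approximation of how a flow model like Z-Image noises latents), the sampler's starting latent blends the input latent with noise in proportion to denoise, so with denoise ≈ 0 there is almost nothing left for the model, or the prompt, to change:

```python
import numpy as np

# Toy illustration of the denoise hypothesis: the sampler's starting
# latent interpolates between the input latent and pure noise, weighted
# by `denoise`. Near 0, the output stays almost identical to the input.
def inpaint_start_latent(latent, denoise, rng):
    noise = rng.standard_normal(latent.shape)
    return (1.0 - denoise) * latent + denoise * noise

rng = np.random.default_rng(0)
latent = np.ones((4, 8))
low  = inpaint_start_latent(latent, 0.05, rng)  # nearly the input: little to repaint
high = inpaint_start_latent(latent, 0.90, rng)  # mostly noise: room to follow the prompt
```

With a masked inpaint, a denoise around 0.8 to 1.0 over the masked region is usually needed for the prompt to have a visible effect.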
https://redd.it/1siefug
@rStableDiffusion