New nodes to handle/visualize bboxes
Hello community, I'd like to introduce some ComfyUI nodes I recently created, which I hope you find useful. They are designed to work with bboxes (bounding boxes) coming from face/pose detectors, but not only that. I searched but couldn't find any custom nodes that let you select particular bboxes per frame when processing videos with multiple people in them. The thing is, a face detector detects the bboxes of people's faces just fine, but when you want to use it for Wan 2.2 Animate or other purposes, there is no way to choose a particular person in the video to crop their face for animation when multiple characters are present in the video/image. Face/pose detectors do their job, but downstream processing of the bboxes they produce sometimes jumps from one person to another, causing inconsistency. My nodes let you pick a particular bbox per frame, so you can crop faces precisely for Wan 2.2 animation when multiple people are in the frame. In short, you can choose a particular face (bbox) per frame.
I haven't found any existing nodes that do this, so I created these for that purpose.
Please let me know if they would be helpful for your creations.
https://registry.comfy.org/publishers/masternc80/nodes/bboxnodes
Description of the nodes is in repository:
https://github.com/masternc80/ComfyUI-BBoxNodes
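For anyone curious how per-frame bbox selection like this can work, here is a minimal sketch (my own illustration, not code from the repository): given per-frame detector output, keep the bbox whose center stays closest to the previously selected one, so the crop doesn't jump between people across frames.

```python
# Hypothetical sketch of per-frame bbox tracking, NOT the actual node code.
# BBoxes are (x1, y1, x2, y2) tuples; `frames` is a list of per-frame bbox lists.

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def track_bbox(frames, initial_index=0):
    """Return one bbox per frame, following the person picked on frame 0."""
    selected = []
    prev = frames[0][initial_index]  # the user picks the person on the first frame
    for boxes in frames:
        if not boxes:
            selected.append(prev)  # detector missed this frame: reuse last bbox
            continue
        cx, cy = center(prev)
        # Keep the detection whose center is nearest to the previous selection.
        prev = min(boxes, key=lambda b: (center(b)[0] - cx) ** 2 + (center(b)[1] - cy) ** 2)
        selected.append(prev)
    return selected
```

With two people in frame, picking index 1 on frame 0 keeps the selection locked to that person even when the detector returns the bboxes in a different order on later frames.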
https://redd.it/1sidcv5
@rStableDiffusion
ComfyUI Tutorial: Create Mind Blowing Video With LTX 2.3 Transition LORA
https://youtu.be/egQb_iHc05Q
https://redd.it/1sidsdf
@rStableDiffusion
In this tutorial, I show you how to create stunning AI transition videos with the new LTX 2.3 Transition LoRA inside ComfyUI, all running on a low-VRAM setup (works even with 6GB GPUs). You'll learn how to build a complete workflow that combines image generation…
Trying to inpaint using Z-image Turbo BF16; what am I doing wrong?
https://preview.redd.it/3krmmy345jug1.png?width=1787&format=png&auto=webp&s=359dfa4e2515bd33e40090f986e4a597a00d06d6
Fairly new to the SD scene. I've been trying to do inpainting for an hour or so with no luck. The model, CLIP, and VAE are in the screenshot. The output image always looks nearly identical to the input image, as if I had zero denoise, and the prompt also seems to do nothing. Here, I tried to make LeBron scream by masking just his face. The node connections all seem correct too. Is there another explanation? The sampler? The model itself?
https://redd.it/1siefug
@rStableDiffusion