I built a custom NVENC encoder bridge to split FLUX 2 models across two GPUs over an Ethernet LAN (example: a 5090 plus a laptop 4090 spreading model layers across two machines over Ethernet = 4.4 s per image). It completely bypasses the need for NVLink. Multi-GPU in a single PC is supported, and Wi-Fi 6 also works very well.
https://github.com/shootthesound/comfyui-mesh
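A minimal conceptual sketch of the layer-split idea described above (this is my own illustration, not the comfyui-mesh implementation): the first half of a model's layers runs on one machine, the intermediate activation is serialized and shipped over the network, and a second machine runs the remaining layers. Here the network hop is simulated with an in-process byte buffer; a real setup would send the payload over a TCP socket.

```python
import pickle

def make_layer(weight):
    # Stand-in for a transformer block: a simple affine map over a vector.
    return lambda x: [v * weight + 1.0 for v in x]

# A toy four-layer "model"; the weights are arbitrary.
layers = [make_layer(w) for w in (0.5, 0.5, 2.0, 2.0)]
split = len(layers) // 2  # pipeline split point

def run_stage(stage_layers, activation):
    # Run a contiguous slice of layers on one machine.
    for layer in stage_layers:
        activation = layer(activation)
    return activation

# "Machine A": run the first half, serialize the activation for transport.
act = run_stage(layers[:split], [1.0, 2.0])
payload = pickle.dumps(act)  # in practice: written to a TCP socket

# "Machine B": deserialize and finish the forward pass.
result = run_stage(layers[split:], pickle.loads(payload))
```

The design point is that only one activation tensor crosses the wire per split, which is why an ordinary Ethernet LAN (or Wi-Fi 6) can be fast enough without NVLink-class bandwidth.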

https://redd.it/1tegs83
@rStableDiffusion
Sharing my experience with Anima (ComfyUI): great detail, but struggling with multiple characters

Hi everyone, I wanted to share my experience.

Lately I’ve started using the Anima model with ComfyUI, and I have to say I’m really enjoying the results so far. What stands out most to me is the level of detail, which I’ve found to be particularly strong not only on characters but even more so on backgrounds and environments. I wasn’t really able to reach the same quality with models like Illustrious or Pony.

Another thing I really like (and honestly find kind of genius) is the ability to build prompts using a mix of Gelbooru-style tags and natural-language descriptions. That hybrid approach works incredibly well for me and feels much more flexible than sticking to only one style.
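As an illustration, a hybrid prompt of the kind described above might look something like this (a made-up example for the structure, not a tested Anima prompt):

```
1girl, long hair, forest, golden hour, detailed background,
a young woman reading under an old oak tree while sunlight
filters through the leaves
```

The tag portion pins down subject and style keywords, while the natural-language portion describes composition and action that tags alone express poorly.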

That said, I’ve noticed a limitation: when Anima has to handle more than one character in a scene, the results are noticeably worse than what I could get with Illustrious or Pony.

I’m curious if anyone else has run into the same issue, and if there are specific techniques to better handle multi-character compositions.

I’m also wondering whether there’s any kind of regional prompting or similar workflow that works well with Anima, or if there are alternative approaches to improve consistency when generating multiple characters.

Curious to hear your thoughts and tips!

https://redd.it/1tepgn4
@rStableDiffusion
How do you actually keep track of prompts that work?

Curious if anyone here has cracked cross-model prompt management, or if you just stay in ComfyUI for everything?

https://redd.it/1teskau
@rStableDiffusion