I built a custom NVENC encoder bridge to split FLUX.2 models across two GPUs over an Ethernet LAN (example: a 5090 + a laptop 4090 spreading model layers across two machines over Ethernet = 4.4 s per image). It completely bypasses the need for NVLink. Multi-GPU in a single PC is also supported, and Wi-Fi 6 works very well too.
https://github.com/shootthesound/comfyui-mesh
https://redd.it/1tegs83
@rStableDiffusion
Split FLUX.2 across two GPUs (LAN or same-machine) — NVENC compresses activations live on the wire. Icarus (ComfyUI node) + Daedalus (back-half server). - shootthesound/comfyui-mesh
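For anyone curious how such a bridge hangs together, here is a minimal sketch of the idea: run the front half of the model on one machine, serialize the mid-model activations, and ship them over TCP to a back-half server that finishes the pass. Everything below is hypothetical and not taken from the repo — the real project compresses activation tensors with NVENC on the wire, while this toy pickles plain Python lists over loopback and uses trivial arithmetic as a stand-in for the model halves.

```python
import pickle
import socket
import struct
import threading

def send_msg(sock, obj):
    # Length-prefixed frame so the receiver knows where each message ends.
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_msg(sock):
    (n,) = struct.unpack("!I", recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, n))

# "Daedalus" role: owns the second half of the model on the remote GPU.
def back_half_server(srv):
    conn, _ = srv.accept()
    acts = recv_msg(conn)                    # mid-model activations off the wire
    send_msg(conn, [a * 2.0 for a in acts])  # stand-in for the remaining layers
    conn.close()
    srv.close()

# "Icarus" role: runs the first half locally and ships activations out.
def front_half(port, inputs):
    sock = socket.create_connection(("127.0.0.1", port))
    acts = [x + 1.0 for x in inputs]         # stand-in for the first layers
    send_msg(sock, acts)
    out = recv_msg(sock)
    sock.close()
    return out

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # loopback stands in for the LAN peer
srv.listen(1)
t = threading.Thread(target=back_half_server, args=(srv,))
t.start()
result = front_half(srv.getsockname()[1], [1.0, 2.0, 3.0])
t.join()
print(result)  # [4.0, 6.0, 8.0]
```

The length-prefixed framing is the part that matters for any LAN transport: TCP is a byte stream, so without a frame header the back half cannot tell where one activation tensor ends and the next begins.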
Sharing my experience with Anima (ComfyUI): great detail, but struggling with multiple characters
Hi everyone, I wanted to share my experience.
Lately I’ve started using the Anima model with ComfyUI, and I have to say I’m really enjoying the results so far. What stands out to me the most is the level of detail, which I’ve found to be particularly strong not only on the characters, but even more on backgrounds and environments. I wasn’t really able to reach the same quality with models like Illustrious or Pony.
Another thing I really like (and honestly find kind of genius) is the possibility to build prompts using a mix of Gelbooru-style tags and natural language descriptions. That hybrid approach works incredibly well for me and feels much more flexible compared to sticking to only one style.
That said, I’ve noticed a limitation: when Anima has to handle more than one character in the scene, the results seem noticeably worse compared to what I could get with Illustrious or Pony.
I’m curious if anyone else has run into the same issue, and if there are specific techniques to better handle multi-character compositions.
I’m also wondering whether there’s any kind of regional prompting or similar workflow that works well with Anima, or if there are alternative approaches to improve consistency when generating multiple characters.
Curious to hear your thoughts and tips!
https://redd.it/1tepgn4
@rStableDiffusion
Found this in the attic...morphing between unrelated images...
https://redd.it/1terefl
@rStableDiffusion
ComfyUI Tutorial : LTX 2.3 Style Enhancer LoRA For More Beautiful Cinematic Videos (Res: 1920x1080, VRAM: 6 GB, Gen Time: 20 min)
https://youtu.be/zEckV4j40x4
https://redd.it/1tetekz
@rStableDiffusion
ComfyUI Tutorial : LTX 2.3 Style Enhancer LoRA Beautiful Cinematic Videos #comfyui #comfyuitutorial
Hello everyone, in this tutorial we explore the Style Enhancer LoRA for the LTX 2.3 model. This LoRA is a natural detail enhancer made for users who want a cleaner, more refined look. The custom workflow helps generate a 5-second AI video at full…
What AI was used to make these images, or does anyone know a specific prompt?
https://redd.it/1tes221
@rStableDiffusion
How do you actually keep track of prompts that work?
Curious if anyone here has cracked cross-model prompt management, or if you just stay in ComfyUI for everything?
https://redd.it/1teskau
@rStableDiffusion
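One low-tech answer to the prompt-tracking question (a sketch of one possible approach, not anyone's established tool): keep an append-only JSONL log with one record per generation worth remembering, tagged with the model name and a rating, then filter it per model. The field names below are made up for illustration.

```python
import json
import pathlib
import tempfile

def log_prompt(path, model, prompt, seed, rating, notes=""):
    # Append one JSON record per generation that was worth keeping.
    entry = {"model": model, "prompt": prompt, "seed": seed,
             "rating": rating, "notes": notes}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def best_prompts(path, model, min_rating=4):
    # Pull back the keepers for one model, best first.
    with open(path, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f]
    hits = [r for r in rows if r["model"] == model and r["rating"] >= min_rating]
    return sorted(hits, key=lambda r: -r["rating"])

log = pathlib.Path(tempfile.mkdtemp()) / "prompts.jsonl"
log_prompt(log, "anima", "1girl, forest shrine, volumetric light", 42, 5)
log_prompt(log, "anima", "cluttered attic, film grain", 7, 2)
log_prompt(log, "flux.2", "studio portrait, 85mm", 99, 4)
top = best_prompts(log, "anima")
print([r["seed"] for r in top])  # [42]
```

JSONL works across models and UIs precisely because it carries no schema: each tool can add its own fields (sampler, CFG, LoRA stack) without breaking older records.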
Character Sheet Workflow or LoRA?
I am looking for a way to create high-quality character sheets with LoRAs that I trained.
I trained face LoRAs on F1 and Flux Klein 9B.
There are tutorials on prompting Nano Banana from one or more images, but the results don't look as good as I expected.
And some people seem to get great results from a single image on Higgsfield or other SaaS platforms.
But what would be the ideal process?
Ideally, I would generate one face close-up and one full-body shot and feed them into any workflow or LoRA that understands the task.
Do you do that with OpenPose?
I already tried multi-angle with Qwen, but it could have been better.
So if you have any idea how to approach this task, please let me know.
Currently I make a prompt list and hope to get the results I need.
Thanks a ton!
https://redd.it/1tf19vr
@rStableDiffusion
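The prompt-list approach mentioned in the post can be made systematic with a few lines: enumerate every (shot, angle) pair once, with the LoRA trigger word and a fixed style tail, and keep the seed fixed in your UI so only the camera changes between images. The trigger word, angle phrases, and style below are hypothetical placeholders, not a known-good recipe for any particular model.

```python
TRIGGER = "myCharLora"          # hypothetical LoRA trigger word
ANGLES = ["front view", "three-quarter view", "side profile", "back view"]
SHOTS = ["face close-up", "full body shot"]

def character_sheet_prompts(trigger, angles, shots,
                            style="clean white background"):
    # One prompt per (shot, angle) pair, so the sheet covers the whole grid.
    return [f"{trigger}, {shot}, {angle}, {style}"
            for shot in shots for angle in angles]

prompts = character_sheet_prompts(TRIGGER, ANGLES, SHOTS)
print(len(prompts))  # 8
print(prompts[0])    # myCharLora, face close-up, front view, clean white background
```

The list can be pasted into any batch-prompt node or queued one by one; the point is that every view of the character shares identical wording except for the camera terms.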
My steps and yours: Anima Base 1.0 - Qwen Image Edit 2511 - Wan 2.2
https://redd.it/1tf3mex
@rStableDiffusion
INSTARAW 2.0 FULL WORKFLOWS
[https://civitai.com/articles/29974](https://civitai.com/articles/29974)
[https://huggingface.co/rizzlaa/INSTARAW-2.0-WORKFLOW-CUSTOM](https://huggingface.co/rizzlaa/INSTARAW-2.0-WORKFLOW-CUSTOM)
This pack includes **14 ready-made ComfyUI workflows** designed for:
* Maximum photorealistic image generation
* Face swap
* Detail enhancement and final polishing
Most workflows use the **Grok API** and **Nano Banana**
(**Google Gemini 2.5 Flash/Pro Image**).
If you do not configure the API, some nodes may appear red.
https://redd.it/1tf6v66
@rStableDiffusion