Closed-source AI hate is understandable, but local AI has nothing that should concern AI haters
https://redd.it/1su2arp
@rStableDiffusion
Comfy raises $30M to continue building the best creative AI tool in open

Hi r/StableDiffusion! Today we’re excited to share that Comfy has raised $30M at a $500M valuation. Comfy has grown a lot over the past year, and especially over the past six months: more than 50% of our users joined the Comfy ecosystem during that period. Comfy Cloud has also grown quickly, with annualized bookings crossing $10M in 8 months.

This funding gives us more room to invest in the things this community cares about most: making Comfy more stable, improving the product experience, fixing bugs faster (sorry again for the bugs!) and continuing to launch powerful new features in the open!

The main goal of this announcement is also to attract top talent to build what we believe is a generational mission: making sure open-source creative tools win. If you are passionate about Comfy and OSS creative AI, join us at comfy.org.

Please help us spread the news by spending 90 seconds on Twitter and LinkedIn, where you can help amplify our announcement and enter to win exclusive ComfyUI swag.

We are an open-source team, and being in the open is part of our culture (although we have not always done a great job of communicating). As part of the announcement, we would love to do a live AMA on Discord. Please upvote this post and add your questions there; we will go through them live at 3 PM PST.

Tune in to the AMA here: https://www.reddit.com/r/comfyui/comments/1sumsoh/comfy_org_funding_announcement_ama_live_at_3pm_pst/

PS:
For those who speculated about our announcement in this thread, I apologize for the dramatic vibe-coded countdown page. For those who believed our announcement would be more bugs, I will be personally shipping a few extra bugs, IP-enabled, just for you u/IllEase6749


https://redd.it/1sumuc3
@rStableDiffusion
ComfyUI's countdown announcement: New funding ☠️☠️☠️☠️☠️
https://redd.it/1sumhs1
@rStableDiffusion
Is anyone using models to describe an image and get a prompt? Is there much difference between Qwen 3.5 9B, Qwen 3.5 27B, and Gemma 4 27B, or is there another model you use?
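The image-to-prompt workflow the post asks about is usually done by sending the image to a vision-language model and asking it to write a text-to-image prompt. A minimal sketch of that request, assuming a local OpenAI-compatible server (e.g. Ollama or llama.cpp) and a hypothetical model name `qwen2.5-vl` — substitute whatever your server actually exposes:

```python
import base64
import json

def build_caption_request(image_bytes: bytes, model: str = "qwen2.5-vl") -> dict:
    """Build an OpenAI-compatible chat-completions payload asking a
    local vision model to describe an image as a reusable prompt.

    The model name and endpoint are assumptions, not a specific
    recommendation -- use whatever your local server serves.
    """
    # Images go inline as a base64 data URI in the message content.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Describe this image as a detailed "
                                "text-to-image prompt: subject, style, "
                                "lighting, composition.",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 300,
    }

# The payload would be POSTed to something like
# http://localhost:11434/v1/chat/completions (Ollama's default port).
payload = build_caption_request(b"\x89PNG placeholder bytes")
print(json.dumps(payload)[:60])
```

Larger models generally give richer, more compositional descriptions, but for short prompt extraction the gap between a 9B and 27B model is often smaller than the prompt wording you use.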
https://redd.it/1susx9w
@rStableDiffusion
Is it possible to use/adapt ernie-image-prompt-enhancer.safetensors to also work with Z-image turbo?

Using Forge Classic Neo, I can run Z-image turbo with ae, Qwen3-4B-Q8_0.gguf, and ernie-image-prompt-enhancer all at the same time, but it doesn't appear to do anything. I'm assuming Forge Classic Neo is simply ignoring the prompt enhancer. It would be cool to have as an option.

https://redd.it/1sv3p37
@rStableDiffusion
Multi-shot Consistency

Hey all - I'm trying to figure out how some models (real people, mind you) on IG are pulling off multi-shot consistency with their generated content. A couple of prime examples are *musatovaak* and *mashymi*: both real people with obviously excellent LoRAs, or even full checkpoints, trained on their likeness. I'm wondering how they're getting 6, 7, 8, 9+ images out of a single setup or scene, with really good consistency across the images - both in their attire and in the environment - across huge swings in camera angle. The quality appears far too high for either Flux2Klein or Qwen local. I'm sure they must be using a paid service, right? Any thoughts?

https://redd.it/1sv2iav
@rStableDiffusion