Closed-source AI hate is understandable, but local AI has nothing that should concern AI haters
https://redd.it/1su2arp
@rStableDiffusion
ComfyUI teasing something "big" for open, creative AI 👀
https://preview.redd.it/uqhdodqyx1xg1.png?width=3550&format=png&auto=webp&s=448b54b2a73600c991c35c7d9bc5f7f2c5e291e9
https://comfy.org/countdown
https://redd.it/1su3c8z
@rStableDiffusion
ComfyUI Tutorial: Add, Remove, Replace, and Style With the LTX 2.3 Edit LoRA (Made Using an RTX 3060 6GB of VRAM at 1080x1920 Resolution)
https://youtu.be/JU4aWPJrsUw
https://redd.it/1su9j74
@rStableDiffusion
A Repository of Most of Playtime_AI's Nuked Models
https://huggingface.co/Playtime-AI
https://redd.it/1suctuw
@rStableDiffusion
Comfy raises $30M to continue building the best creative AI tool in open
Hi r/StableDiffusion, Today we’re excited to share that Comfy has raised $30M at a $500M valuation! Comfy has grown a lot over the past year, and especially over the past six months: more than 50% of our users joined the Comfy ecosystem during that period. Comfy Cloud has also grown quickly, with annualized bookings crossing $10M in 8 months.
This funding gives us more room to invest in the things this community cares about most: making Comfy more stable, improving the product experience, fixing bugs faster (sorry again for the bugs!) and continuing to launch powerful new features in the open!
The main goal of this announcement is also to attract top talent to what we believe is a generational mission: making sure open-source creative tools win. If you are passionate about Comfy and OSS creative AI, join us at comfy.org.
Please help us spread the news by spending 90 seconds on Twitter and LinkedIn, where you can amplify our announcement and enter to win exclusive ComfyUI swag.
We are an open-source team, and being in the open is part of our culture (although we have not always done a great job of communicating). As part of the announcement, we would love to do a live AMA on Discord. Please upvote this post and add your questions there; we will go through them live at 3PM PST.
Tune in to the AMA here: https://www.reddit.com/r/comfyui/comments/1sumsoh/comfy_org_funding_announcement_ama_live_at_3pm_pst/
PS:
For those who speculated on our announcement in this thread, I apologize for the dramatic vibe-coded countdown page. For those who believed our announcement would just be more bugs, I will personally ship a few extra bugs, IP-enabled, just for you u/IllEase6749
https://preview.redd.it/i1m2xj7ie6xg1.png?width=508&format=png&auto=webp&s=250e8307c5ad4600fc9b29718268215a4753e5d2
https://redd.it/1sumuc3
@rStableDiffusion
Is anyone using models to describe an image and generate a prompt? Is there much difference between Qwen 3.5 9B, Qwen 3.5 27B, and Gemma 4 27B, or is there another model you use?
https://redd.it/1susx9w
@rStableDiffusion
[Workflow updated] Swapped Joker with Harley Quinn in the Classic Stair Dance!
https://youtube.com/shorts/G-ClkUQTJmg
https://redd.it/1suzdjy
@rStableDiffusion
All in Wan I2V v2.0 workflow - I2V, F2LF, SVI with optional F2LF, NAG, LTX for V2A, Pulse of Motion, Lora Optimizer, CFG-Ctrl, 4 modes and more
https://civitai.com/models/2516432?modelVersionId=2888854
https://redd.it/1sv0kze
@rStableDiffusion
Is it possible to use/adapt ernie-image-prompt-enhancer.safetensors to also work with Z-image turbo?
Using Forge Classic Neo, I can load Z-Image Turbo with ae, Qwen3-4B-Q8_0.gguf, and ernie-image-prompt-enhancer all at the same time, but the enhancer doesn't appear to do anything. I'm assuming Forge Classic Neo is simply ignoring it. It would be cool to have as an option.
https://redd.it/1sv3p37
@rStableDiffusion
Multi-shot Consistency
Hey all - I'm trying to figure out how some models (real people, mind you) on IG are pulling off multi-shot consistency with their generated content. A couple of prime examples are *musatovaak* and *mashymi*: both real people with obviously excellent LoRAs, or even full checkpoints, trained on their likeness. I'm wondering how they're getting 6, 7, 8, 9+ images out of a single setup or scene, with really good consistency across the images - both in their attire and the environment - across huge swings in camera angle. The quality appears far too high for either Flux2Klein or Qwen local. I'm sure they must be using a paid service, right? Any thoughts?
https://redd.it/1sv2iav
@rStableDiffusion
Comparing Realism: Z-Image Turbo vs Ernie Turbo vs Klein 9B - Same seed and prompts, no LoRAs
https://redd.it/1sv8uo3
@rStableDiffusion