A new SOTA local video model (HappyHorse 1.0) will be released on April 10th.
https://redd.it/1sfo3dq
@rStableDiffusion
Built a tool for anyone drowning in huge image folders: HybridScorer
https://redd.it/1sg5paj
@rStableDiffusion
Anima Preview 3 is out, and it's better than Illustrious or Pony.
This has the biggest potential yet to be the best anime diffusion model. Just take a look at it on Civitai, try it, and you'll never want to use Illustrious or Pony again.
https://redd.it/1sgfjbs
@rStableDiffusion
Vibe Code Your First ComfyUI Custom Node Step by Step (Ep12)
https://www.youtube.com/watch?v=oiiCkrX8hq4
https://redd.it/1sfvnnz
@rStableDiffusion
Learn how to create your first ComfyUI custom node step by step with AI, even if you have no coding experience. In this episode, I show how to vibe code a working custom node for ComfyUI using tools like Gemini and Claude, how custom nodes are structured…
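For reference, a minimal ComfyUI custom node follows a small, fixed shape: a class with an `INPUT_TYPES` classmethod plus `RETURN_TYPES`/`FUNCTION`/`CATEGORY` attributes, registered through module-level mappings. The sketch below assumes that standard convention; the node name and its (trivial) behavior are hypothetical, not from the video.

```python
# Minimal ComfyUI custom node sketch (hypothetical example node).
# A file like this goes into ComfyUI/custom_nodes/; restart ComfyUI to load it.

class UppercaseText:
    """Toy node: uppercases a string input."""

    @classmethod
    def INPUT_TYPES(cls):
        # "required" maps each input name to a (type, options) pair
        return {"required": {"text": ("STRING", {"default": "hello"})}}

    RETURN_TYPES = ("STRING",)   # one string output socket
    FUNCTION = "run"             # name of the method ComfyUI will call
    CATEGORY = "examples"        # where the node appears in the add-node menu

    def run(self, text):
        return (text.upper(),)   # outputs are always returned as a tuple


# ComfyUI discovers nodes through these module-level mappings
NODE_CLASS_MAPPINGS = {"UppercaseText": UppercaseText}
NODE_DISPLAY_NAME_MAPPINGS = {"UppercaseText": "Uppercase Text"}
```

Everything node-specific lives in the class; ComfyUI only needs the two dictionaries at the bottom to wire it into the graph editor.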
ACE-Step 1.5 XL Turbo — BF16 version (converted from FP32)
I converted the ACE-Step 1.5 XL Turbo model from FP32 to BF16.
The original weights were ~18.8 GB in FP32; this version is ~9.97 GB, with the same quality and lower VRAM usage.
🤗 https://huggingface.co/marcorez8/acestep-v15-xl-turbo-bf16
https://redd.it/1sgiqg7
@rStableDiffusion
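The post shares only the converted weights, so as background, here is a stdlib-only sketch of what an FP32-to-BF16 cast does at the bit level (an illustration, not the author's conversion script; real converters typically round to nearest even rather than truncate as this does):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate a float32 bit pattern to bfloat16 (keep the top 16 bits).

    bfloat16 keeps float32's sign bit and full 8-bit exponent, so the
    representable range is unchanged; only mantissa precision drops,
    which is why storage halves (~18.8 GB -> ~9.97 GB here) with
    little quality loss.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x
```

For example, `3.14159` survives the round trip as `3.140625`: the exponent is intact, only low mantissa bits are lost.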
Qwen 2512 is so underrated; its prompt understanding is really great, and only Flux 2 Dev is better. I'm using Q4_K_S with 4-6 steps and it's fast (20-30 sec per gen), almost as fast as the Anima model. It just needs that LoRA love from the community.
https://redd.it/1sgnfv0
@rStableDiffusion