"FLUX Creator Program" - New Flux models sooner than expected?
https://redd.it/1t4vlo7
@rStableDiffusion
Tencent is about to release an anime video model (AniMatrix).
https://redd.it/1t51oi3
@rStableDiffusion
Install Stable Diffusion WebUI Forge easily on Windows: portable one-click installer for Forge Classic + Forge Neo
Hi everyone - I made a portable Windows batch script to make installing Stable Diffusion WebUI Forge easier.
GitHub repo: https://github.com/Merserk/sd-webui-forge-universal-portable
It lets you install and choose between:
Forge Classic - stable/traditional version
Forge Neo - newer experimental version
It is designed for people who want an easier way to install Stable Diffusion WebUI Forge on Windows without manually setting up Python, Git, virtual environments, or dependencies.
Basic install:
1. Download install_forge_universal.bat
2. Double-click it
3. Choose Forge Neo or Forge Classic
4. Run the generated launcher
This may also help people looking for a simple way to install Stable Diffusion on Windows, install Stable Diffusion WebUI Forge, or try a Forge-based alternative to A1111 / Automatic1111.
Feedback, bug reports, and suggestions are welcome.
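For anyone curious what a one-click installer like this automates, here is a rough, hypothetical Python sketch of the typical plan (clone the chosen variant, create a venv, install dependencies). The repo URLs, folder names, and requirements filename below are placeholders, not taken from the actual batch script:

```python
# Illustrative sketch, NOT the actual install_forge_universal.bat logic.
# It returns the command sequence such an installer would run on Windows.

REPOS = {
    "classic": "https://example.com/forge-classic.git",  # placeholder URL
    "neo": "https://example.com/forge-neo.git",          # placeholder URL
}

def install_plan(variant: str, target_dir: str = "forge") -> list[list[str]]:
    """Return, in order, the commands the installer would execute."""
    if variant not in REPOS:
        raise ValueError(f"unknown variant: {variant!r}")
    # On Windows the venv interpreter lives under Scripts\, not bin/.
    venv_python = f"{target_dir}\\venv\\Scripts\\python.exe"
    return [
        ["git", "clone", REPOS[variant], target_dir],
        ["python", "-m", "venv", f"{target_dir}\\venv"],
        # requirements filename is an assumption for illustration
        [venv_python, "-m", "pip", "install", "-r",
         f"{target_dir}\\requirements_versions.txt"],
    ]

if __name__ == "__main__":
    for cmd in install_plan("neo"):
        print(" ".join(cmd))
```

The generated launcher then just reuses the venv interpreter to start the WebUI, so nothing touches the system Python.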
https://redd.it/1t4wkr4
@rStableDiffusion
GTA 70s - Teaser Trailer (Alternative Version): Z-image Turbo - Flux Klein 9b - Wan 2.2
https://redd.it/1t56xx6
@rStableDiffusion
SenseNova U1 Infographic Test: Capabilities in Image-Based Reasoning
https://redd.it/1t56mad
@rStableDiffusion
If I use Famegrid Z-Image Base, can I later use LoRAs that were trained on the original/raw Z-Image Base?
https://redd.it/1t59mwd
@rStableDiffusion
System prompt Chroma
Hello,
Are there any resources for system prompts that get the best results with Chroma? I know some models are trained on natural language and that quality also depends on steps, CFG, and so on, but a good template to start from would be much appreciated. Also, which LLM works well for this?
Thanks!
https://redd.it/1t5aw29
@rStableDiffusion
Thanks to this sub, my silly node and workflow got 3k downloads overnight, so I fixed some bugs, unified some features, and uploaded the latest and greatest version to HF.
What started as an internal tool for my secret project and custom workflow somehow went from ~160 downloads to 3000+ overnight after being shared here.
ComfyUI Character Composer node is basically a structured procedural prompt system for Qwen workflows focused on:
character consistency
scene composition
controllable generation
SFW JSON library (But who the hell is JSON??)
unified txt2img + img2img workflow (just bypass the input "image1")
(you will rarely type or copy-paste from an LLM again)
https://preview.redd.it/71qoqvo28jzg1.png?width=1540&format=png&auto=webp&s=6f016a56bdbe5745129ba7eb105df1d7bffaf258
Built on top of the amazing Qwen ecosystem work by Phr00t:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO
Project:
https://huggingface.co/datasets/unh1nge/comfyui-character-composer
Currently improving the UX, simplifying the node, and preparing better docs/tutorials.
Really appreciate all the feedback and testing so far. I am a newbie in this scene, so I am still learning the best approaches and trying to keep up with the latest and greatest models, which isn't easy; expect more in the future.
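The core idea of a structured procedural prompt system can be sketched in a few lines: fixed character attributes keep identity consistent across generations, while scene and composition fields are varied per image from a JSON library. This is a hypothetical illustration of the concept, not the actual Character Composer node's logic, and all names in the library are made up:

```python
import json
import random

# Toy JSON library: a constant character block plus variable scene/
# composition pools. The entries are invented for illustration only.
LIBRARY = json.loads("""
{
  "character": {"name": "Mira", "traits": "short silver hair, green eyes, leather jacket"},
  "scenes": ["rainy neon alley", "sunlit rooftop garden", "crowded night market"],
  "compositions": ["close-up portrait", "full-body shot", "over-the-shoulder view"]
}
""")

def compose_prompt(rng: random.Random) -> str:
    """Build one prompt: constant identity, procedurally varied context."""
    c = LIBRARY["character"]
    scene = rng.choice(LIBRARY["scenes"])
    comp = rng.choice(LIBRARY["compositions"])
    return f"{comp} of {c['name']}, {c['traits']}, in a {scene}"

if __name__ == "__main__":
    print(compose_prompt(random.Random(0)))
```

Because the character block never changes while the rest is sampled, every generated prompt describes the same person in a new setting, which is what drives the consistency.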
https://redd.it/1t5fu9l
@rStableDiffusion
3 hours of LoRA training completely wasted on RunPod. Any alternatives?
Decided to use RunPod to train a character LoRA. Uploaded the dataset, configured AI Toolkit, and selected an RTX 5090. Estimated time to complete was 3 hours, which seemed okay since it was training at 1024 pixels with 75 images and 7500 steps.
Training completed, but when I went to download the LoRA files, the transfer speed was 50-60 kbps. A 300 MB file is not going to finish at that rate.
A speed test confirmed my gigabit connection is fine. I tried various methods (runpodctl, SSH, hf_transfer); all topped out at no more than 60 kbps.
I'll try again with a smaller dataset and fewer steps to see if it's a persistent issue.
In the meantime, is there any alternative to RunPod where I can run AI Toolkit?
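For scale, here is a quick back-of-the-envelope check on those numbers, hedging on whether "kbps" means kilobytes or kilobits per second:

```python
# Transfer time for a 300 MB file at the reported speed, under both
# readings of "kbps" (kilobytes/s vs kilobits/s).
SIZE_MB = 300

def hours_at(rate_kb_per_s: float, size_mb: float = SIZE_MB) -> float:
    """Hours to move size_mb megabytes at rate_kb_per_s kilobytes/second."""
    return size_mb * 1000 / rate_kb_per_s / 3600

print(f"at 60 KB/s:   {hours_at(60):.1f} h")      # ~1.4 h
print(f"at 60 kbit/s: {hours_at(60 / 8):.1f} h")  # ~11.1 h
```

Either way it is far slower than the training run itself, so the bottleneck is clearly the pod's egress, not the local connection.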
https://redd.it/1t5hw7p
@rStableDiffusion
LTX 2.3 is pretty much all I use for video gen at this point -- Scene from my current story-driven fantasy project -- Info on process/workflow in comments.
https://redd.it/1t5p0ae
@rStableDiffusion