Why can't we use 2 GPUs the same way RAM offloading works?
I am in the process of building a PC and was going through the sub to understand RAM offloading. Then I wondered: if we can use RAM offloading, why can't we use GPU offloading, or something like that?
I see everyone saying that two GPUs at the same time are only useful for generating two separate images at once, but I am also seeing comments about RAM offloading helping to load large models. Why would one help with sharing and the other not?
I might be completely missing some point, and I would like to learn more about this.
https://redd.it/1l6j4y9
@rStableDiffusion
Check out this Flux model.
That's it — this is the original:
https://civitai.com/models/1486143/flluxdfp16-10steps00001?modelVersionId=1681047
And this is the one I use with my humble GTX 1070:
https://huggingface.co/ElGeeko/flluxdfp16-10steps-UNET/tree/main
Thanks to the person who made this version and posted it in the comments!
This model halved my render time — from 8 minutes at 832×1216 to 3:40, and from 5 minutes at 640×960 to 2:20.
This post is mostly a thank-you to the person who made this model, since with my card, Flux was taking way too long.
https://redd.it/1l6l4t4
@rStableDiffusion
Good formula for training steps when training a style LoRA?
I've been using a fairly common Google Colab notebook for LoRA training, and it recommends that "...images multiplied by their repeats is around 100, or 1 repeat with more than 100 images."
Does anyone have a strong objection to that formula, or can you recommend a better one for style training?
In the past I was only doing token training, so I had at most 10 images per set; the formula made sense and didn't seem to cause any issues.
If it matters, I normally train 10 epochs at a time due to time and resource constraints.
Learning rate: 3e-4
Text encoder: 6e-5
I just use the defaults provided by the model.
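For what it's worth, the Colab's rule of thumb can be sketched as a quick calculation. This is a minimal illustration, not any trainer's actual API — the function names and the batch-size handling are assumptions:

```python
def lora_repeats(num_images: int, target: int = 100) -> int:
    """Pick a repeat count so that images * repeats is roughly `target`.

    Follows the rule of thumb quoted above: aim for ~100 image-repeats
    per epoch, or just 1 repeat once you have 100+ images.
    """
    if num_images >= target:
        return 1
    # Round to the nearest whole repeat count.
    return max(1, round(target / num_images))

def total_steps(num_images: int, epochs: int, batch_size: int = 1) -> int:
    """Total optimizer steps for a run under the repeat rule."""
    repeats = lora_repeats(num_images)
    return (num_images * repeats * epochs) // batch_size

# Example: a 10-image token-style set, trained 10 epochs at a time.
print(lora_repeats(10))     # 10 repeats -> 100 image-repeats per epoch
print(total_steps(10, 10))  # 1000 steps
```

So a 10-image set at 10 repeats and a 150-image set at 1 repeat both land near the ~100 image-repeats-per-epoch target the notebook suggests.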
https://redd.it/1l6m1oa
@rStableDiffusion
inference.sh getting closer to alpha launch. gemma, granite, qwen2, qwen3, deepseek, flux, hidream, cogview, diffrythm, audio-x, magi, ltx-video, wan all in one flow!
https://redd.it/1l6q4mm
@rStableDiffusion
I accidentally discovered 3 gigabytes of images in ComfyUI's "input" folder. I had no idea this folder existed; I only found it because one image had a name so long that it prevented my ComfyUI from updating.
Many input images had been saved there: some related to IPAdapter, others inpainting masks.
I don't know if there is a way to prevent this.
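One workaround is to periodically clear out old files. Here is a minimal sketch, assuming a standard ComfyUI/input directory layout and an arbitrary 30-day threshold — adjust both for your setup, and dry-run first:

```python
import time
from pathlib import Path

def prune_input_folder(folder: str, max_age_days: float = 30,
                       dry_run: bool = True) -> list[str]:
    """List (or delete, if dry_run=False) files older than max_age_days.

    ComfyUI keeps every uploaded image in its `input` folder; this
    sketch clears out stale ones. The path and age threshold are
    assumptions -- point it at your own ComfyUI/input directory.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in Path(folder).iterdir():
        if f.is_file() and f.stat().st_mtime < cutoff:
            removed.append(f.name)
            if not dry_run:
                f.unlink()
    return sorted(removed)

# Dry run first to see what would be deleted:
# prune_input_folder("ComfyUI/input", max_age_days=30, dry_run=True)
```

Note this doesn't stop ComfyUI from saving the files in the first place; it just keeps the folder from silently growing.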
https://redd.it/1l6p6rb
@rStableDiffusion
Any step-by-step tutorial for video in SD.Next? I can't get it to work.
I managed to create videos in SwarmUI, but not with SD.Next. Something is missing and I have no idea what it is. I am using an RTX 3060 12GB on Linux in Docker. Thanks.
https://redd.it/1l6tfjx
@rStableDiffusion
Best way to animate emojis?
I tried Framepack, but the results were pretty meh. Does anyone know a good method to animate emojis?
https://redd.it/1l70x3s
@rStableDiffusion
Best way to animate an image into a short video using an AMD GPU?
https://redd.it/1l6zoxo
@rStableDiffusion
How to prevent style bleed in a LoRA?
I want to train a simple LoRA for Illustrious XL to generate characters with four arms, because the similar LoRAs I've tried all cause style bleed in the generated images at high weight.
Is this a dataset issue? Should I use images in different styles when training, or what?
https://redd.it/1l72oei
@rStableDiffusion
I want to train a simple LoRA for Illustrious XL to generate characters with four arms because I've tried some similar LoRAs and at high weight they all have style bleed on the generated images.
Is this a Dataset issue? Should I use different style images when training or what?
https://redd.it/1l72oei
@rStableDiffusion
Reddit
From the StableDiffusion community on Reddit
Explore this post and more from the StableDiffusion community
BAGEL in ComfyUI | All-in-One AI for Image Generation, Editing & Reasoning
https://youtu.be/QRwecGQTev0
https://redd.it/1l7535b
@rStableDiffusion
What is the best solution for generating images that feature multiple characters interacting with significant overlaps, while preserving the distinct details of each character?
Does this still require extensive manual masking and inpainting, or is there now a more straightforward solution?
Personally, I use SDXL with Krita and ComfyUI, which significantly speeds up the process, but it still demands considerable human effort and time. I experimented with some custom nodes, such as the regional prompter, but they ultimately require extensive manual editing to create scenes with lots of overlapping and separate LoRAs. In my opinion, Krita's AI painting plugin is the most user-friendly solution for crafting sophisticated scenes, provided you have a tablet and can manage numerous layers.
OK, it seems I have answered my own question, but I am asking because I have noticed some Patreon accounts generating hundreds of images per day featuring multiple characters in complex interactions, which seems impossible to achieve through human editing alone. I am curious whether there are advanced tools (commercial models or not) or methods that I may have overlooked.
https://redd.it/1l75afz
@rStableDiffusion
About the 5060 Ti and Stable Diffusion
Am I safe buying it to generate images using Forge UI and Flux? I remember reading when these cards came out that some people couldn't use them because of a CUDA issue. I'm fairly new to this, and since I can't find benchmarks on YouTube, I'm having doubts about buying it. Thanks to anyone willing to help, and sorry for the broken English.
https://redd.it/1l7a9k3
@rStableDiffusion
HeyGem Lipsync Avatar Demos & Guide!
https://youtu.be/Lefc84zlroA
https://redd.it/1l75bso
@rStableDiffusion
Framepack Studio: Exclusive First Look at the New Update (6/10/25) + Behind-the-Scenes with the Dev
https://youtu.be/hUvZ9VR-9_8
https://redd.it/1l7eug0
@rStableDiffusion
5070 Ti vs 4070 Ti Super: only an $80 difference, but I am seeing a lot of backlash against the 5070 Ti. Should I get the 4070 Ti Super for cheaper?
I saw some posts about performance and PCIe compatibility issues with the 5070 Ti. Is anyone here facing issues with image generation? Should I go with the 4070 Ti Super? There is only around an 8% performance difference between the two in benchmarks. Are there any other reasons I should go with the 5070 Ti?
https://redd.it/1l7eva5
@rStableDiffusion