3 hours of LoRA training completely wasted on RunPod. Any alternatives?
Decided to use RunPod to train a character LoRA. Uploaded the dataset, configured AI Toolkit, and selected the RTX 5090. Time to complete was 3 hours, which seems okay since it's training at 1024 pixels on 75 images for 7500 steps.
Training completed, but when I went to download the LoRA files, the download speed was 50-60 kbps. A 300MB file is not going to download at 50-60 kbps.
Checked Speedtest and my gigabit internet connection is perfectly fine. Tried various methods (runpodctl, SSH, hf_transfer); all topped out at no more than 60 kbps.
Will try again with a smaller dataset and fewer steps to see if it's a persistent issue.
In the meantime, is there any alternative to RunPod where I can run AI Toolkit?
https://redd.it/1t5hw7p
@rStableDiffusion
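One common workaround when a pod's direct egress is throttled is to push the trained LoRA to the Hugging Face Hub from inside the pod and then pull it from HF's CDN locally. A minimal sketch, assuming you have a Hugging Face account with a write token; the repo and file names below are placeholders:

```shell
# On the pod: upload the trained LoRA to a Hub repo instead of
# downloading it directly from the pod.
pip install -U huggingface_hub
huggingface-cli login            # paste a write-scoped token
huggingface-cli upload your-username/my-lora \
    output/my_lora.safetensors my_lora.safetensors

# On your local machine: fetch the file from HF's CDN,
# which is usually much faster than the pod's own link.
huggingface-cli download your-username/my-lora \
    my_lora.safetensors --local-dir .
```

Whether this helps depends on where the bottleneck actually is; if the pod's outbound link itself is capped, the upload step will be just as slow.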
LTX 2.3 is pretty much all I use for video gen at this point -- Scene from my current story-driven fantasy project -- Info on process/workflow in comments.
https://redd.it/1t5p0ae
@rStableDiffusion
How can Anima work on StabilityMatrix?
I tried placing the files as instructed, but I still get an error message when generating images. Any solutions?
https://redd.it/1t5p1nj
@rStableDiffusion
My Reference Latent Node, including auto-masking and per-image timesteps, is out tomorrow
https://redd.it/1t5s662
@rStableDiffusion
Clippy Reloaded - a snarky but useful Clipboard node, no clicks required.
https://redd.it/1t5ujla
@rStableDiffusion
LTX 2.3 is pretty much all I use for video gen at this point. Now I'm going to post stuff about it.
https://redd.it/1t61x0m
@rStableDiffusion
Testing LTX 2.3 v1.1 distilled on my GPU. Pretty decent for creating UGC content or short TikTok vlogs.
https://redd.it/1t643be
@rStableDiffusion
LTX 2.3 Slow Motion
Does anyone know how to stop LTX 2.3 image-to-video from coming out in slow motion? I am using the default workflow in ComfyUI and have tried both the dev and distilled checkpoints and LoRAs. Experimented with CFG, LoRA weight, prompts, etc. More often than not, the video is in slow motion, for both 5- and 10-second clips at 25 FPS.
https://redd.it/1t66h1d
@rStableDiffusion
I built a tool to mix two artists on one image with region masks — Van Gogh + Picasso, no training, arbitrary refs
https://redd.it/1t65bi8
@rStableDiffusion
SenseNova U1 Interleaved Output: From Single Prompt to Consistent Visual Set
https://redd.it/1t6aamt
@rStableDiffusion
Open-sourcing Banodoco Hivemind: 1M+ Discord messages from artists and engineers working deeply with open image/video models, packaged as an agent skill
https://redd.it/1t6amma
@rStableDiffusion
Qwen 3.5 in ComfyUI + Align Tool & Pixaroma Nodes Updates (Ep16)
https://www.youtube.com/watch?v=IEH02ZQy0zY
https://redd.it/1t6dd3h
@rStableDiffusion
In this ComfyUI tutorial, I show how to use the Qwen 3.5 vision-language model to generate prompts from text or images directly inside ComfyUI, plus major updates for Pixaroma Nodes including the new Align Tool, improved Preview Image node, updated Note node…
Removed by Reddit
Removed by Reddit on account of violating the content policy.
https://redd.it/1t6dc3h
@rStableDiffusion