People who've trained LoRA models on both Kohya and OneTrainer with the same datasets, what differences have you noticed between the two?



https://redd.it/1l7p387
@rStableDiffusion
What's the best Virtual Try-On model today?

I know none of them are perfect at reproducing patterns, textures, and text. But from what you've researched, which do you think is the most accurate at them today?

I tried Flux Kontext Pro on Fal and it wasn't very accurate in determining what to change and what not to, same with 4o Image Gen. I wanted to try the Google "dressup" virtual try-on, but I can't seem to find it anywhere.

OSS models would be ideal, as I could tweak the whole workflow in ComfyUI rather than just the prompt.

https://redd.it/1l7roia
@rStableDiffusion
Self Forcing: The new Holy Grail for video generation?

https://self-forcing.github.io/

> Our model generates high-quality 480P videos with an initial latency of ~0.8 seconds, after which frames are generated in a streaming fashion at ~16 FPS on a single H100 GPU and ~10 FPS on a single 4090 with some optimizations.

> Our method has the same speed as CausVid but has much better video quality, free from over-saturation artifacts and having more natural motion. Compared to Wan, SkyReels, and MAGI, our approach is 150–400× faster in terms of latency, while achieving comparable or superior visual quality.

https://redd.it/1l7sxh3
@rStableDiffusion