"Lighthouse" mode for ComfyUI — click any node and the rest of the workflow lights up by graph distance. Direct dependencies red, then orange, yellow, green, blue, violet.
https://redd.it/1t6ox12
@rStableDiffusion
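The distance-based highlighting described above is essentially a breadth-first search from the clicked node, with hop count mapped to a color. A minimal sketch (the graph layout, node names, and `PALETTE` mapping here are illustrative assumptions, not ComfyUI's actual API):

```python
from collections import deque

# Hypothetical color ramp by graph distance, as in the post:
# distance 1 -> red, 2 -> orange, ... capped at violet.
PALETTE = ["red", "orange", "yellow", "green", "blue", "violet"]

def lighthouse_colors(graph, start):
    """BFS from `start`; graph is a dict: node -> list of neighbors."""
    colors = {}
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, dist = queue.popleft()
        if dist > 0:  # the clicked node itself stays uncolored
            colors[node] = PALETTE[min(dist - 1, len(PALETTE) - 1)]
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return colors

# Example workflow chain: Load -> Encode -> Sample -> Decode -> Save
wf = {
    "Load": ["Encode"],
    "Encode": ["Sample"],
    "Sample": ["Decode"],
    "Decode": ["Save"],
    "Save": [],
}
print(lighthouse_colors(wf, "Load"))
# {'Encode': 'red', 'Sample': 'orange', 'Decode': 'yellow', 'Save': 'green'}
```

Treating the workflow as an undirected graph (following both inputs and outputs) would light up the whole neighborhood rather than only downstream nodes; either reading is consistent with the post.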
I just tried Reactor's open source world model demo, here are my thoughts
https://redd.it/1t6qfff
@rStableDiffusion
Trajectory of video generation models
I am wondering if anyone in this community has meaningful insight into the trajectory of video generation models. Specifically, how likely is it that within two years there will be open models equal to what Grok Imagine is now? Presently, I can give it 10 reference images of a subject and a simple prompt, and it will spit out a 720p 10-second clip in a minute, with 90 to 100% resemblance most of the time. Will we see that in open models, and how soon do you think? Thanks in advance for anything you share.
https://redd.it/1t6rprz
@rStableDiffusion