Segment Anything (SAM) ControlNet for Z-Image
https://huggingface.co/neuralvfx/Z-Image-SAM-ControlNet
https://redd.it/1s7r1ly
@rStableDiffusion
What are your thoughts on LTX 2.3 now?
In my personal experience, it's a big improvement over the previous version: prompt following is far better, sound is far better, and there are fewer unprompted sounds and music.
i2v is still pretty hit and miss, keeping only about 30% likeness to the original source image. Any movement other than talking causes the model to fall apart and produce body horror. I'm finding myself throwing away more gens due to just terrible results.
It's great for talking heads in my opinion, but I've gone back to Wan 2.2 for now. Hopefully LTX can improve movement and animation in coming updates.
What are your thoughts on the model so far?
https://redd.it/1s7srxg
@rStableDiffusion