Colorize GreyScale Images using multiple techs - Can you make this any better or quicker?
https://redd.it/1kw3j9x
@rStableDiffusion
The first step in T5-SDXL

So far, I have created XLLSD (SDXL VAE, LongCLIP, SD1.5) and sdxlONE (SDXL, with a single CLIP -- LongCLIP-L).


I was about to start training sdxlONE to take advantage of LongCLIP.
But before starting on that, I thought I would check whether anyone has released a public variant that uses T5 with SDXL instead of CLIP. (They have not.)

Then, since I am a little more comfortable messing around with diffusers pipelines these days, I decided to check just how hard it would be to assemble a "working" pipeline for it.

Turns out, I managed to do it in a few hours (!!)

So now I'm going to be pondering just how much effort it will take to turn this into a "normal", saveable model... and then how hard it will be to train the thing to actually turn out images that make sense.

Here's what it spewed out without training, for "sad girl in snow"

\\"sad girl in snow\\" ???

Seems like it is a long way from sanity :D

But, for some reason, I feel a little optimistic about its potential.

I shall try to track my explorations of this project at


https://github.com/ppbrown/t5sdxl


Currently there is a single file that will replicate the output shown above, using only T5 and SDXL.
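For reference, here is a minimal sketch of what such a hookup could look like. This is not the repo's actual code: the checkpoint names, the mean-pooling step, and the two untrained linear projections are my own assumptions for illustration. The random projections are also exactly why an untrained hookup spews out incoherent images.

```python
# Hedged sketch, not the t5sdxl repo's code: checkpoint names, mean pooling,
# and the two linear projections below are assumptions for illustration.
import torch
from torch import nn
from transformers import T5EncoderModel, T5TokenizerFast
from diffusers import StableDiffusionXLPipeline

device = "cuda"

# Load SDXL normally; its CLIP encoders are bypassed by feeding
# prompt_embeds / pooled_prompt_embeds straight into __call__.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to(device)

# Any T5 encoder works the same way; flan-t5-base (d_model=768) is just an example.
tok = T5TokenizerFast.from_pretrained("google/flan-t5-base")
t5 = T5EncoderModel.from_pretrained("google/flan-t5-base").to(device)

# SDXL's UNet cross-attends over 2048-dim token embeddings (CLIP-L 768 +
# OpenCLIP-G 1280, concatenated) and takes a 1280-dim pooled vector as extra
# conditioning, so the T5 hidden states must be projected to those sizes.
# These layers are randomly initialized -- the part that would need training.
proj_tokens = nn.Linear(t5.config.d_model, 2048).to(device)
proj_pooled = nn.Linear(t5.config.d_model, 1280).to(device)

@torch.no_grad()
def encode(prompt: str):
    ids = tok(prompt, return_tensors="pt", padding="max_length",
              max_length=77, truncation=True).input_ids.to(device)
    hidden = t5(input_ids=ids).last_hidden_state    # (1, 77, d_model)
    tokens = proj_tokens(hidden)                    # (1, 77, 2048)
    pooled = proj_pooled(hidden.mean(dim=1))        # (1, 1280)
    return tokens.half(), pooled.half()

prompt_embeds, pooled_embeds = encode("sad girl in snow")
neg_embeds, neg_pooled = encode("")

image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_embeds,
    negative_prompt_embeds=neg_embeds,
    negative_pooled_prompt_embeds=neg_pooled,
    num_inference_steps=30,
).images[0]
image.save("t5_sdxl_untrained.png")
```

Presumably the training step is then optimizing those projection layers (and perhaps the UNet) on image/caption pairs, after which the whole thing could be wrapped up and saved as a proper custom pipeline.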


https://redd.it/1kwbu2f
@rStableDiffusion
If you are just doing I2V, is VACE actually any better than WAN 2.1 itself? Why use VACE if you aren't using a guidance video at all?

Just wondering: if you are only doing straight I2V, why bother using VACE?

Also, WanFun could already do Video2Video.

So what's the big deal about VACE? Is it just that it can do everything "in one"?

https://redd.it/1kw5av6
@rStableDiffusion
Is anyone still using AI for just still images rather than video? I'm still using SD1.5 on A1111. Am I missing any big leaps?

Videos are cool, but I'm more into art/photography right now. As per the title, I'm still using A1111, and it's the only AI software I've ever used. I can't really say whether it's better or worse than other UIs since it's the only one I've used. So I'm wondering if others have shifted to different UIs/apps, and if I'm missing something by sticking with A1111.

I do have SDXL and Flux dev/schnell models, but for most of my inpainting/outpainting I'm finding SD1.5 a bit more solid.



https://redd.it/1kwjj07
@rStableDiffusion