Facebook removed my 421,000-member AI group

This is really devastating. I built the AI group AI Revolution from the ground up with the help of my awesome moderators. Yet Facebook removed it over spam posts, even though we tried to remove spam as fast as possible. The worst part: Facebook doesn't even care, doesn't give any useful replies, and won't let us talk to them to resolve this. All I got was a copy-and-paste email that isn't even about my issue.
You can watch more about this here: https://youtu.be/DBD56TXkpv8

https://redd.it/1las0gg
@rStableDiffusion
Who do you follow for tutorials and workflows?

I feel like everything has been moving so fast, and there are all these different models and workflow variations for everything. I've been going through Benji's AI Playground to try to catch up on some of the video-gen stuff. I'm curious who your go-to creator is, particularly when it comes to workflows?

https://redd.it/1latd74
@rStableDiffusion
Normalized Attention Guidance (NAG), the art of using negative prompts without CFG (almost 2x speed on Wan).
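To see where the speedup comes from: classic CFG runs the diffusion model twice per step (conditional and unconditional), so removing the second pass roughly halves the per-step cost. A minimal sketch, assuming a generic `model(x, t, cond)` denoiser interface (hypothetical names, not the actual NAG implementation):

```python
import torch

def cfg_step(model, x: torch.Tensor, t: torch.Tensor, cond, uncond, scale: float = 7.5):
    """Classic classifier-free guidance: TWO forward passes per step."""
    eps_cond = model(x, t, cond)      # pass with the (positive) prompt
    eps_uncond = model(x, t, uncond)  # extra pass with the negative/empty prompt
    return eps_uncond + scale * (eps_cond - eps_uncond)

def nag_style_step(model, x: torch.Tensor, t: torch.Tensor, cond):
    """A NAG-style sampler keeps ONE forward pass and applies the negative
    prompt inside attention instead, which is where the ~2x speedup comes from."""
    return model(x, t, cond)
```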
https://redd.it/1lauxve
@rStableDiffusion
For some reason I don't see anyone talking about FusionX. It's a merge of the CausVid, AccVid, and MPS reward LoRAs (plus some other LoRAs), which together massively increase both the speed and quality of Wan2.1.
https://civitai.com/models/1651125
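
If you want to approximate this kind of merge yourself, diffusers' PEFT integration lets you stack several LoRAs and fuse them into the base weights. A rough sketch only; the repo ID, LoRA filenames, and blend weights below are placeholders, not the actual FusionX recipe:

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",  # placeholder base model
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load each LoRA under its own adapter name.
pipe.load_lora_weights("loras/", weight_name="causvid.safetensors", adapter_name="causvid")
pipe.load_lora_weights("loras/", weight_name="accvid.safetensors", adapter_name="accvid")
pipe.load_lora_weights("loras/", weight_name="mps_reward.safetensors", adapter_name="mps")

# Blend the adapters with per-adapter weights, then bake them into the
# base weights so inference pays no LoRA overhead.
pipe.set_adapters(["causvid", "accvid", "mps"], adapter_weights=[0.7, 0.5, 0.3])
pipe.fuse_lora()

video = pipe(prompt="a red fox running through snow", num_frames=33).frames[0]
```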

https://redd.it/1lan4m4
@rStableDiffusion
Video generation speed: Colab vs 4090 vs 4060

I've played with FramePack for a while, and it is versatile. My setups are a PC with a Ryzen 7500 and a 4090, and a Victus notebook with a Ryzen 8845HS and a 4060. Both run Windows 11. On Colab, I used this notebook by sagiodev.
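For anyone who hasn't tried it, the local setup roughly mirrors the upstream FramePack README (the sagiodev Colab notebook automates similar steps; treat this as an approximation):

```python
# Colab-style cells approximating the FramePack quick start.
!git clone https://github.com/lllyasviel/FramePack
%cd FramePack
!pip install -r requirements.txt
!python demo_gradio.py  # launches the Gradio UI; weights download on first run
```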

Here is some information on running FramePack I2V for 20-second 480p video generation.

PC 4090 (24GB VRAM, 128GB RAM): generation time around 25 min; utilization 50GB RAM, 20GB VRAM (16GB allocation in FramePack); total power consumption 450-525 W

Colab T4 (12GB VRAM, 12GB RAM): crashed during PyTorch sampling

Colab L4 (20GB VRAM, 50GB RAM): around 80 min; utilization 6GB RAM, 12GB VRAM (16GB allocation)

Mobile 4060 (8GB VRAM, 32GB RAM): around 90 min; utilization 31GB RAM, 6GB VRAM (6GB allocation)

These numbers stunned me. BTW, the iteration times differ: the L4's (2.8 s/it) is faster than the 4060's (7 s/it).

I'm surprised that, in total turnaround time, my mobile 4060 ran about as fast as the Colab L4! It seems the Colab L4 is a shared machine. I forgot to mention that the L4 also took 4 minutes to set up, installing and downloading models.

If you have a machine with a mobile 4060, it might be a free solution for video generation.

FYI.

PS: I copied the models into my Google Drive. Colab Pro allows terminal access, so you can copy files from Google Drive to Colab's local disk. Google Drive is a very slow disk, and you can't run an application directly from it. Copying files through the terminal is free (with the Pro subscription). Without Pro, you have to copy files by putting the shell command in a Colab notebook cell, and that counts against your runtime.
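For example, a minimal Colab cell for the copy step (the paths are placeholders; /content is ephemeral, so copy once per session):

```python
# Mount Drive, then copy model files to fast local storage.
from google.colab import drive
drive.mount('/content/drive')

!mkdir -p /content/models
!cp -r /content/drive/MyDrive/framepack_models/. /content/models/
```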

If you use a high-VRAM machine, like an A100, you could save on runtime fees by storing the model files in your Google Drive.

https://redd.it/1lb2bbb
@rStableDiffusion