ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Image-Free Timestep Distillation via Continuous-Time Consistency with Trajectory-Sampled Pairs

📝 Summary:
TBCM is a self-contained distillation method for diffusion models: it draws latent training pairs directly from the teacher model's own sampling trajectory. This removes the dependence on external image data, improving both efficiency and quality for few-step generation at reduced resource cost.
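A rough toy sketch of the idea in numpy (the names `teacher_step`, `student`, and the pairing scheme are hypothetical stand-ins, not the paper's actual code): latents are recorded along the teacher's own sampling path, and a consistency-style loss ties adjacent trajectory points together, so no external images are needed.

```python
import numpy as np

def teacher_trajectory(teacher_step, z, timesteps):
    """Roll the teacher sampler from pure noise, recording the latent at each step."""
    traj = [z]
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        z = teacher_step(z, t_cur, t_next)
        traj.append(z)
    return traj  # latents along the teacher's own sampling path

def distill_loss(student, traj, timesteps, rng):
    """Consistency-style loss on a trajectory-sampled pair: the student should
    map adjacent latents on the teacher's path to the same prediction."""
    i = rng.integers(0, len(traj) - 1)
    pred_a = student(traj[i], timesteps[i])
    pred_b = student(traj[i + 1], timesteps[i + 1])  # stop-gradient target in practice
    return float(np.mean((pred_a - pred_b) ** 2))
```

In a real implementation the second prediction would be a detached (stop-gradient) target and the student would be an actual network; here plain callables keep the sketch self-contained.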

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20410
• PDF: https://arxiv.org/pdf/2511.20410
• Github: https://github.com/hustvl/TBCM

==================================

For more data science resources:
https://t.me/DataScienceT

#DiffusionModels #ModelDistillation #GenerativeAI #AIResearch #MachineLearning
Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield

📝 Summary:
This study challenges the prevailing understanding of Distribution Matching Distillation (DMD) for text-to-image generation. It reveals that CFG augmentation is the primary driver of few-step distillation, while distribution matching acts as a regularizer. This new insight enables improved distillation method...
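The "spear and shield" decomposition can be sketched as follows (a hedged numpy illustration; `decoupled_dmd_loss` and `dm_penalty` are hypothetical names, with `dm_penalty` standing in for whatever distribution-matching term a real implementation computes):

```python
import numpy as np

def cfg_teacher_prediction(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: amplify the conditional direction of the teacher."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def decoupled_dmd_loss(student_pred, eps_cond, eps_uncond, w, dm_penalty, lam):
    # "Spear": regress the student onto the CFG-augmented teacher target,
    # which the paper identifies as the primary driver of few-step distillation.
    cfg_target = cfg_teacher_prediction(eps_cond, eps_uncond, w)
    spear = float(np.mean((student_pred - cfg_target) ** 2))
    # "Shield": the distribution-matching term acts as a regularizer.
    return spear + lam * dm_penalty
```

With `w = 1` the CFG prediction reduces to the plain conditional output; larger `w` pushes the target further along the conditional direction.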

🔹 Publication Date: Published on Nov 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22677
• PDF: https://arxiv.org/pdf/2511.22677
• Project Page: https://tongyi-mai.github.io/Z-Image-blog/
• Github: https://github.com/Tongyi-MAI/Z-Image/tree/main

==================================

#TextToImage #GenerativeAI #DiffusionModels #ModelDistillation #AIResearch
Flash-DMD: Towards High-Fidelity Few-Step Image Generation with Efficient Distillation and Joint Reinforcement Learning

📝 Summary:
Flash-DMD accelerates generative diffusion models via efficient timestep-aware distillation and joint reinforcement learning. The framework achieves faster convergence and high-fidelity few-step generation, and stabilizes RL training by using distillation as a regularizer, all with reduced computation...

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20549
• PDF: https://arxiv.org/pdf/2511.20549

==================================

#DiffusionModels #ImageGeneration #ReinforcementLearning #ModelDistillation #GenerativeAI