✨Stable Video Infinity: Infinite-Length Video Generation with Error Recycling
📝 Summary:
Stable Video Infinity (SVI) generates infinite-length videos with high consistency and controllable stories. It introduces Error-Recycling Fine-Tuning, which teaches the Diffusion Transformer to correct its self-generated errors, closing the training-test discrepancy.
🔹 Publication Date: Published on Oct 10
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.09212
• PDF: https://arxiv.org/pdf/2510.09212
• Project Page: https://stable-video-infinity.github.io/homepage/
• Github: https://github.com/vita-epfl/Stable-Video-Infinity
🔹 Models citing this paper:
• https://huggingface.co/vita-video-gen/svi-model
✨ Datasets citing this paper:
• https://huggingface.co/datasets/vita-video-gen/svi-benchmark
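The error-recycling idea above lends itself to a compact illustration. The sketch below is a hypothetical reconstruction, not the paper's code: a stand-in denoiser is fine-tuned while, with some probability, it conditions on its own earlier outputs drawn from an error bank instead of ground-truth context, so training matches the autoregressive test setting. All names, shapes, and the noising scheme are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for the Diffusion Transformer; one 3D conv keeps the sketch runnable."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Conv3d(2 * ch, ch, 3, padding=1)  # noisy clip + context, channel-concat
    def forward(self, noisy, t, context):               # the stub ignores the diffusion time t
        return self.net(torch.cat([noisy, context], dim=1))

class ErrorBank:
    """Buffer of the model's own (imperfect) clips, recycled as conditioning."""
    def __init__(self, capacity=256):
        self.clips, self.capacity = [], capacity
    def push(self, clip):
        self.clips = (self.clips + [clip.detach()])[-self.capacity:]
    def sample(self):
        return self.clips[torch.randint(len(self.clips), (1,)).item()]

def error_recycling_step(model, opt, gt_prev, gt_next, bank, p_recycle=0.5):
    """Condition on self-generated context with prob p_recycle, so the model
    learns to correct its own accumulated errors (train ~= test)."""
    recycled = len(bank.clips) > 0 and torch.rand(1).item() < p_recycle
    context = bank.sample() if recycled else gt_prev
    t = torch.rand(gt_next.shape[0]).view(-1, 1, 1, 1, 1)      # diffusion time per sample
    noisy = (1 - t) * gt_next + t * torch.randn_like(gt_next)  # simple linear noising
    pred = model(noisy, t, context)                            # predict the clean next clip
    loss = F.mse_loss(pred, gt_next)
    opt.zero_grad(); loss.backward(); opt.step()
    bank.push(pred)                                            # recycle this rollout later
    return loss.item()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
bank = ErrorBank()
prev = torch.randn(2, 3, 8, 32, 32)   # [batch, channels, frames, H, W]
nxt = torch.randn(2, 3, 8, 32, 32)
for _ in range(3):
    error_recycling_step(model, opt, prev, nxt, bank)
```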
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #AI #DiffusionModels #DeepLearning #ComputerVision
✨BulletTime: Decoupled Control of Time and Camera Pose for Video Generation
📝 Summary:
This paper presents a video diffusion framework that decouples scene dynamics from camera pose, enabling precise 4D control over time and viewpoint for high-quality video generation and outperforming prior models in controllability.
🔹 Publication Date: Published on Dec 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05076
• PDF: https://arxiv.org/pdf/2512.05076
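To make "decoupled control" concrete, here is a hedged sketch of conditioning in which scene time and camera pose are embedded independently, so either can be varied while the other is frozen. Dimensions and module names are illustrative assumptions, not the paper's interface.
```python
import torch
import torch.nn as nn

class DecoupledCondition(nn.Module):
    """Independent embeddings for scene time and camera pose."""
    def __init__(self, dim=64):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.pose_mlp = nn.Sequential(nn.Linear(12, dim), nn.SiLU(), nn.Linear(dim, dim))
    def forward(self, scene_time, cam_pose):
        # scene_time: [B, 1] normalized timestamp; cam_pose: [B, 3, 4] extrinsics
        return self.time_mlp(scene_time) + self.pose_mlp(cam_pose.flatten(1))

cond = DecoupledCondition()
frozen_t = torch.full((1, 1), 0.5)                     # freeze scene time...
for i in range(4):                                     # ...while sweeping the camera
    pose = (torch.eye(4)[:3] + 0.01 * i).unsqueeze(0)  # toy pose sweep
    c = cond(frozen_t, pose)                           # [1, 64] condition for the video model
```
Because the two signals never share an encoder, a "bullet time" effect is just a pose sweep at constant scene time, and a time-lapse from a fixed camera is the converse.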
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #DiffusionModels #GenerativeAI #ComputerVision #AICameraControl
✨EgoLCD: Egocentric Video Generation with Long Context Diffusion
📝 Summary:
EgoLCD addresses content drift in long egocentric video generation by combining a sparse long-term memory with an attention-based short-term memory and narrative prompting. It achieves state-of-the-art perceptual quality and temporal consistency, mitigating generative forgetting.
🔹 Publication Date: Published on Dec 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04515
• PDF: https://arxiv.org/pdf/2512.04515
• Project Page: https://aigeeksgroup.github.io/EgoLCD/
• Github: https://github.com/AIGeeksGroup/EgoLCD
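The summary names two memories; the sketch below shows one plausible arrangement, assuming a strided, sparse long-term store read by attention plus a dense short-term window. Class and parameter names are invented for illustration and are not the paper's code.
```python
import torch
import torch.nn.functional as F

class LongContextMemory:
    def __init__(self, short_window=8, long_stride=16, long_capacity=64):
        self.short, self.long, self.step = [], [], 0
        self.short_window = short_window
        self.long_stride, self.long_capacity = long_stride, long_capacity

    def write(self, feat):
        self.short = (self.short + [feat])[-self.short_window:]
        if self.step % self.long_stride == 0:        # sparse sampling into long-term store
            self.long = (self.long + [feat])[-self.long_capacity:]
        self.step += 1

    def read(self, query):
        """Attention readout over long-term entries, plus the raw short-term window."""
        mem = torch.stack(self.long)                 # [M, D]
        attn = F.softmax(query @ mem.T / mem.shape[-1] ** 0.5, dim=-1)
        return attn @ mem, torch.stack(self.short)

mem = LongContextMemory()
for _ in range(40):                                  # stream per-frame features
    mem.write(torch.randn(32))
long_ctx, short_ctx = mem.read(torch.randn(1, 32))   # context for generating the next chunk
```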
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#AI #VideoGeneration #DiffusionModels #ComputerVision #EgocentricVision
✨Generative Action Tell-Tales: Assessing Human Motion in Synthesized Videos
📝 Summary:
A new metric evaluates human actions in generated videos using a latent space learned from real-world actions, fusing skeletal geometry with appearance features. It significantly improves the assessment of temporal and visual correctness, outperforming existing methods and correlating better with human p...
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01803
• PDF: https://arxiv.org/pdf/2512.01803
• Project Page: https://xthomasbu.github.io/video-gen-evals/
✨ Datasets citing this paper:
• https://huggingface.co/datasets/dghadiya/TAG-Bench-Video
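As a toy rendering of the metric described above: embed actions into a latent space, fusing skeletal and appearance features, then score generated clips by distance to embeddings of real actions. The encoders here are untrained stand-ins and every name is an assumption; the real metric's latent space is learned.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionEmbedder(nn.Module):
    """Fuses skeletal geometry and appearance features into one action latent."""
    def __init__(self, skel_dim=34, app_dim=128, out=64):
        super().__init__()
        self.skel, self.app = nn.Linear(skel_dim, out), nn.Linear(app_dim, out)
        self.fuse = nn.Linear(2 * out, out)
    def forward(self, skeleton, appearance):
        z = torch.cat([self.skel(skeleton), self.app(appearance)], dim=-1)
        return F.normalize(self.fuse(z), dim=-1)

def action_score(gen_z, real_bank, k=5):
    """Mean distance to the k nearest real-action embeddings; lower = more plausible."""
    d = torch.cdist(gen_z, real_bank)                  # [N_gen, N_real]
    return d.topk(k, largest=False).values.mean().item()

emb = ActionEmbedder()
real = emb(torch.randn(100, 34), torch.randn(100, 128))  # bank of real actions
gen = emb(torch.randn(10, 34), torch.randn(10, 128))     # generated clips
print(action_score(gen, real))
```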
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #HumanMotion #ComputerVision #AIMetrics #DeepLearning
✨Deep Forcing: Training-Free Long Video Generation with Deep Sink and Participative Compression
📝 Summary:
Deep Forcing is a training-free method that enhances real-time video diffusion for high-quality, long-duration generation. It uses Deep Sink for stable context and Participative Compression for efficient KV cache pruning, achieving over 12x extrapolation and improved consistency.
🔹 Publication Date: Published on Dec 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05081
• PDF: https://arxiv.org/pdf/2512.05081
• Project Page: https://cvlab-kaist.github.io/DeepForcing/
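The two named tricks map onto familiar KV-cache mechanics. A minimal sketch under assumed semantics: "Deep Sink" is read here as always protecting the first few cache entries (attention sinks), and "Participative Compression" as evicting entries that receive little attention; the paper's actual criteria may differ.
```python
import torch

def prune_kv_cache(keys, values, attn_received, n_sink=4, budget=64):
    """keys/values: [T, D] cached entries; attn_received: [T] mean attention
    each cached token received from recent queries."""
    if keys.shape[0] <= budget:
        return keys, values
    scores = attn_received.clone()
    scores[:n_sink] = float('-inf')                    # sinks are kept unconditionally
    keep_rest = scores.topk(budget - n_sink).indices   # most "participative" tokens survive
    keep = torch.cat([torch.arange(n_sink), keep_rest]).sort().values
    return keys[keep], values[keep]                    # cache shrunk, temporal order preserved

K, V = torch.randn(200, 32), torch.randn(200, 32)
K2, V2 = prune_kv_cache(K, V, torch.rand(200))
print(K2.shape)                                        # torch.Size([64, 32])
```
Protecting sink tokens is the standard trick for keeping attention numerically stable when a model extrapolates far beyond its training window, which is consistent with the 12x extrapolation claim.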
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #DiffusionModels #TrainingFreeAI #DeepLearning #ComputerVision
✨Light-X: Generative 4D Video Rendering with Camera and Illumination Control
📝 Summary:
Light-X is a video generation framework for controllable rendering from monocular videos with joint viewpoint and illumination control. It disentangles geometry and lighting using synthetic data for robust training, outperforming prior methods in both aspects.
🔹 Publication Date: Published on Dec 4
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05115
• PDF: https://arxiv.org/pdf/2512.05115
• Project Page: https://lightx-ai.github.io/
• Github: https://github.com/TQTQliu/Light-X
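A hedged sketch of joint viewpoint and illumination conditioning. Representing the light as second-order spherical-harmonic coefficients is this sketch's assumption, not necessarily the paper's; the point is that disentangled conditions can be swapped independently.
```python
import torch
import torch.nn as nn

class ViewLightCondition(nn.Module):
    """Separate encoders for camera extrinsics and lighting, concatenated as condition."""
    def __init__(self, dim=64):
        super().__init__()
        self.view = nn.Linear(12, dim)     # 3x4 camera extrinsics, flattened
        self.light = nn.Linear(27, dim)    # 9 SH coefficients x RGB (assumed representation)
    def forward(self, extrinsics, sh_coeffs):
        return torch.cat([self.view(extrinsics.flatten(1)),
                          self.light(sh_coeffs.flatten(1))], dim=-1)

cond = ViewLightCondition()
cam = torch.eye(4)[:3].unsqueeze(0)        # fixed viewpoint
noon, dusk = torch.randn(1, 9, 3), torch.randn(1, 9, 3)
relit_a = cond(cam, noon)                  # same view under two lighting conditions:
relit_b = cond(cam, dusk)                  # only the light half of the condition changes
```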
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #ComputerVision #AI #NeuralRendering #GenerativeAI
✨ProPhy: Progressive Physical Alignment for Dynamic World Simulation
📝 Summary:
ProPhy is a two-stage framework that enhances video generation by explicitly incorporating physics-aware conditioning and anisotropic generation. It uses a Mixture-of-Physics-Experts mechanism to extract fine-grained physical priors, improving physical consistency and realism in dynamic world simulation.
🔹 Publication Date: Published on Dec 5
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05564
• PDF: https://arxiv.org/pdf/2512.05564
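The Mixture-of-Physics-Experts mechanism can be sketched as a standard soft-routing MoE over per-phenomenon experts; the expert names and routing scheme below are illustrative guesses, not the paper's design.
```python
import torch
import torch.nn as nn

class MixtureOfPhysicsExperts(nn.Module):
    """Router weighs per-phenomenon experts to produce a physics-aware condition."""
    def __init__(self, dim=64, experts=('rigid', 'fluid', 'cloth')):
        super().__init__()
        self.router = nn.Linear(dim, len(experts))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
            for _ in experts)

    def forward(self, scene_feat):
        w = torch.softmax(self.router(scene_feat), dim=-1)            # [B, E] routing weights
        outs = torch.stack([e(scene_feat) for e in self.experts], 1)  # [B, E, D]
        return (w.unsqueeze(-1) * outs).sum(1)                        # physics prior [B, D]

moe = MixtureOfPhysicsExperts()
prior = moe(torch.randn(2, 64))    # fed to the generator as physics-aware conditioning
```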
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #PhysicsAI #DynamicSimulation #DeepLearning #ComputerVision
✨UnityVideo: Unified Multi-Modal Multi-Task Learning for Enhancing World-Aware Video Generation
📝 Summary:
UnityVideo is a unified framework enhancing video generation by integrating multiple modalities and training paradigms. It uses dynamic noising and a modality switcher for comprehensive world understanding. This improves video quality, consistency, and zero-shot generalization to new data.
🔹 Publication Date: Published on Dec 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.07831
• PDF: https://arxiv.org/pdf/2512.07831
• Project Page: https://jackailab.github.io/Projects/UnityVideo/
• Github: https://github.com/dvlab-research/UnityVideo
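A loose sketch of how a "modality switcher" with "dynamic noising" might look: each modality gets a learned tag added to its features and its own noise-level range, so one backbone can train across modalities and tasks. The modality set, ranges, and mechanism are assumptions made purely for illustration.
```python
import torch
import torch.nn as nn

MODALITIES = {'rgb': 0, 'depth': 1, 'pose': 2}
NOISE_RANGE = {'rgb': (0.0, 1.0), 'depth': (0.2, 1.0), 'pose': (0.5, 1.0)}  # assumed

class ModalitySwitcher(nn.Module):
    """Adds a learned per-modality tag so a shared backbone can tell inputs apart."""
    def __init__(self, dim=64):
        super().__init__()
        self.tag = nn.Embedding(len(MODALITIES), dim)
    def forward(self, feat, modality):
        return feat + self.tag(torch.tensor(MODALITIES[modality]))

def dynamic_noise_level(modality, batch):
    """Sample the diffusion time from a per-modality range."""
    lo, hi = NOISE_RANGE[modality]
    return lo + (hi - lo) * torch.rand(batch)

sw = ModalitySwitcher()
x = sw(torch.randn(2, 64), 'depth')       # tagged features for the shared backbone
t = dynamic_noise_level('depth', 2)       # modality-specific noising schedule
```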
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #MultimodalAI #GenerativeAI #DeepLearning #AIResearch
✨MIND-V: Hierarchical Video Generation for Long-Horizon Robotic Manipulation with RL-based Physical Alignment
📝 Summary:
MIND-V generates long-horizon, physically plausible robotic manipulation videos. This hierarchical framework uses semantic reasoning and an RL-based physical alignment strategy to synthesize robust, coherent actions, addressing data scarcity.
🔹 Publication Date: Published on Dec 7
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.06628
• PDF: https://arxiv.org/pdf/2512.06628
• Github: https://github.com/Richard-Zhang-AI/MIND-V
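RL-based physical alignment can be sketched as reinforcing samples that a physics critic scores highly. Below is a REINFORCE-style loop with stub generator and critic; the paper's actual models, reward design, and objective are not reproduced here.
```python
import torch
import torch.nn as nn
from torch.distributions import Normal

policy = nn.Linear(16, 16)                    # stub: plan latent -> video latent mean
log_sigma = nn.Parameter(torch.zeros(16))
critic = lambda v: -v.pow(2).mean(dim=-1)     # stub physics reward (higher = more plausible)
opt = torch.optim.Adam(list(policy.parameters()) + [log_sigma], lr=1e-3)

for step in range(50):
    z = torch.randn(64, 16)
    dist = Normal(policy(z), log_sigma.exp())
    video = dist.sample()                     # sampled rollout (non-differentiable)
    reward = critic(video)                    # [64] physical-plausibility scores
    adv = reward - reward.mean()              # simple baseline subtraction
    # REINFORCE surrogate: raise the log-probability of high-reward rollouts
    loss = -(adv * dist.log_prob(video).sum(-1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```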
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#Robotics #VideoGeneration #ReinforcementLearning #AI #MachineLearning
✨OneStory: Coherent Multi-Shot Video Generation with Adaptive Memory
📝 Summary:
OneStory generates coherent multi-shot videos by modeling global cross-shot context. It uses a Frame Selection module and an Adaptive Conditioner for next-shot generation, leveraging pretrained models and a new dataset, and achieves state-of-the-art narrative coherence for long-form video storytelling.
🔹 Publication Date: Published on Dec 8
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.07802
• PDF: https://arxiv.org/pdf/2512.07802
• Project Page: https://zhaochongan.github.io/projects/OneStory/
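One plausible reading of the Frame Selection module, sketched under assumptions: before generating the next shot, pick the past frames whose features best match the next shot's prompt embedding and feed them as conditioning memory. Similarity-based selection is this sketch's guess at the mechanism.
```python
import torch
import torch.nn.functional as F

def select_frames(frame_feats, prompt_feat, k=4):
    """frame_feats: [T, D] features of all previously generated frames;
    prompt_feat: [D] embedding of the next shot's text prompt."""
    sims = F.cosine_similarity(frame_feats, prompt_feat.unsqueeze(0), dim=-1)
    idx = sims.topk(k).indices.sort().values      # keep chronological order
    return frame_feats[idx], idx

past = torch.randn(120, 64)                       # frames from shots 1..k
prompt = torch.randn(64)                          # e.g. "the hero enters the cave"
memory, picked = select_frames(past, prompt)
print(picked)                                     # indices of frames reused as context
```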
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#VideoGeneration #AI #DeepLearning #ComputerVision #GenerativeAI