ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
MorphAny3D: Unleashing the Power of Structured Latent in 3D Morphing

📝 Summary:
MorphAny3D offers a training-free framework for high-quality 3D morphing, even across categories. It leverages Structured Latent representations with novel attention mechanisms (MCA, TFSA) for structural coherence and temporal consistency. This achieves state-of-the-art results and supports advance...
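
💡 Rough intuition only: training-free morphing methods typically interpolate between the latent codes of the source and target shapes. The sketch below spherically interpolates two latent vectors; the structured latent encoder and the MCA/TFSA attention from the paper are not reproduced here, and the random latents are stand-ins.

```python
import numpy as np

def slerp(z_a: np.ndarray, z_b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation between two flattened latent codes."""
    a, b = z_a.ravel(), z_b.ravel()
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:                       # nearly parallel: fall back to linear blend
        return (1 - t) * z_a + t * z_b
    w_a = np.sin((1 - t) * omega) / np.sin(omega)
    w_b = np.sin(t * omega) / np.sin(omega)
    return (w_a * a + w_b * b).reshape(z_a.shape)

# Hypothetical usage: z_src / z_tgt would come from a structured latent encoder;
# here they are random placeholders.
rng = np.random.default_rng(0)
z_src, z_tgt = rng.standard_normal(1024), rng.standard_normal(1024)
morph_sequence = [slerp(z_src, z_tgt, t) for t in np.linspace(0.0, 1.0, 16)]
```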

🔹 Publication Date: Published on Jan 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00204
• PDF: https://arxiv.org/pdf/2601.00204
• Project Page: https://xiaokunsun.github.io/MorphAny3D.github.io
• Github: https://github.com/XiaokunSun/MorphAny3D

==================================

For more data science resources:
https://t.me/DataScienceT

#3DMorphing #ComputerGraphics #DeepLearning #StructuredLatent #AIResearch
Nested Learning: The Illusion of Deep Learning Architectures

📝 Summary:
Nested Learning (NL) models ML as nested optimization problems. It enables expressive algorithms for higher-order learning and continual adaptation, introducing optimizers, self-modifying models, and continuum memory systems.
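
💡 A minimal toy of the nested-optimization view, purely illustrative (the paper's optimizers and continuum memory systems go far beyond this): an inner loop adapts fast weights on a quadratic task, and an outer loop updates slow weights through the unrolled inner updates.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(4)                       # slow / outer parameters
x_train, x_val = rng.standard_normal(4), rng.standard_normal(4)

for outer_step in range(100):
    w = theta.copy()                                 # inner learner starts from slow weights
    for inner_step in range(5):                      # inner level: minimize 0.5*||w - theta - x_train||^2
        grad_w = w - theta - x_train
        w = w - 0.1 * grad_w
    # For this inner rule dw/dtheta is the identity, so the outer gradient of
    # 0.5*||w - x_val||^2 with respect to theta is simply (w - x_val).
    theta = theta - 0.05 * (w - x_val)

print(np.round(w - x_val, 3))                        # adapted weights approach the outer target
```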

🔹 Publication Date: Published on Dec 31, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.24695
• PDF: https://arxiv.org/pdf/2512.24695

==================================

For more data science resources:
https://t.me/DataScienceT

#NestedLearning #MachineLearning #DeepLearning #Optimization #AI
InfoSynth: Information-Guided Benchmark Synthesis for LLMs

📝 Summary:
InfoSynth automatically generates novel and diverse coding benchmarks for LLMs. It uses information-theoretic metrics and genetic algorithms to create scalable, self-verifying problems, avoiding manual curation effort and training-data contamination.
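
💡 A toy sketch of the general recipe, evolving candidates under an information-style fitness. The bit-string encoding, the entropy-plus-novelty score, and all hyperparameters here are assumptions for illustration; InfoSynth's actual metrics, problem encodings, and verification are defined in the paper.

```python
import random
from math import log2

def entropy(bits):
    p = sum(bits) / len(bits)
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def novelty(bits, archive):
    if not archive:
        return 1.0
    return min(sum(a != b for a, b in zip(bits, ref)) for ref in archive) / len(bits)

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

random.seed(0)
length, pop_size = 32, 20
population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
archive = []

for generation in range(50):
    scored = sorted(population, key=lambda c: entropy(c) + novelty(c, archive), reverse=True)
    archive.append(scored[0])                        # keep the most "informative" candidate
    parents = scored[: pop_size // 2]
    population = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]

print(f"archived {len(archive)} candidate benchmark seeds")
```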

🔹 Publication Date: Published on Jan 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00575
• PDF: https://arxiv.org/pdf/2601.00575
• Project Page: https://ishirgarg.github.io/infosynth_web/
• Github: https://github.com/ishirgarg/infosynth

==================================

For more data science resources:
https://t.me/DataScienceT

#LLM #AI #Benchmarking #GenerativeAI #DeepLearning
OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions

📝 Summary:
OmniVCus introduces a system for feedforward multi-subject video customization with multimodal controls. It proposes a data pipeline, VideoCus-Factory, and a diffusion Transformer framework with novel embedding mechanisms. This enables more subjects and precise editing, significantly outperformin...
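
💡 Generic conditioning sketch, not OmniVCus's specific embedding mechanism: one common way to feed multiple subject images into a diffusion Transformer is to embed each subject, tag it with a learned subject-index embedding, and concatenate the result with the noisy video latent tokens before the attention blocks. Shapes and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_frames, tokens_per_frame, n_subjects = 64, 4, 16, 2

video_tokens = rng.standard_normal((n_frames * tokens_per_frame, d_model))   # noisy latents
subject_embeds = rng.standard_normal((n_subjects, d_model))                  # from an image encoder
subject_index_embeds = rng.standard_normal((n_subjects, d_model))            # learned per-subject tags

condition_tokens = subject_embeds + subject_index_embeds
sequence = np.concatenate([condition_tokens, video_tokens], axis=0)          # joint token sequence
print(sequence.shape)   # (n_subjects + n_frames * tokens_per_frame, d_model)
```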

🔹 Publication Date: Published on Jun 29, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.23361
• PDF: https://arxiv.org/pdf/2506.23361
• Project Page: https://caiyuanhao1998.github.io/project/OmniVCus/
• Github: https://github.com/caiyuanhao1998/Open-OmniVCus

🔹 Models citing this paper:
https://huggingface.co/CaiYuanhao/OmniVCus

🔹 Datasets citing this paper:
https://huggingface.co/datasets/CaiYuanhao/OmniVCus
https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Test
https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Train

==================================

For more data science resources:
https://t.me/DataScienceT

#VideoGeneration #DiffusionModels #MultimodalAI #DeepLearning #ComputerVision
Bitnet.cpp: Efficient Edge Inference for Ternary LLMs

📝 Summary:
Bitnet.cpp enhances edge inference for ternary LLMs using a novel mixed-precision matrix multiplication library. This system incorporates Ternary Lookup Tables and Int2 with a Scale for efficient, lossless inference, achieving up to a 6.25x speed increase over baselines.
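
💡 A pure-Python toy of the ternary lookup-table idea: for each small group of activations, precompute the partial sums for every possible ternary weight pattern, then gather by index instead of multiplying. The real kernels are optimized C++ in bitnet.cpp; the group size, packing scheme, and dtypes below are illustrative assumptions.

```python
import numpy as np
from itertools import product

G = 4
PATTERNS = np.array(list(product((-1, 0, 1), repeat=G)), dtype=np.int8)   # (3**G, G)

def pack_ternary(w_group):
    """Map a ternary weight group to its row index in PATTERNS (base-3 digits)."""
    idx = 0
    for w in w_group:
        idx = idx * 3 + (int(w) + 1)
    return idx

def tl_matvec(W, x):
    """y = W @ x with ternary W of shape (out, in), in divisible by G."""
    out_dim, in_dim = W.shape
    x_groups = x.reshape(in_dim // G, G)
    table = PATTERNS.astype(np.float32) @ x_groups.T        # partial sums: (3**G, in//G)
    y = np.zeros(out_dim, dtype=np.float32)
    for o in range(out_dim):
        w_groups = W[o].reshape(in_dim // G, G)
        idx = [pack_ternary(g) for g in w_groups]
        y[o] = table[idx, np.arange(in_dim // G)].sum()     # gather instead of multiply
    return y

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(8, 16)).astype(np.int8)
x = rng.standard_normal(16).astype(np.float32)
assert np.allclose(tl_matvec(W, x), W.astype(np.float32) @ x, atol=1e-4)    # lossless vs. dense
```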

🔹 Publication Date: Published on Feb 17, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2502.11880
• PDF: https://arxiv.org/pdf/2502.11880
• Github: https://github.com/microsoft/BitNet/tree/paper

==================================

For more data science resources:
https://t.me/DataScienceT

#LLM #EdgeAI #MachineLearning #DeepLearning #AI
BitNet b1.58 2B4T Technical Report

📝 Summary:
BitNet b1.58 2B4T is the first open-source 1-bit Large Language Model with 2 billion parameters. It matches full-precision LLM performance while offering significant improvements in computational efficiency, such as reduced memory footprint and energy consumption. The model weights are openly released for research.
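
💡 The absmean ternary weight quantizer used in the BitNet b1.58 line of work is simple to sketch: scale by the mean absolute value, round, and clip to {-1, 0, +1}. This is an illustrative numpy sketch, not the released training or inference code.

```python
import numpy as np

def quantize_weights_ternary(W: np.ndarray, eps: float = 1e-5):
    gamma = np.abs(W).mean() + eps                 # per-tensor absmean scale
    W_q = np.clip(np.rint(W / gamma), -1, 1)       # ternary values {-1, 0, +1}
    return W_q.astype(np.int8), gamma

def dequantize(W_q: np.ndarray, gamma: float) -> np.ndarray:
    return W_q.astype(np.float32) * gamma

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)
W_q, gamma = quantize_weights_ternary(W)
print(W_q)                                             # entries are only -1, 0, or +1
print(np.mean(np.abs(W - dequantize(W_q, gamma))))     # mean quantization error
```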

🔹 Publication Date: Published on Apr 16, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.12285
• PDF: https://arxiv.org/pdf/2504.12285
• Github: https://github.com/microsoft/bitnet

🔹 Models citing this paper:
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16

🔹 Spaces citing this paper:
https://huggingface.co/spaces/suayptalha/Chat-with-Bitnet-b1.58-2B-4T
https://huggingface.co/spaces/aizip-dev/SLM-RAG-Arena
https://huggingface.co/spaces/Tonic/Native_1-bit_LLM

==================================

For more data science resources:
https://t.me/DataScienceT

#LLM #AI #Quantization #OpenSourceAI #DeepLearning
BitNet Distillation

📝 Summary:
BitNet Distillation fine-tunes LLMs to 1.58-bit precision using SubLN, attention distillation, and continual pre-training. It achieves comparable performance to full-precision models, offering 10x memory savings and 2.65x faster inference.
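
💡 A generic attention-distillation term, as one common formulation: KL divergence between the teacher's and student's attention distributions, averaged over heads and query positions. The paper's exact objective, which relations are distilled, and the SubLN placement are specified in the report; this numpy sketch is only illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_probs(q, k):
    """q, k: (heads, seq, dim) -> attention distributions of shape (heads, seq, seq)."""
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1)

def attn_kl_loss(teacher_probs, student_probs, eps=1e-9):
    kl = teacher_probs * (np.log(teacher_probs + eps) - np.log(student_probs + eps))
    return kl.sum(axis=-1).mean()                  # sum over keys, mean over heads and queries

rng = np.random.default_rng(0)
q_t, k_t = rng.standard_normal((2, 8, 16)), rng.standard_normal((2, 8, 16))   # teacher
q_s, k_s = rng.standard_normal((2, 8, 16)), rng.standard_normal((2, 8, 16))   # ternary student
loss = attn_kl_loss(attention_probs(q_t, k_t), attention_probs(q_s, k_s))
print(float(loss))
```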

🔹 Publication Date: Published on Oct 15, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.13998
• PDF: https://arxiv.org/pdf/2510.13998
• Github: https://github.com/microsoft/BitNet

==================================

For more data science resources:
https://t.me/DataScienceT

#LLM #Quantization #ModelCompression #DeepLearning #AI
InfiniteVGGT: Visual Geometry Grounded Transformer for Endless Streams

📝 Summary:
InfiniteVGGT enables continuous 3D visual geometry understanding for infinite streams. It uses a causal transformer with adaptive rolling memory for long-term stability, outperforming existing streaming methods. A new Long3D benchmark is introduced for rigorous evaluation of such systems.
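
💡 A toy rolling-memory buffer for streaming attention: keep a bounded set of past key/value tokens so per-frame cost stays constant on an endless stream. FIFO eviction below is a simplification; the paper's adaptive rolling-memory policy is its own contribution, and all shapes here are illustrative.

```python
from collections import deque
import numpy as np

class RollingMemory:
    def __init__(self, capacity: int):
        self.keys = deque(maxlen=capacity)       # oldest entries fall off automatically
        self.values = deque(maxlen=capacity)

    def append(self, k: np.ndarray, v: np.ndarray):
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q: np.ndarray) -> np.ndarray:
        """Single-query attention over the memory (softmax-weighted values)."""
        K, V = np.stack(self.keys), np.stack(self.values)       # (mem, dim)
        scores = K @ q / np.sqrt(q.shape[-1])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V

rng = np.random.default_rng(0)
memory = RollingMemory(capacity=128)
for frame in range(1000):                        # endless stream of frame tokens
    token = rng.standard_normal(64)
    memory.append(token, token)
out = memory.attend(rng.standard_normal(64))
print(out.shape, len(memory.keys))               # (64,) 128 -- memory stays bounded
```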

🔹 Publication Date: Published on Jan 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02281
• PDF: https://arxiv.org/pdf/2601.02281
• Github: https://github.com/AutoLab-SAI-SJTU/InfiniteVGGT

==================================

For more data science resources:
https://t.me/DataScienceT

#VisualGeometry #3DVision #Transformers #StreamingAI #DeepLearning
DiffProxy: Multi-View Human Mesh Recovery via Diffusion-Generated Dense Proxies

📝 Summary:
DiffProxy generates multi-view consistent human proxies using diffusion models to improve human mesh recovery. This bridges synthetic training and real-world generalization, achieving state-of-the-art performance on real benchmarks.

🔹 Publication Date: Published on Jan 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02267
• PDF: https://arxiv.org/pdf/2601.02267
• Project Page: https://wrk226.github.io/DiffProxy.html
• Github: https://github.com/wrk226/DiffProxy

==================================

For more data science resources:
https://t.me/DataScienceT

#HumanMeshRecovery #DiffusionModels #ComputerVision #DeepLearning #AI
CPPO: Contrastive Perception for Vision Language Policy Optimization

📝 Summary:
CPPO improves vision-language model fine-tuning by detecting perception tokens through entropy shifts. It then applies a Contrastive Perception Loss to enhance multimodal reasoning, outperforming prior methods more efficiently.
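
💡 A hypothetical sketch of the entropy-shift idea: compare per-token predictive entropy with and without the visual input, and flag the tokens whose entropy drops most as "perception" tokens (the contrastive loss would then be applied to those). Ablating the image, the top-k cutoff, and the shapes below are assumptions, not the paper's exact recipe.

```python
import numpy as np

def token_entropy(logits):
    """logits: (seq, vocab) -> per-token entropy of the softmax distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def perception_tokens(logits_with_image, logits_without_image, top_k=3):
    shift = token_entropy(logits_without_image) - token_entropy(logits_with_image)
    return np.argsort(shift)[::-1][:top_k]           # largest entropy drop first

rng = np.random.default_rng(0)
seq_len, vocab = 10, 100
logits_img = rng.standard_normal((seq_len, vocab))
logits_img[4] *= 5.0                                 # token 4 is confident only with the image
logits_noimg = rng.standard_normal((seq_len, vocab))
print(perception_tokens(logits_img, logits_noimg))   # token 4 should rank highly
```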

🔹 Publication Date: Published on Jan 1

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.00501
• PDF: https://arxiv.org/pdf/2601.00501

==================================

For more data science resources:
https://t.me/DataScienceT

#VisionLanguageModels #MultimodalAI #ContrastiveLearning #DeepLearning #AIResearch