ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
On the Non-decoupling of Supervised Fine-tuning and Reinforcement Learning in Post-training

📝 Summary:
Supervised fine-tuning (SFT) and reinforcement learning (RL) in large language model post-training cannot be decoupled. Separating them into sequential stages causes performance degradation: RL training increases the SFT loss, and SFT training lowers the RL reward.
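
🔹 Code sketch: The decoupling claim can be read as a statement about a joint objective. Below is a minimal PyTorch sketch with a toy linear policy (my own illustration, not the paper's training code) of a single update that optimizes the SFT cross-entropy and a REINFORCE-style reward term together; x_sft, y_sft, x_rl, and reward_fn are hypothetical placeholders.

import torch
import torch.nn.functional as F

# A toy linear "policy" over a small vocabulary stands in for an LLM.
vocab, dim = 100, 32
policy = torch.nn.Linear(dim, vocab)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def joint_step(x_sft, y_sft, x_rl, reward_fn, lam=0.5):
    # SFT term: cross-entropy toward demonstration tokens.
    sft_loss = F.cross_entropy(policy(x_sft), y_sft)

    # RL term: REINFORCE on sampled actions, scored by an external reward function.
    dist = torch.distributions.Categorical(logits=policy(x_rl))
    actions = dist.sample()
    rl_loss = -(reward_fn(actions) * dist.log_prob(actions)).mean()

    # Optimizing the sum couples the two signals instead of staging them;
    # a step that lowers rl_loss can raise sft_loss, and vice versa.
    loss = sft_loss + lam * rl_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return sft_loss.item(), rl_loss.item()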

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07389
• PDF: https://arxiv.org/pdf/2601.07389

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Ministral 3

📝 Summary:
Ministral 3 is a series of parameter-efficient dense language models available in three sizes (3B, 8B, and 14B), with three variants each. Designed for compute-constrained applications, they are trained via Cascade Distillation and include image-understanding capabilities.
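
🔹 Code sketch: The summary only names Cascade Distillation, so as a reference point here is the standard single-stage knowledge-distillation loss (a generic sketch, not the paper's exact recipe) that such a cascade would presumably chain across decreasing model sizes.

import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-scaled teacher and student distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce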

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08584
• PDF: https://arxiv.org/pdf/2601.08584

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
End-to-End Video Character Replacement without Structural Guidance

📝 Summary:
MoCha enables controllable video character replacement from a single frame mask through condition-aware RoPE and a comprehensive data construction pipeline with specialized datasets.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08587
• PDF: https://arxiv.org/pdf/2601.08587

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
JudgeRLVR: Judge First, Generate Second for Efficient Reasoning

📝 Summary:
Reinforcement learning with verifiable rewards is enhanced through a judge-then-generate paradigm that improves both efficiency and accuracy in mathematical problem-solving.
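
🔹 Code sketch: A minimal "judge first, generate second" loop, assuming hypothetical llm_judge and llm_generate callables; this illustrates the paradigm named in the summary, not JudgeRLVR's implementation.

def judge_then_generate(problem, candidate_plans, llm_judge, llm_generate):
    # Stage 1: cheaply score short candidate plans instead of fully solving each one.
    scored = []
    for plan in candidate_plans:
        verdict = llm_judge(f"Problem: {problem}\nPlan: {plan}\nRate this plan from 0 to 10.")
        scored.append((float(verdict), plan))
    best_score, best_plan = max(scored)

    # Stage 2: spend the full generation budget only on the highest-ranked plan.
    return llm_generate(f"Problem: {problem}\nFollow this plan: {best_plan}\nSolve step by step.")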

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08468
• PDF: https://arxiv.org/pdf/2601.08468

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
The Confidence Dichotomy: Analyzing and Mitigating Miscalibration in Tool-Use Agents

📝 Summary:
Tool-integrated language model agents exhibit different calibration behaviors based on tool type, and a reinforcement learning framework improves both task accuracy and reliable uncertainty estimation.
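
🔹 Code sketch: "Calibration" here is usually quantified with something like expected calibration error; below is the standard ECE computation (a generic sketch, not the authors' evaluation code), where confidences are the agent's self-reported probabilities and correct flags whether each prediction was right.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by confidence and average |accuracy - confidence|,
    # weighted by the fraction of samples in each bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece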

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07264
• PDF: https://arxiv.org/pdf/2601.07264

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
ArenaRL: Scaling RL for Open-Ended Agents via Tournament-based Relative Ranking

📝 Summary:
Reinforcement learning for large language model agents suffers from discrimination collapse in open-ended tasks due to pointwise scalar scoring, which ArenaRL addresses through tournament-based relative ranking.
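
🔹 Code sketch: One way to turn a tournament into rewards is to score each response by its win rate from round-robin pairwise comparisons; the sketch below illustrates that idea with a hypothetical pairwise_judge comparator and is not ArenaRL's actual ranking procedure.

from itertools import combinations

def tournament_rewards(responses, pairwise_judge):
    # Round-robin comparisons: each response is rewarded by its win rate
    # rather than by a pointwise scalar score.
    wins = {i: 0 for i in range(len(responses))}
    for i, j in combinations(range(len(responses)), 2):
        winner = i if pairwise_judge(responses[i], responses[j]) == 0 else j
        wins[winner] += 1
    n_matches = max(1, len(responses) - 1)
    return [wins[i] / n_matches for i in range(len(responses))]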

🔹 Publication Date: Published on Jan 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.06487
• PDF: https://arxiv.org/pdf/2601.06487
• Github: https://github.com/Alibaba-NLP/qqr

Datasets citing this paper:
https://huggingface.co/datasets/Alibaba-NLP/Open-Travel
https://huggingface.co/datasets/Alibaba-NLP/Open-DeepResearch

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Motion Attribution for Video Generation

📝 Summary:
Motive is a gradient-based data attribution framework that identifies influential video clips for motion improvement in text-to-video models through motion-weighted loss masking.
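
🔹 Code sketch: Gradient-based data attribution typically scores a training example by how well its loss gradient aligns with the gradient of a query example. The TracIn-style sketch below (my illustration, not Motive's code) assumes a hypothetical per_token_loss that returns an element-wise loss tensor and a motion_mask of matching shape that up-weights motion-relevant positions.

import torch

def influence_score(model, per_token_loss, train_clip, query_clip, motion_mask):
    def masked_grad(batch):
        # Gradient of the motion-weighted loss, flattened across all parameters.
        model.zero_grad()
        loss = (per_token_loss(model, batch) * motion_mask).mean()
        loss.backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()
                          if p.grad is not None])

    # A training clip is "influential" if its gradient points the same way
    # as the gradient that would improve the query clip.
    g_train = masked_grad(train_clip)
    g_query = masked_grad(query_clip)
    return torch.dot(g_train, g_query).item()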

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08828
• PDF: https://arxiv.org/pdf/2601.08828
• Project Page: https://research.nvidia.com/labs/sil/projects/MOTIVE/

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
SnapGen++: Unleashing Diffusion Transformers for Efficient High-Fidelity Image Generation on Edge Devices

📝 Summary:
An efficient diffusion transformer framework for mobile and edge devices that maintains high generation quality while reducing computational costs through a compact architecture, elastic training, and knowledge distillation.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08303
• PDF: https://arxiv.org/pdf/2601.08303

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Aligning Text, Code, and Vision: A Multi-Objective Reinforcement Learning Framework for Text-to-Visualization

📝 Summary:
A reinforcement learning framework for text-to-visualization generation that improves chart quality and code execution by optimizing multiple objectives using post-execution feedback.
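
🔹 Code sketch: A multi-objective, post-execution reward can be as simple as running the generated plotting code and blending several scores into one scalar. The sketch below is illustrative only; execute, chart_quality, and spec_match are hypothetical components, not the paper's reward model.

def text2vis_reward(code, execute, chart_quality, spec_match, w=(0.4, 0.3, 0.3)):
    result = execute(code)            # sandboxed execution of the generated code
    if not result.ok:                 # code that fails to run earns no partial credit
        return 0.0
    return (w[0] * 1.0                            # executability
            + w[1] * chart_quality(result.figure)  # visual/chart quality score
            + w[2] * spec_match(result.figure))    # faithfulness to the request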

🔹 Publication Date: Published on Jan 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.04582
• PDF: https://arxiv.org/pdf/2601.04582

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory

📝 Summary:
VLingNav enhances embodied navigation through linguistic-driven cognition with adaptive reasoning and visual-assisted memory, achieving state-of-the-art performance and zero-shot transfer to real robots.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08665
• PDF: https://arxiv.org/pdf/2601.08665
• Project Page: https://wsakobe.github.io/VLingNav-web/

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
MemGovern: Enhancing Code Agents through Learning from Governed Human Experiences

📝 Summary:
The MemGovern framework transforms unstructured GitHub data into structured experiential memory for autonomous software engineering agents, improving bug resolution rates through enhanced experience retrieval.
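
🔹 Code sketch: A structured experiential memory needs a record schema and a retrieval step. The sketch below is a hypothetical illustration of that shape (not MemGovern's actual format), with embed standing in for any text-embedding function.

from dataclasses import dataclass

@dataclass
class Experience:
    symptom: str        # how the bug manifested
    root_cause: str     # what was actually wrong
    fix_pattern: str    # the edit strategy that resolved it
    repo_context: str   # language, framework, subsystem

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def retrieve(memory, bug_report, embed, top_k=3):
    # Embed the new bug report and return the most similar past experiences.
    q = embed(bug_report)
    ranked = sorted(memory, key=lambda e: -cosine(q, embed(e.symptom)))
    return ranked[:top_k]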

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.06789
• PDF: https://arxiv.org/pdf/2601.06789
• Github: https://github.com/QuantaAlpha/MemGovern

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale

📝 Summary:
Large reasoning models enable scalable multi-turn dialogue generation through automated task-oriented simulation and user-oriented behavioral modeling for enhanced human-agent interaction datasets.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08225
• PDF: https://arxiv.org/pdf/2601.08225

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Solar Open Technical Report

📝 Summary:
Solar Open presents a 102B-parameter bilingual Mixture-of-Experts language model that addresses data scarcity in underserved languages through synthetic data generation and progressive curriculum coordination.
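
🔹 Code sketch: The Mixture-of-Experts part of the summary refers to a standard routing mechanism; here is a generic top-k MoE forward pass (a sketch of the mechanism, not Solar Open's architecture), where experts is a list of feed-forward modules and router is a Linear(dim, num_experts).

import torch
import torch.nn.functional as F

def moe_forward(x, experts, router, k=2):
    # x: [tokens, dim]. Each token is routed to its top-k experts.
    gate = F.softmax(router(x), dim=-1)                    # [tokens, num_experts]
    weights, idx = torch.topk(gate, k, dim=-1)             # [tokens, k]
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over chosen experts
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out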

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07022
• PDF: https://arxiv.org/pdf/2601.07022

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
ShowUI-π: Flow-based Generative Models as GUI Dexterous Hands

📝 Summary:
ShowUI-π is the first flow-based generative model for GUI agents, unifying discrete clicks and continuous drag actions. It achieves smooth, stable trajectories and significantly outperforms prior agents on ScreenDrag, a new benchmark for GUI drag capabilities.
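
🔹 Code sketch: "Flow-based" generation of drag actions can be illustrated with the generic conditional flow-matching objective: learn a velocity field that transports noise onto a 2D cursor trajectory. This is a sketch of the general technique, not ShowUI-π's model; velocity_net and cond are hypothetical.

import torch

def flow_matching_loss(velocity_net, target_traj, cond):
    # target_traj: [batch, steps, 2] ground-truth drag trajectory.
    x0 = torch.randn_like(target_traj)              # noise sample
    t = torch.rand(target_traj.shape[0], 1, 1)      # random time in [0, 1]
    x_t = (1 - t) * x0 + t * target_traj            # point on the straight-line path
    v_target = target_traj - x0                     # constant velocity along that path
    v_pred = velocity_net(x_t, t, cond)             # conditioned on screenshot/instruction features
    return ((v_pred - v_target) ** 2).mean()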

🔹 Publication Date: Published on Dec 31, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.24965
• PDF: https://arxiv.org/pdf/2512.24965
• Project Page: https://showlab.github.io/showui-pi
• Github: https://github.com/showlab/showui-pi

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
KnowMe-Bench: Benchmarking Person Understanding for Lifelong Digital Companions

📝 Summary:
KnowMe-Bench is a new benchmark using long autobiographical narratives to evaluate AI's person understanding, moving beyond simple retrieval. It tests factual recall, subjective states, and principle-level reasoning. Current systems struggle with higher-level inferences despite factual improvements.

🔹 Publication Date: Published on Jan 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.04745
• PDF: https://arxiv.org/pdf/2601.04745
• Github: https://github.com/QuantaAlpha/KnowMeBench

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #PersonUnderstanding #NLP #Benchmarking #DigitalCompanions
Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking

📝 Summary:
FactArena is a new automated framework for comprehensively benchmarking LLMs across the entire fact-checking pipeline, including claim extraction and evidence retrieval. It reveals significant gaps between claim verification accuracy and overall fact-checking competence, highlighting the need for stage-wise evaluation.
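
🔹 Code sketch: The stage-wise pipeline named in the summary (claim extraction, evidence retrieval, verification) can be written as a simple loop; the three callables below are hypothetical LLM-backed components, not FactArena's implementation.

def fact_check_pipeline(document, extract_claims, retrieve_evidence, verify):
    results = []
    for claim in extract_claims(document):
        evidence = retrieve_evidence(claim)
        verdict = verify(claim, evidence)   # e.g. "supported" / "refuted" / "not enough info"
        results.append({"claim": claim, "evidence": evidence, "verdict": verdict})
    return results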

🔹 Publication Date: Published on Jan 6

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02669
• PDF: https://arxiv.org/pdf/2601.02669

==================================

For more data science resources:
https://t.me/DataScienceT

#LLM #FactChecking #AI #NLP #Benchmarking
MemoBrain: Executive Memory as an Agentic Brain for Reasoning

📝 Summary:
Long-horizon tasks strain tool-augmented agents due to accumulating context. MemoBrain is an executive memory model that organizes and prunes reasoning steps, maintaining a compact, high-salience backbone within a fixed context. This improves coherent, goal-directed reasoning.
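
🔹 Code sketch: Keeping a "compact, high-salience backbone within a fixed context" amounts to pruning reasoning steps under a token budget. The sketch below shows one simple way to do that (not MemoBrain's algorithm); salience is a hypothetical scorer.

def prune_reasoning_trace(steps, salience, token_budget):
    # Rank steps by salience, keep as many as fit the budget,
    # then restore chronological order so the trace stays coherent.
    ranked = sorted(range(len(steps)), key=lambda i: -salience(steps[i]))
    kept, used = set(), 0
    for i in ranked:
        cost = len(steps[i].split())     # crude token count
        if used + cost <= token_budget:
            kept.add(i)
            used += cost
    return [steps[i] for i in sorted(kept)]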

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08079
• PDF: https://arxiv.org/pdf/2601.08079
• Github: https://github.com/qhjqhj00/MemoBrain

==================================

For more data science resources:
https://t.me/DataScienceT

#AIagents #ExecutiveMemory #Reasoning #LLM #CognitiveAI
EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs

📝 Summary:
LLM self-training improves reasoning but causes overconfidence. EpiCaR solves this by jointly optimizing reasoning performance and calibration through epistemic learning and self-evaluation. It achieves better accuracy and calibration, reduces inference compute by 3x, and generalizes well.
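
🔹 Code sketch: "Jointly optimizing reasoning performance and calibration" can be pictured as adding a calibration penalty to the task loss; the sketch below uses a Brier-score term on self-reported confidence and is an illustration of the idea, not EpiCaR's actual objective.

import torch

def accuracy_plus_calibration_loss(task_loss, confidence, is_correct, beta=1.0):
    # confidence, is_correct: tensors in [0, 1]; the Brier term pushes
    # self-reported confidence toward empirical correctness.
    brier = ((confidence - is_correct) ** 2).mean()
    return task_loss + beta * brier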

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.06786
• PDF: https://arxiv.org/pdf/2601.06786

==================================

For more data science resources:
https://t.me/DataScienceT

#LLMs #AI #MachineLearning #Reasoning #Calibration
End-to-End Test-Time Training for Long Context

📝 Summary:
This paper proposes End-to-End Test-Time Training (TTT-E2E) for long-context language modeling, treating it as a continual learning problem. It uses a standard Transformer that learns at test time, with its initialization improved via meta-learning. TTT-E2E scales well and offers constant inference latency.
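
🔹 Code sketch: Generic test-time training looks like the loop below: adapt a copy of the model on the incoming context's own next-token loss, then answer with the adapted weights. This is a sketch of the general idea assuming a Hugging Face-style causal-LM interface, not the TTT-E2E recipe.

import copy
import torch

def answer_with_test_time_training(model, context_ids, question_ids, steps=4, lr=1e-4):
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        # Self-supervised adaptation: language-modeling loss on the context itself.
        out = adapted(input_ids=context_ids, labels=context_ids)
        opt.zero_grad()
        out.loss.backward()
        opt.step()
    with torch.no_grad():
        return adapted.generate(question_ids)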

🔹 Publication Date: Published on Dec 29, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.23675
• PDF: https://arxiv.org/pdf/2512.23675
• Github: https://github.com/test-time-training/e2e

==================================

For more data science resources:
https://t.me/DataScienceT

#TestTimeTraining #LongContext #LanguageModels #Transformers #ContinualLearning
Parallel Context-of-Experts Decoding for Retrieval Augmented Generation

📝 Summary:
Parallel Context-of-Experts Decoding (PCED) is a training-free framework for multi-document RAG that avoids prefill bottlenecks. It treats each document as an isolated expert, using a retrieval-aware contrastive decoding rule to synchronize their predictions and recover cross-document reasoning.
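
🔹 Code sketch: One way to realize "documents as isolated experts" with contrastive decoding is to average the per-document next-token log-probabilities and subtract a no-context prior; the sketch below illustrates that idea and is not PCED's exact decoding rule.

import torch

def context_of_experts_step(logits_per_doc, logits_no_context, alpha=1.0):
    # logits_per_doc: [num_docs, vocab] next-token logits, one row per document expert.
    # logits_no_context: [vocab] logits from the same model without any document.
    expert_logprobs = torch.log_softmax(logits_per_doc, dim=-1).mean(dim=0)
    prior_logprobs = torch.log_softmax(logits_no_context, dim=-1)
    # Boost tokens the documents support beyond what the context-free prior predicts.
    scores = expert_logprobs - alpha * prior_logprobs
    return int(scores.argmax())   # greedy next-token id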

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08670
• PDF: https://arxiv.org/pdf/2601.08670

==================================

For more data science resources:
https://t.me/DataScienceT

#RAG #LLM #NLP #AI #Decoding