ML Research Hub
32.8K subscribers
4.4K photos
272 videos
23 files
4.76K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
User-Oriented Multi-Turn Dialogue Generation with Tool Use at Scale

📝 Summary:
Large reasoning models enable scalable multi-turn dialogue generation through automated task-oriented simulation and user-oriented behavioral modeling for enhanced human-agent interaction datasets.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08225
• PDF: https://arxiv.org/pdf/2601.08225

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Solar Open Technical Report

📝 Summary:
Solar Open presents a 102B-parameter bilingual Mixture-of-Experts language model that addresses data scarcity in underserved languages through synthetic data generation, progressive curriculum coordination…

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07022
• PDF: https://arxiv.org/pdf/2601.07022

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
ShowUI-π: Flow-based Generative Models as GUI Dexterous Hands

📝 Summary:
ShowUI-π is the first flow-based generative model for GUI agents, unifying discrete clicks and continuous drag actions. It achieves smooth, stable trajectories and significantly outperforms prior agents on ScreenDrag, a new benchmark for GUI drag capabilities.
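
The drag mechanic hinges on integrating a velocity field from a start point into a smooth trajectory. A minimal sketch with a hand-written straight-line field and Euler steps (illustrative only, not ShowUI-π's learned model):

```python
import numpy as np

def euler_flow_sample(x0, target, n_steps=10):
    """Minimal flow-style sampler: integrate a velocity field from a start
    point toward a target with Euler steps, yielding the smooth intermediate
    cursor positions a drag action needs. (The constant straight-line field
    here stands in for a learned network.)"""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        v = np.asarray(target) - np.asarray(x0)  # straight-line flow field
        x = x + dt * v                           # one Euler integration step
        path.append(x.copy())
    return np.stack(path)

path = euler_flow_sample([0.0, 0.0], [100.0, 50.0], n_steps=5)
print(path[-1])  # endpoint reaches the drag target
```

A click is the degenerate case with a single step; drags use the full integrated path.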

🔹 Publication Date: Published on Dec 31, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.24965
• PDF: https://arxiv.org/pdf/2512.24965
• Project Page: https://showlab.github.io/showui-pi
• Github: https://github.com/showlab/showui-pi

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
KnowMe-Bench: Benchmarking Person Understanding for Lifelong Digital Companions

📝 Summary:
KnowMe-Bench is a new benchmark using long autobiographical narratives to evaluate AI's person understanding, moving beyond simple retrieval. It tests factual recall, subjective states, and principle-level reasoning. Current systems struggle with higher-level inferences despite factual improvements.

🔹 Publication Date: Published on Jan 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.04745
• PDF: https://arxiv.org/pdf/2601.04745
• Github: https://github.com/QuantaAlpha/KnowMeBench

==================================

#AI #PersonUnderstanding #NLP #Benchmarking #DigitalCompanions
Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking

📝 Summary:
FactArena is a new automated framework for comprehensively benchmarking LLMs across the entire fact-checking pipeline, including claim extraction and evidence retrieval. It reveals significant gaps between claim verification accuracy and overall fact-checking competence, highlighting the need for...

🔹 Publication Date: Published on Jan 6

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.02669
• PDF: https://arxiv.org/pdf/2601.02669

==================================

#LLM #FactChecking #AI #NLP #Benchmarking
MemoBrain: Executive Memory as an Agentic Brain for Reasoning

📝 Summary:
Long-horizon tasks strain tool-augmented agents due to accumulating context. MemoBrain is an executive memory model that organizes and prunes reasoning steps, maintaining a compact, high-salience backbone within a fixed context. This improves coherent, goal-directed reasoning.
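
The organize-and-prune idea can be sketched as a salience-bounded buffer: keep only the top-k most salient steps so the working context never grows. Everything below is a toy (the salience scores are caller-supplied; the paper's scoring model is not reproduced):

```python
import heapq

class ExecutiveMemory:
    """Toy executive memory: retain only the top-k most salient reasoning
    steps so the working context stays within a fixed budget."""

    def __init__(self, budget=4):
        self.budget = budget
        self.steps = []   # min-heap of (salience, arrival_order, text)
        self._order = 0

    def add(self, text, salience):
        heapq.heappush(self.steps, (salience, self._order, text))
        self._order += 1
        if len(self.steps) > self.budget:
            heapq.heappop(self.steps)  # prune the least salient step

    def backbone(self):
        # Surviving steps, restored to their original reasoning order.
        return [t for _, o, t in sorted(self.steps, key=lambda s: s[1])]

mem = ExecutiveMemory(budget=2)
mem.add("parse the question", 0.9)
mem.add("open irrelevant tab", 0.1)
mem.add("call search tool", 0.8)
print(mem.backbone())  # the low-salience step was pruned
```

Memory use is O(budget) regardless of how many steps the agent takes, which is the point of the fixed-context backbone.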

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08079
• PDF: https://arxiv.org/pdf/2601.08079
• Github: https://github.com/qhjqhj00/MemoBrain

==================================

#AIagents #ExecutiveMemory #Reasoning #LLM #CognitiveAI
EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs

📝 Summary:
LLM self-training improves reasoning but causes overconfidence. EpiCaR solves this by jointly optimizing reasoning performance and calibration through epistemic learning and self-evaluation. It achieves better accuracy and calibration, reduces inference compute by 3X, and generalizes well to new…
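
"Calibration" here is the standard notion that stated confidence should match empirical accuracy. A minimal Expected Calibration Error check (independent of the paper's training recipe) looks like:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |accuracy - mean confidence|, weighted by the fraction
    of samples falling in each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean()
                                     - confidences[mask].mean())
    return ece

# An overconfident predictor: 80% accuracy while claiming 90% confidence.
conf = np.array([0.9, 0.9, 0.9, 0.9, 0.9])
corr = np.array([1, 1, 1, 1, 0])
print(round(expected_calibration_error(conf, corr), 3))
```

Self-trained models typically drift toward high confidence in a few bins, which this metric exposes directly.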

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.06786
• PDF: https://arxiv.org/pdf/2601.06786

==================================

#LLMs #AI #MachineLearning #Reasoning #Calibration
End-to-End Test-Time Training for Long Context

📝 Summary:
This paper proposes End-to-End Test-Time Training (TTT-E2E) for long-context language modeling, treating it as continual learning. It uses a standard Transformer, learning at test time and improving initialization via meta-learning. TTT-E2E scales well and offers constant inference latency…
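
The constant-memory intuition: the model keeps updating its own parameters while streaming through the context, so state does not grow with context length. A toy analogue with bigram counts standing in for gradient updates (TTT-E2E actually trains a Transformer at test time; nothing below is the paper's code):

```python
from collections import defaultdict

class TestTimeBigram:
    """Toy analogue of test-time training: parameters (bigram counts) are
    updated online as the context streams past, so memory stays constant
    regardless of context length."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, tokens):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1  # "weight update" from the context

    def predict(self, prev):
        followers = self.counts[prev]
        return max(followers, key=followers.get) if followers else None

model = TestTimeBigram()
model.observe("the cat sat on the mat the cat ran".split())
print(model.predict("the"))  # most frequent continuation seen in context
```

Prediction cost is independent of how long the observed context was, which mirrors the constant-latency claim.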

🔹 Publication Date: Published on Dec 29, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.23675
• PDF: https://arxiv.org/pdf/2512.23675
• Github: https://github.com/test-time-training/e2e

==================================

#TestTimeTraining #LongContext #LanguageModels #Transformers #ContinualLearning
Parallel Context-of-Experts Decoding for Retrieval Augmented Generation

📝 Summary:
Parallel Context-of-Experts Decoding (PCED) is a training-free framework for multi-document RAG that avoids prefill bottlenecks. It treats documents as isolated experts, using a retrieval-aware contrastive decoding rule to synchronize predictions and recover cross-document reasoning.
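
The expert-combination step can be sketched as: score the next token under each document in isolation, average, then subtract a context-free prior to sharpen retrieval-grounded tokens. The weighting below is illustrative, not the paper's exact rule:

```python
import numpy as np

def context_of_experts_logits(expert_logits, prior_logits, alpha=1.0):
    """Toy contrastive combination for multi-document decoding: average the
    per-document ("expert") logits and contrast against a no-context run."""
    mean_expert = np.mean(expert_logits, axis=0)  # average over documents
    return mean_expert - alpha * prior_logits     # subtract context-free prior

vocab = ["paris", "london", "cat"]
doc_a = np.array([3.0, 0.0, 0.0])   # logits conditioning on document A only
doc_b = np.array([2.0, 1.0, 0.0])   # logits conditioning on document B only
prior = np.array([0.0, 0.0, 2.0])   # context-free model favors "cat"
combined = context_of_experts_logits(np.stack([doc_a, doc_b]), prior)
print(vocab[int(np.argmax(combined))])
```

Because each expert sees one document, the per-document prefills can run in parallel, which is where the prefill bottleneck disappears.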

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08670
• PDF: https://arxiv.org/pdf/2601.08670

==================================

#RAG #LLM #NLP #AI #Decoding
Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models

📝 Summary:
The Engram module introduces conditional memory as a new sparsity axis for Transformers, improving knowledge lookup and reasoning. It outperforms MoE, boosting performance across domains by offloading static knowledge and enhancing efficiency.
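
"Conditional memory as a sparsity axis" means extra capacity is accessed by deterministic lookup rather than a dense matmul. A sketch with an n-gram-keyed embedding table (hashing scheme and table size are illustrative; DeepSeek's Engram design is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

class ConditionalMemory:
    """Sketch of conditional-memory lookup: an n-gram key deterministically
    selects one row of a large table, so the added capacity costs a single
    indexed read per token instead of a dense matrix multiply."""

    def __init__(self, table_size=1024, dim=8):
        self.table = rng.normal(size=(table_size, dim))
        self.table_size = table_size

    def lookup(self, ngram):
        idx = hash(ngram) % self.table_size  # deterministic within a process
        return self.table[idx]

mem = ConditionalMemory()
v1 = mem.lookup(("large", "language"))
v2 = mem.lookup(("large", "language"))
assert np.array_equal(v1, v2)  # same key always hits the same memory row
print(v1.shape)
```

The table can be scaled up almost freely because only one row is touched per query, which is the sense in which static knowledge is "offloaded" from the dense layers.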

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07372
• PDF: https://arxiv.org/pdf/2601.07372
• Github: https://github.com/deepseek-ai/Engram

==================================

#LLM #AI #MachineLearning #Transformers #Sparsity
FunAudioLLM: Voice Understanding and Generation Foundation Models for Natural Interaction Between Humans and LLMs

📝 Summary:
FunAudioLLM enhances natural voice interactions with LLMs by integrating SenseVoice for multilingual speech recognition and CosyVoice for natural, multi-style speech generation. This enables applications like speech-to-speech translation and emotional voice chat.

🔹 Publication Date: Published on Jul 4, 2024

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2407.04051
• PDF: https://arxiv.org/pdf/2407.04051
• Github: https://github.com/FunAudioLLM

==================================

#LLM #VoiceAI #SpeechRecognition #SpeechSynthesis #MultimodalAI
3AM: Segment Anything with Geometric Consistency in Videos

📝 Summary:
3AM enhances video object segmentation by integrating 3D-aware features from MUSt3R into SAM2. This improves viewpoint consistency and geometric recognition using only RGB input at inference, significantly outperforming prior methods on challenging datasets.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08831
• PDF: https://arxiv.org/pdf/2601.08831
• Project Page: https://jayisaking.github.io/3AM-Page/

==================================

#VideoSegmentation #ComputerVision #DeepLearning #GeometricAI #AI
ViDoRe V3: A Comprehensive Evaluation of Retrieval Augmented Generation in Complex Real-World Scenarios

📝 Summary:
ViDoRe v3 is a new multimodal RAG benchmark for complex queries over visually rich, multi-language documents. It shows visual retrievers and late-interaction models improve performance, though models struggle with non-textual elements and visual grounding.
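
The late-interaction retrievers the benchmark evaluates use ColBERT-style MaxSim scoring: each query token takes its best-matching document token and the maxima are summed. A toy version with hand-written embeddings:

```python
import numpy as np

def maxsim_score(query_tokens, doc_tokens):
    """ColBERT-style late interaction: for each query token embedding, take
    the max similarity over all document token embeddings, then sum."""
    sims = query_tokens @ doc_tokens.T   # (n_query, n_doc) similarity matrix
    return sims.max(axis=1).sum()        # best document token per query token

q = np.array([[1.0, 0.0], [0.0, 1.0]])          # two query token embeddings
doc_good = np.array([[0.9, 0.1], [0.1, 0.9]])   # covers both query tokens
doc_bad = np.array([[0.9, 0.1], [0.8, 0.2]])    # covers only the first
print(maxsim_score(q, doc_good) > maxsim_score(q, doc_bad))
```

Keeping token-level embeddings (rather than one pooled vector per page) is what lets these retrievers match fine-grained visual regions, at the cost of larger indexes.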

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08620
• PDF: https://arxiv.org/pdf/2601.08620

Datasets citing this paper:
https://huggingface.co/datasets/vidore/vidore_v3_physics
https://huggingface.co/datasets/vidore/vidore_v3_computer_science
https://huggingface.co/datasets/vidore/vidore_v3_finance_en

==================================

#RAG #MultimodalAI #AIResearch #NLP #ComputerVision
VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding

📝 Summary:
VideoLoom is a unified video large language model that achieves state-of-the-art performance in spatial-temporal video understanding through a specialized dataset and benchmark.

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07290
• PDF: https://arxiv.org/pdf/2601.07290
• Github: https://github.com/JPShi12/VideoLoom

🔹 Models citing this paper:
https://huggingface.co/JPShi/VideoLoom-4B
https://huggingface.co/JPShi/VideoLoom-8B

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
UM-Text: A Unified Multimodal Model for Image Understanding

📝 Summary:
A unified multimodal model for visual text editing that understands natural language instructions and maintains stylistic consistency with reference images through visual language modeling and context…

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08321
• PDF: https://arxiv.org/pdf/2601.08321

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
GeoMotionGPT: Geometry-Aligned Motion Understanding with Large Language Models

📝 Summary:
GeoMotionGPT introduces a framework aligning motion token geometry with language model embeddings using orthogonal constraints and sparse projection. This unified geometric basis enhances LLM motion reasoning, achieving a 20% performance improvement on HumanML3D.
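
An orthogonal constraint of this kind is usually enforced as a soft penalty on the projection matrix. A minimal version (the loss weighting and where it attaches in the model are not from the paper):

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality constraint: penalize ||W^T W - I||_F^2 so the
    columns of the projection stay near-orthonormal, giving motion tokens
    a well-conditioned shared basis with the language embeddings."""
    d = W.shape[1]
    gram = W.T @ W
    return float(np.sum((gram - np.eye(d)) ** 2))

print(orthogonality_penalty(np.eye(3)))       # 0.0 for an orthonormal basis
print(orthogonality_penalty(np.ones((3, 3))) > 0)  # collapsed columns penalized
```

Added to the training loss with a small coefficient, this keeps the motion-token directions from collapsing onto each other while leaving them free to rotate into alignment with the LLM's embedding space.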

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07632
• PDF: https://arxiv.org/pdf/2601.07632
• Github: https://github.com/JYe16/GeoMotionGPT

🔹 Models citing this paper:
https://huggingface.co/zy22b/GeoMotionGPT

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
The Agent's First Day: Benchmarking Learning, Exploration, and Scheduling in the Workplace Scenarios

📝 Summary:
EvoEnv is a new dynamic evaluation environment for MLLMs. It assesses agent robustness in real-world tasks, focusing on context-aware scheduling, active exploration, and continuous learning. Current MLLMs show significant deficiencies in these dynamic scenarios.

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08173
• PDF: https://arxiv.org/pdf/2601.08173
• Github: https://github.com/KnowledgeXLab/EvoEnv

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Fast-ThinkAct: Efficient Vision-Language-Action Reasoning via Verbalizable Latent Planning

📝 Summary:
Fast-ThinkAct is an efficient vision-language-action framework that reduces inference latency by 89.3% through compact latent reasoning while maintaining long-horizon planning and few-shot adaptation…

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09708
• PDF: https://arxiv.org/pdf/2601.09708
• Project Page: https://jasper0314-huang.github.io/fast-thinkact/

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
A^3-Bench: Benchmarking Memory-Driven Scientific Reasoning via Anchor and Attractor Activation

📝 Summary:
Scientific reasoning relies not only on logical inference but also on activating prior knowledge and experiential structures. Memory can efficiently reuse knowledge and enhance reasoning consistency…

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09274
• PDF: https://arxiv.org/pdf/2601.09274

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
MAXS: Meta-Adaptive Exploration with LLM Agents

📝 Summary:
MAXS is a meta-adaptive reasoning framework for LLM agents that improves multi-tool reasoning through lookahead strategies and trajectory convergence mechanisms, balancing global effectiveness and…

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09259
• PDF: https://arxiv.org/pdf/2601.09259

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research