ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Imagine-then-Plan: Agent Learning from Adaptive Lookahead with World Models

📝 Summary:
Imagine-then-Plan framework enables agent learning through adaptive lookahead imagination, combining imagined trajectories with current observations to guide policy learning in complex task scenarios....
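The core idea, rolling a learned world model forward for an adaptive number of steps and feeding a summary of the imagined future to the policy alongside the current observation, can be sketched in a few lines. Everything below (the toy linear dynamics, the uncertainty-based horizon rule, the mean-pooled summary) is illustrative, not the paper's implementation:

```python
import numpy as np

def imagine_rollout(world_model, state, horizon):
    """Roll a one-step world model forward to get an imagined trajectory."""
    traj, s = [], state
    for _ in range(horizon):
        s = world_model(s)        # one-step prediction in state space
        traj.append(s)
    return np.stack(traj)

def plan_input(obs, world_model, uncertainty, max_horizon=8):
    """Adaptive lookahead: imagine further when the model is confident."""
    horizon = max(1, int(max_horizon * (1.0 - uncertainty)))
    imagined = imagine_rollout(world_model, obs, horizon)
    # Combine the current observation with a summary of imagined futures.
    return np.concatenate([obs, imagined.mean(axis=0)])

wm = lambda s: 0.9 * s + 0.1      # toy linear dynamics as a stand-in model
features = plan_input(np.zeros(4), wm, uncertainty=0.5)
print(features.shape)  # (8,): 4 observation dims + 4 imagined-summary dims
```

The policy would then consume `features` instead of the raw observation; the uncertainty signal deciding the horizon is whatever confidence estimate the world model exposes.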

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08955
• PDF: https://arxiv.org/pdf/2601.08955

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #DataScience #MachineLearning #HuggingFace #Research
Focal Guidance: Unlocking Controllability from Semantic-Weak Layers in Video Diffusion Models

📝 Summary:
Diffusion Transformer-based image-to-video models suffer from condition isolation where visual attention becomes detached from text guidance; focal guidance addresses this through fine-grained semanti...

🔹 Publication Date: Published on Jan 12

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.07287
• PDF: https://arxiv.org/pdf/2601.07287

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning

📝 Summary:
DASD-4B-Thinking is a new lightweight model achieving state-of-the-art reasoning by enhancing sequence-level distillation. It addresses limitations in current teacher-student knowledge transfer by better capturing the teacher's full output distribution, using significantly fewer training samples.
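Matching the teacher's full output distribution, rather than just its sampled tokens, is the kind of signal sequence-level distillation exploits. A minimal numpy sketch of a token-averaged KL objective (not the DASD loss; shapes and the exact form are assumptions):

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sequence_kl(teacher_logits, student_logits):
    """Token-averaged KL(teacher || student) over one sequence.

    Penalizes the student everywhere the teacher puts probability mass,
    not only at the teacher's argmax tokens.
    """
    p = softmax(teacher_logits)          # (seq_len, vocab)
    q = softmax(student_logits)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return kl.mean()

rng = np.random.default_rng(0)
t = rng.normal(size=(5, 10))             # teacher logits, toy sizes
s = rng.normal(size=(5, 10))             # student logits
print(sequence_kl(t, t))  # 0.0: identical distributions
print(sequence_kl(t, s) > 0)  # True
```

In practice the teacher distribution would come from stored logprobs (as the released `-Logprob` dataset suggests) rather than being recomputed on the fly.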

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09088
• PDF: https://arxiv.org/pdf/2601.09088
• Project Page: https://github.com/D2I-ai/dasd-thinking
• Github: https://github.com/D2I-ai/dasd-thinking

🔹 Models citing this paper:
https://huggingface.co/Alibaba-Apsara/DASD-4B-Thinking
https://huggingface.co/Alibaba-Apsara/DASD-30B-A3B-Thinking-Preview

Datasets citing this paper:
https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b
https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b-Logprob

==================================

#AI #MachineLearning #LLM #KnowledgeDistillation #ChainOfThought
Geometric Stability: The Missing Axis of Representations

📝 Summary:
This paper introduces geometric stability, a new metric quantifying how reliably representational geometry holds under perturbation. It is distinct from similarity, offering complementary insights for safety monitoring, controllability, and model selection across diverse systems.
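One simple way to make "geometry holding under perturbation" concrete is to correlate the pairwise-distance structure of a representation before and after perturbing it. This is a stand-in for intuition, not the paper's actual metric:

```python
import numpy as np

def pairwise_distances(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def geometric_stability(reps, perturbed_reps):
    """Illustrative stability score: correlation of the upper-triangular
    pairwise-distance entries before and after a perturbation."""
    d0 = pairwise_distances(reps)
    d1 = pairwise_distances(perturbed_reps)
    iu = np.triu_indices_from(d0, k=1)
    return float(np.corrcoef(d0[iu], d1[iu])[0, 1])

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 8))             # 20 items, 8-dim representations
noisy = X + 0.01 * rng.normal(size=X.shape)
shuffled = rng.permutation(X)            # row pairing destroyed
print(round(geometric_stability(X, X), 6))                      # 1.0
print(geometric_stability(X, noisy) > geometric_stability(X, shuffled))
```

Note this is distinct from representational similarity: two models can be highly similar on clean data yet diverge sharply under perturbation, which is the axis the paper argues is missing.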

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09173
• PDF: https://arxiv.org/pdf/2601.09173
• Github: https://github.com/prashantcraju/geometric-stability

🔹 Models citing this paper:
https://huggingface.co/pcr2120/shesha-geometry

==================================

#GeometricStability #RepresentationalGeometry #MachineLearning #AIResearch #ModelEvaluation
Omni-R1: Towards the Unified Generative Paradigm for Multimodal Reasoning

📝 Summary:
Omni-R1 proposes unified generative multimodal reasoning, using intermediate image generation to enable diverse skills across tasks. Omni-R1-Zero, trained with no multimodal data, matches or exceeds Omni-R1's performance, showing a promising path.

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09536
• PDF: https://arxiv.org/pdf/2601.09536

🔹 Models citing this paper:
https://huggingface.co/ModalityDance/Omni-R1
https://huggingface.co/ModalityDance/Omni-R1-Zero

Datasets citing this paper:
https://huggingface.co/datasets/ModalityDance/Omni-Bench

==================================

#MultimodalAI #GenerativeAI #DeepLearning #ComputerVision #AIResearch
LoongFlow: Directed Evolutionary Search via a Cognitive Plan-Execute-Summarize Paradigm

📝 Summary:
LoongFlow is a self-evolving agent that integrates LLMs into a cognitive Plan-Execute-Summarize (PES) paradigm for directed evolutionary search. It prevents premature convergence by balancing exploration and exploitation with a hybrid memory system. LoongFlow achieves superior solutions 60% more ef...

🔹 Publication Date: Published on Dec 30, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.24077
• PDF: https://arxiv.org/pdf/2512.24077
• Project Page: https://github.com/baidu-baige/LoongFlow
• Github: https://github.com/baidu-baige/LoongFlow

==================================

#EvolutionarySearch #LLMs #CognitiveAI #AIAgents #Optimization
Cluster Workload Allocation: Semantic Soft Affinity Using Natural Language Processing

📝 Summary:
This paper introduces an LLM-based approach to interpret natural language hints for cluster workload allocation. It achieved over 95% accuracy and improved placement compared to traditional methods, simplifying workload orchestration.
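The pipeline shape (natural-language hint in, soft-affinity weights out) can be sketched end to end. The paper uses an LLM for the interpretation step; the keyword matcher below is a hypothetical stand-in so the example runs, and the node labels are made up:

```python
def hint_to_affinity(hint, nodes):
    """Map a natural-language placement hint to soft-affinity weights
    over cluster nodes. An LLM would do this interpretation in the
    paper; a toy keyword matcher stands in here."""
    hint = hint.lower()
    scores = {node: sum(1 for label in labels if label in hint)
              for node, labels in nodes.items()}
    total = sum(scores.values()) or 1
    return {n: s / total for n, s in scores.items()}   # normalized weights

nodes = {
    "node-a": ["gpu", "ssd"],
    "node-b": ["cpu", "hdd"],
}
w = hint_to_affinity("prefer a gpu node with ssd storage", nodes)
print(max(w, key=w.get))  # node-a
```

A scheduler would then treat these weights as soft affinity scores, preferring but not requiring the highest-weighted node.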

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09282
• PDF: https://arxiv.org/pdf/2601.09282

==================================

#ClusterAllocation #NLP #LLMs #WorkloadOrchestration #AIResearch
SampoNLP: A Self-Referential Toolkit for Morphological Analysis of Subword Tokenizers

📝 Summary:
SampoNLP is a new corpus-free toolkit for creating morphological lexicons for Uralic languages. It was used to systematically evaluate BPE tokenizers, identifying optimal vocabulary sizes and demonstrating BPE's limitations for these highly agglutinative languages.
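Evaluating how well BPE splits track morphology usually comes down to comparing segmentation boundaries against a gold morphological analysis. A sketch of one such boundary-F1 measure (illustrative; the toolkit's actual metrics may differ):

```python
def boundary_positions(segments):
    """Character offsets of the internal cut points of a segmentation."""
    pos, cut = set(), 0
    for seg in segments[:-1]:
        cut += len(seg)
        pos.add(cut)
    return pos

def boundary_f1(bpe_segments, morpheme_segments):
    """F1 between a tokenizer's cut points and gold morpheme boundaries."""
    b = boundary_positions(bpe_segments)
    m = boundary_positions(morpheme_segments)
    if not b and not m:
        return 1.0                      # both leave the word whole
    tp = len(b & m)
    prec = tp / len(b) if b else 0.0
    rec = tp / len(m) if m else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Finnish 'taloissa' ("in the houses") = talo + i + ssa
gold = ["talo", "i", "ssa"]
bpe = ["talo", "issa"]          # hypothetical BPE output
print(boundary_f1(bpe, gold))   # ~0.667: one of two gold cuts recovered
```

For highly agglutinative Uralic words with long suffix chains, scores like this degrade as vocabulary size pushes BPE toward longer, morphologically opaque merges, which is the limitation the paper documents.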

🔹 Publication Date: Published on Jan 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.04469
• PDF: https://arxiv.org/pdf/2601.04469
• Github: https://github.com/AragonerUA/SampoNLP

==================================

#NLP #ComputationalLinguistics #Morphology #Tokenization #UralicLanguages
DPWriter: Reinforcement Learning with Diverse Planning Branching for Creative Writing

📝 Summary:
DPWriter is an RL framework that improves output diversity in LLM creative writing. It introduces Diverse Planning Branching and group-aware diversity rewards to encourage distinct generation trajectories. This approach significantly boosts diversity without compromising quality.
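A group-aware diversity reward can be sketched as rewarding each sample in a rollout group for being dissimilar to its group mates. The Jaccard-based similarity and the reward form below are illustrative, not the paper's, and the quality term is omitted:

```python
def jaccard(a, b):
    """Token-set similarity between two strings."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 1.0

def diversity_rewards(group):
    """Each sample's reward is 1 minus its mean similarity to the rest
    of the group, so distinct generation trajectories score higher."""
    rewards = []
    for i, g in enumerate(group):
        sims = [jaccard(g, h) for j, h in enumerate(group) if j != i]
        rewards.append(1.0 - sum(sims) / len(sims))
    return rewards

group = [
    "the dragon guarded a silent library",
    "the dragon guarded a silent library",   # duplicate: penalized
    "rain wrote its name on the tin roof",
]
r = diversity_rewards(group)
print(r[2] > r[0])  # True: the distinct sample earns the higher reward
```

In an RL setup this term would be combined with a quality reward so that diversity is encouraged without sacrificing coherence, which is the balance the paper targets.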

🔹 Publication Date: Published on Jan 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09609
• PDF: https://arxiv.org/pdf/2601.09609

==================================

#ReinforcementLearning #LLM #CreativeWriting #AI #NLP
No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning

📝 Summary:
ECHO is an RL framework addressing stale critics in LLM agent training. It jointly optimizes policy and critic through a co-evolutionary loop and cascaded rollouts. This ensures synchronized feedback, leading to more stable training and higher task success in open-world environments.

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.06794
• PDF: https://arxiv.org/pdf/2601.06794

==================================

#ReinforcementLearning #LLMAgents #MachineLearning #AIResearch #OpenWorldAI
Flow Equivariant World Models: Memory for Partially Observed Dynamic Environments

📝 Summary:
Flow Equivariant World Models unify self-motion and external object motion as Lie group flows, enabling stable, symmetry-guided representations. They outperform other models in partially observed environments, particularly for long-term prediction and out-of-view dynamics, leading to data-efficie...

🔹 Publication Date: Published on Jan 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.01075
• PDF: https://arxiv.org/pdf/2601.01075
• Project Page: https://flowequivariantworldmodels.github.io/
• Github: https://github.com/hlillemark/flowm

==================================

#WorldModels #Equivariance #MachineLearning #AI #DeepLearning
sui-1: Grounded and Verifiable Long-Form Summarization

📝 Summary:
sui-1 is a 24B model producing verifiable abstractive summaries with inline citations. It uses synthetic data training to significantly outperform larger models, showing task-specific training beats scale for grounded summarization.
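Verifiability with inline citations can be approximated by checking that each cited source sentence exists and lexically supports the sentence citing it. A toy grounding check (the `[n]` citation syntax and the overlap rule are assumptions, not the paper's verifier):

```python
import re

def verify_citations(summary, sources):
    """For each inline citation [n], report whether source sentence n
    exists and shares at least two content words with the citing
    sentence. A crude lexical proxy for grounding."""
    report = []
    for sent in re.split(r"(?<=[.!?])\s+", summary.strip()):
        for n in map(int, re.findall(r"\[(\d+)\]", sent)):
            if n >= len(sources):
                report.append((n, False))      # dangling citation
                continue
            claim = set(re.findall(r"\w+", sent.lower())) - {"the", "a", "of"}
            src = set(re.findall(r"\w+", sources[n].lower()))
            report.append((n, len(claim & src) >= 2))
    return report

sources = [
    "Revenue grew 12 percent in the third quarter.",
    "The company opened two new offices in Lisbon.",
]
summary = "Revenue grew 12 percent [0]. Offices opened in Lisbon [1]."
print(verify_citations(summary, sources))  # [(0, True), (1, True)]
```

A real verifier would use entailment rather than word overlap, but the interface is the point: every abstractive claim carries a pointer that can be mechanically checked against the source.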

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.08472
• PDF: https://arxiv.org/pdf/2601.08472

🔹 Models citing this paper:
https://huggingface.co/ellamind/sui-1-24b
https://huggingface.co/ellamind/sui-1-24b-fp8

Spaces citing this paper:
https://huggingface.co/spaces/ellamind/sui-demo

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
OpenDecoder: Open Large Language Model Decoding to Incorporate Document Quality in RAG

📝 Summary:
OpenDecoder enhances retrieval-augmented generation by explicitly evaluating retrieved information quality through relevance, ranking, and query performance prediction scores, improving robustness to ...
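Fusing relevance, ranking, and query-performance-prediction signals into a per-document quality weight might look like the following; the fusion weights and squashing function are made up for illustration, not taken from the paper:

```python
import math

def doc_quality(relevance, rank, qpp):
    """Fuse three retrieval-quality signals into one score in (0, 1).
    Hypothetical weights; the paper feeds such scores into decoding."""
    rank_score = 1.0 / (1 + rank)            # earlier rank scores higher
    raw = 0.5 * relevance + 0.3 * rank_score + 0.2 * qpp
    return 1.0 / (1.0 + math.exp(-4 * (raw - 0.5)))  # squash around 0.5

docs = [
    {"text": "on-topic passage", "relevance": 0.9, "qpp": 0.8},
    {"text": "marginal passage", "relevance": 0.2, "qpp": 0.3},
]
weights = [doc_quality(d["relevance"], i, d["qpp"])
           for i, d in enumerate(docs)]
print(weights[0] > weights[1])  # True: the on-topic doc gets more weight
```

The decoder can then condition more heavily on high-weight documents, which is what makes generation robust when some retrieved passages are noisy.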

🔹 Publication Date: Published on Jan 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.09028
• PDF: https://arxiv.org/pdf/2601.09028

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
SCALER: Synthetic Scalable Adaptive Learning Environment for Reasoning

📝 Summary:
SCALER is an RL framework for language models that sustains effective training signals in reasoning tasks. It uses adaptive environment design and scalable synthesis of diverse problems to prevent reward sparsity and overfitting, enabling sustained performance improvements.
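Adaptive environment design against reward sparsity can be reduced to a small controller that keeps the learner's success rate inside a target band. A sketch under assumed thresholds (not the paper's mechanism):

```python
def adapt_difficulty(difficulty, success_rate, target=(0.3, 0.7), step=1):
    """Keep problems in a band where rewards are neither trivially dense
    (overfitting to easy problems) nor hopelessly sparse (no signal)."""
    lo, hi = target
    if success_rate > hi:
        return difficulty + step              # too easy: harder problems
    if success_rate < lo:
        return max(1, difficulty - step)      # too hard: back off
    return difficulty                         # in band: hold steady

d = 5
for rate in [0.9, 0.9, 0.1, 0.5]:
    d = adapt_difficulty(d, rate)
print(d)  # 6 after easy, easy, too-hard, on-target feedback
```

The synthesis side of SCALER then matters because the controller needs an effectively unlimited supply of problems at whatever difficulty it asks for.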

🔹 Publication Date: Published on Jan 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.04809
• PDF: https://arxiv.org/pdf/2601.04809

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5

📝 Summary:
This report evaluated 7 frontier AI models for safety across language, vision-language, and image generation. It found varied safety performance, with GPT-5.2 consistently strong. All models showed significant vulnerability to adversarial attacks, highlighting the multidimensional nature of AI sa...

🔹 Publication Date: Published on Jan 15

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.10527
• PDF: https://arxiv.org/pdf/2601.10527

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Think-Then-Generate: Reasoning-Aware Text-to-Image Diffusion with LLM Encoders

📝 Summary:
Text-to-image diffusion models enhanced with language model reasoning capabilities achieve improved factual consistency and semantic alignment through a think-then-generate paradigm with dual-gradient...

🔹 Publication Date: Published on Jan 15

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.10332
• PDF: https://arxiv.org/pdf/2601.10332
• Project Page: https://zhijie-group.github.io/Think-Then-Generate/
• Github: https://github.com/zhijie-group/Think-Then-Generate

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

📝 Summary:
Molmo2 is a new open-source video-language model family that achieves state-of-the-art performance through novel datasets and training methods, particularly excelling in video grounding tasks without ...

🔹 Publication Date: Published on Jan 15

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.10611
• PDF: https://arxiv.org/pdf/2601.10611

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
Inference-time Physics Alignment of Video Generative Models with Latent World Models

📝 Summary:
Latent world models enhance video generation physics plausibility through inference-time alignment and trajectory steering, achieving superior performance in challenging benchmarks.

🔹 Publication Date: Published on Jan 15

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.10553
• PDF: https://arxiv.org/pdf/2601.10553

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research
DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset

📝 Summary:
A large-scale Chinese image-text dataset called DanQing is introduced to advance vision-language pretraining, demonstrating superior performance in various downstream tasks through continual pretraini...

🔹 Publication Date: Published on Jan 15

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.10305
• PDF: https://arxiv.org/pdf/2601.10305
• Project Page: https://deepglint.github.io/DanQing/
• Github: https://github.com/deepglint/DanQing

==================================

#AI #DataScience #MachineLearning #HuggingFace #Research