ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Can World Simulators Reason? Gen-ViRe: A Generative Visual Reasoning Benchmark

📝 Summary:
Current video model benchmarks fail to assess Chain-of-Frames (CoF) reasoning, which is crucial for world simulators. Gen-ViRe is a new benchmark that decomposes CoF reasoning into cognitive subtasks, offering the first quantitative assessment. It reveals poor reasoning depth despite impressive visual quality.
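
🔹 Illustrative Sketch (not from the paper): a minimal example of how a CoF-style benchmark could aggregate per-subtask scores into an overall report; the subtask names, scorer interface, and data layout are assumptions.

```python
# Hypothetical sketch of per-subtask scoring for a CoF-style benchmark.
from statistics import mean

SUBTASKS = ["perception", "prediction", "planning"]  # assumed decomposition

def evaluate(samples, score_fn):
    """samples: dicts with 'subtask', 'generated_frames', 'reference' (assumed layout)."""
    per_subtask = {t: [] for t in SUBTASKS}
    for s in samples:
        per_subtask[s["subtask"]].append(score_fn(s["generated_frames"], s["reference"]))
    report = {t: mean(v) if v else float("nan") for t, v in per_subtask.items()}
    report["overall"] = mean(x for v in per_subtask.values() for x in v)
    return report
```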

🔹 Publication Date: Published on Nov 17

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.13853
• PDF: https://arxiv.org/pdf/2511.13853

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #WorldSimulators #VisualReasoning #GenerativeAI #Benchmarks
Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens

📝 Summary:
Chain-of-Visual-Thought (COVT) enables VLMs to improve dense visual perception by reasoning through continuous visual tokens. These tokens capture rich perceptual cues such as 2D appearance and 3D geometry from lightweight vision experts. COVT consistently boosts VLM performance across diverse benchmarks.
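
🔹 Illustrative Sketch (not from the paper): one way continuous visual tokens from lightweight vision experts could be projected and interleaved with text embeddings; the expert set, dimensions, and fusion point are assumptions, not COVT's exact design.

```python
# Hypothetical sketch: project pooled features from lightweight vision experts into
# continuous "visual thought" tokens to interleave with text embeddings.
import torch
import torch.nn as nn

class VisualTokenProjector(nn.Module):
    def __init__(self, expert_dims, d_model=4096, tokens_per_expert=4):
        super().__init__()
        self.projs = nn.ModuleList(
            nn.Linear(d, d_model * tokens_per_expert) for d in expert_dims
        )
        self.tokens_per_expert = tokens_per_expert
        self.d_model = d_model

    def forward(self, expert_feats):  # list of (B, d_i) pooled expert features
        toks = [
            p(f).view(f.size(0), self.tokens_per_expert, self.d_model)
            for p, f in zip(self.projs, expert_feats)
        ]
        return torch.cat(toks, dim=1)  # (B, num_experts * tokens_per_expert, d_model)

# Interleaving (assumed): [text prefix] + [continuous visual tokens] + [text continuation]
# seq = torch.cat([text_emb[:, :k], projector(feats), text_emb[:, k:]], dim=1)
```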

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19418
• PDF: https://arxiv.org/pdf/2511.19418
• Project Page: https://wakalsprojectpage.github.io/comt-website/

==================================

For more data science resources:
https://t.me/DataScienceT

#VLMs #ComputerVision #AI #MachineLearning #VisualReasoning
Monet: Reasoning in Latent Visual Space Beyond Images and Language

📝 Summary:
Monet is a new framework enabling MLLMs to reason directly in latent visual space, using continuous embeddings as intermediate visual thoughts. It addresses training challenges with a three-stage distillation pipeline and introduces VLPO, outperforming baselines on visual reasoning tasks.
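
🔹 Illustrative Sketch (not from the paper): a rough picture of reasoning with continuous embeddings as intermediate visual thoughts, by feeding a hidden state back as the next input embedding before discrete decoding resumes; the loop and interfaces are assumptions, not Monet's actual pipeline.

```python
# Hypothetical sketch: latent "visual thoughts" as continuous embeddings fed back
# into the model before discrete decoding resumes (HF-style interfaces assumed).
import torch

def roll_out_with_latent_thoughts(model, lm_head, input_embs, n_latent=8):
    embs = input_embs                                  # (B, T, D) prompt embeddings
    for _ in range(n_latent):                          # latent visual-thought steps
        h = model(inputs_embeds=embs).last_hidden_state[:, -1:]   # (B, 1, D)
        embs = torch.cat([embs, h], dim=1)             # append continuous thought
    logits = lm_head(model(inputs_embeds=embs).last_hidden_state[:, -1])
    return embs, logits.argmax(dim=-1)                 # resume discrete decoding
```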

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21395
• PDF: https://arxiv.org/pdf/2511.21395
• Github: https://github.com/NOVAglow646/Monet

==================================

For more data science resources:
https://t.me/DataScienceT

#MLLM #VisualReasoning #LatentSpace #AI #DeepLearning
Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization

📝 Summary:
Concise Chain-of-Thought steps, specifically those with minimal visual grounding, are most effective for achieving generalizable visual reasoning in vision-language models. Longer or more visual CoT primarily accelerates training but does not improve final performance or generalization across tasks.

🔹 Publication Date: Published on Nov 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22586
• PDF: https://arxiv.org/pdf/2511.22586

==================================

For more data science resources:
https://t.me/DataScienceT

#ChainOfThought #VisionLanguageModels #VisualReasoning #AIGeneralization #DeepLearning
CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization

📝 Summary:
CodeV improves faithful visual reasoning by training an agent with Tool-Aware Policy Optimization (TAPO). TAPO applies dense rewards directly to visual tool inputs and outputs, encouraging evidence-consistent tool use. This approach significantly boosts faithful tool use and achieves competitive accuracy.
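
🔹 Illustrative Sketch (not from the paper): a TAPO-style composite reward that scores the inputs and outputs of visual tool calls rather than only the final answer; the weights, IoU check, and trace format are assumptions.

```python
# Hypothetical sketch: dense rewards on visual tool inputs/outputs plus answer reward.
def tool_aware_reward(trace, gold_answer, gold_evidence_box, iou_fn,
                      w_ans=1.0, w_in=0.5, w_out=0.5):
    r_ans = float(trace["answer"] == gold_answer)
    r_in, r_out = 0.0, 0.0
    for call in trace["tool_calls"]:
        # dense reward on tool *inputs*: did the crop target the evidence region?
        r_in += iou_fn(call["crop_box"], gold_evidence_box)
        # dense reward on tool *outputs*: was the returned crop actually used?
        r_out += float(call["output_referenced_in_reasoning"])
    n = max(len(trace["tool_calls"]), 1)
    return w_ans * r_ans + w_in * r_in / n + w_out * r_out / n
```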

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19661
• PDF: https://arxiv.org/pdf/2511.19661

🔹 Models citing this paper:
https://huggingface.co/RenlyH/CodeV-RL
https://huggingface.co/RenlyH/CodeV-SFT

🔹 Datasets citing this paper:
https://huggingface.co/datasets/RenlyH/CodeV-RL-Data

==================================

For more data science resources:
https://t.me/DataScienceT

#VisualReasoning #ReinforcementLearning #ComputerVision #AI #ToolLearning
ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning

📝 Summary:
ARM-Thinker is an agentic reward model that uses external tools, such as image cropping and document retrieval, to verify judgments in multimodal reasoning tasks. This significantly improves accuracy, interpretability, and visual grounding compared to existing reward models, achieving substantial performance gains.
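
🔹 Illustrative Sketch (not from the paper): an agentic judging loop in which the reward model may crop the image or retrieve documents before committing to a score; the tool registry, action format, and scoring interface are assumptions.

```python
# Hypothetical sketch: a reward model that gathers tool evidence before scoring.
def agentic_judge(policy_response, image, question, llm_step, tools, max_steps=4):
    context = [f"Question: {question}", f"Candidate answer: {policy_response}"]
    for _ in range(max_steps):
        action = llm_step(context, image)        # assumed to return a dict: {'type', ...}
        if action["type"] == "crop":
            context.append(f"[crop result] {tools['crop'](image, action['box'])}")
        elif action["type"] == "retrieve":
            context.append(f"[retrieved] {tools['retrieve'](action['query'])}")
        else:                                    # 'score': final verified judgment
            return action["score"], action["rationale"]
    return llm_step(context, image, force_score=True)["score"], "max steps reached"
```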

🔹 Publication Date: Published on Dec 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.05111
• PDF: https://arxiv.org/pdf/2512.05111
• Project Page: https://github.com/InternLM/ARM-Thinker
• Github: https://github.com/open-compass/VLMEvalKit/pull/1334

==================================

For more data science resources:
https://t.me/DataScienceT

#MultimodalAI #AgenticAI #RewardModels #VisualReasoning #AIResearch
VG-Refiner: Towards Tool-Refined Referring Grounded Reasoning via Agentic Reinforcement Learning

📝 Summary:
VG-Refiner improves visual reasoning by addressing unreliable tool outputs. It uses a two-stage think-rethink mechanism and a refinement reward to correct poor tool results, significantly improving accuracy and correction ability in referring and grounding tasks.
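
🔹 Illustrative Sketch (not from the paper): a refinement reward that pays for improving on an unreliable grounding-tool output in addition to final localization quality; the weighting and IoU metric are assumptions.

```python
# Hypothetical sketch: reward the rethink stage for correcting a poor tool proposal.
def refinement_reward(initial_box, refined_box, gold_box, iou_fn, w_final=1.0, w_gain=0.5):
    iou_tool = iou_fn(initial_box, gold_box)      # grounding tool's raw proposal
    iou_refined = iou_fn(refined_box, gold_box)   # box after the rethink stage
    gain = max(iou_refined - iou_tool, 0.0)       # reward only genuine corrections
    return w_final * iou_refined + w_gain * gain
```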

🔹 Publication Date: Published on Dec 6

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.06373
• PDF: https://arxiv.org/pdf/2512.06373
• Github: https://github.com/VoyageWang/VG-Refiner

==================================

For more data science resources:
https://t.me/DataScienceT

#VisualReasoning #ReinforcementLearning #ComputerVision #AIResearch #MachineLearning
Thinking with Images via Self-Calling Agent

📝 Summary:
sCoT is a novel visual reasoning paradigm that reformulates interleaved multimodal CoT as a language-only CoT with self-calling subagents. It improves reasoning performance and efficiency by avoiding explicit multimodal interleaving and using group-relative policy optimization.
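
🔹 Illustrative Sketch (not from the paper): the self-calling pattern, where the same model answers image sub-questions so the top-level chain stays text-only; the ask() interface and stopping rule are assumptions, not sCoT's actual API.

```python
# Hypothetical sketch: a language-only top-level chain that self-calls for image sub-questions.
def solve(vlm_ask, image, question, depth=0, max_depth=2):
    plan = vlm_ask(image=None, prompt=f"Plan sub-questions for: {question}")
    answers = []
    for sub_q in plan["sub_questions"]:
        if depth < max_depth and plan.get("needs_image", True):
            # self-call: the same model handles this sub-question as a subagent
            answers.append(solve(vlm_ask, image, sub_q, depth + 1, max_depth))
        else:
            answers.append(vlm_ask(image=image, prompt=sub_q)["answer"])
    return vlm_ask(image=None,
                   prompt=f"Combine {answers} to answer: {question}")["answer"]
```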

🔹 Publication Date: Published on Dec 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.08511
• PDF: https://arxiv.org/pdf/2512.08511
• Github: https://github.com/YWenxi/think-with-images-through-self-calling

==================================

For more data science resources:
https://t.me/DataScienceT

#VisualReasoning #MultimodalAI #LLMs #AIagents #AIResearch
Puzzle Curriculum GRPO for Vision-Centric Reasoning

📝 Summary:
Puzzle Curriculum GRPO (PC-GRPO) improves VLM visual reasoning without annotations. It uses self-supervised puzzle environments to provide verifiable rewards and a difficulty-aware curriculum to enhance consistency and accuracy.
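
🔹 Illustrative Sketch (not from the paper): a self-supervised puzzle task with a verifiable reward and a difficulty-aware curriculum that grows the grid as accuracy improves; the task construction and schedule are assumptions.

```python
# Hypothetical sketch: annotation-free puzzle rewards with a difficulty curriculum.
import random

def make_puzzle(image_tiles_fn, image, grid=2):
    tiles = image_tiles_fn(image, grid)           # split into grid*grid tiles
    order = list(range(grid * grid))
    random.shuffle(order)
    shuffled = [tiles[i] for i in order]
    return shuffled, order                        # model must recover `order`

def verifiable_reward(predicted_order, true_order):
    return float(list(predicted_order) == list(true_order))  # no annotations needed

def next_difficulty(grid, recent_accuracy, target=0.8, max_grid=4):
    return min(grid + 1, max_grid) if recent_accuracy >= target else grid
```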

🔹 Publication Date: Published on Dec 16

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.14944
• PDF: https://arxiv.org/pdf/2512.14944
• Project Page: https://pcgrpo.github.io/

==================================

For more data science resources:
https://t.me/DataScienceT

#VLM #VisualReasoning #SelfSupervisedLearning #ComputerVision #AI
Latent Implicit Visual Reasoning

📝 Summary:
Large Multimodal Models struggle with visual reasoning due to their text-centric nature and the limitations of prior methods. This paper introduces a task-agnostic mechanism for LMMs to discover and use visual reasoning tokens without explicit supervision. The approach achieves state-of-the-art results.
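
🔹 Illustrative Sketch (not from the paper): one generic way to add learnable latent visual-reasoning tokens that are trained only through the standard answer loss; the sizes and insertion point are assumptions, not the paper's mechanism.

```python
# Hypothetical sketch: a learnable bank of latent tokens inserted after image features,
# trained end-to-end with the usual loss (no explicit token-level supervision).
import torch
import torch.nn as nn

class LatentReasoningTokens(nn.Module):
    def __init__(self, n_tokens=16, d_model=4096):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, image_embs, text_embs):
        b = image_embs.size(0)
        lat = self.latents.unsqueeze(0).expand(b, -1, -1)
        # [image tokens] + [latent reasoning tokens] + [text tokens]
        return torch.cat([image_embs, lat, text_embs], dim=1)
```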

🔹 Publication Date: Published on Dec 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.21218
• PDF: https://arxiv.org/pdf/2512.21218

==================================

For more data science resources:
https://t.me/DataScienceT

#LMMs #VisualReasoning #AI #ComputerVision #DeepLearning
Forest Before Trees: Latent Superposition for Efficient Visual Reasoning

📝 Summary:
Laser introduces Dynamic Windowed Alignment Learning (DWAL) for visual reasoning. This method maintains a global feature superposition, achieving state-of-the-art performance with significantly reduced computational cost.

🔹 Publication Date: Published on Jan 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.06803
• PDF: https://arxiv.org/pdf/2601.06803

==================================

For more data science resources:
https://t.me/DataScienceT

#VisualReasoning #MachineLearning #AIResearch #ComputerVision #EfficientAI