ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity

📝 Summary:
UnSAMv2 enables continuous control over segmentation granularity in the SAM model without human annotations. It uses self-supervised learning on unlabeled data to discover mask–granularity pairs and a novel control embedding. UnSAMv2 significantly enhances SAM-2's performance across various segmentation…
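The idea of continuous granularity control can be illustrated with a toy sketch. This is not UnSAMv2's actual implementation — the sinusoidal embedding and the area-threshold selection rule below are hypothetical stand-ins: a scalar granularity value in [0, 1] is mapped to a control embedding, and higher values admit progressively finer (smaller) masks.

```python
import numpy as np

def granularity_embedding(g: float, dim: int = 8) -> np.ndarray:
    """Map a continuous granularity value g in [0, 1] to a sinusoidal
    control embedding (toy analogue of UnSAMv2's control embedding)."""
    freqs = 2.0 ** np.arange(dim // 2)          # geometric frequency ladder
    angles = g * freqs * np.pi
    return np.concatenate([np.sin(angles), np.cos(angles)])

def select_masks(mask_areas: list[float], g: float) -> list[float]:
    """Toy granularity gate: g = 0 keeps only the coarsest (largest) mask,
    g = 1 admits every mask down to the finest."""
    threshold = (1.0 - g) * max(mask_areas)
    return [area for area in mask_areas if area >= threshold]
```

For example, with candidate masks of areas [100, 50, 10], `select_masks(..., 0.0)` keeps only the coarse 100-area mask, while `select_masks(..., 1.0)` keeps all three — a discrete caricature of sliding the granularity knob continuously.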

🔹 Publication Date: Published on Nov 17, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.13714
• PDF: https://arxiv.org/pdf/2511.13714
• Project Page: https://yujunwei04.github.io/UnSAMv2-Project-Page/
• Github: https://github.com/yujunwei04/UnSAMv2

Spaces citing this paper:
https://huggingface.co/spaces/yujunwei04/UnSAMv2

==================================

For more data science resources:
https://t.me/DataScienceT

#AI #ComputerVision #SelfSupervisedLearning #ImageSegmentation #DeepLearning
SAM 3: Segment Anything with Concepts

📝 Summary:
SAM 3 is a unified model achieving state-of-the-art results in promptable concept segmentation and tracking. It uses concept prompts to detect, segment, and track objects, doubling accuracy over existing systems. The model and a new benchmark are open-sourced.
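What "promptable concept segmentation" means can be sketched with a minimal toy matcher — not SAM 3's actual architecture; the cosine-similarity rule and threshold below are hypothetical. The key behavior: a single concept embedding retrieves every matching region instance, rather than one region per prompt.

```python
import numpy as np

def concept_segment(region_embs: np.ndarray, concept_emb: np.ndarray,
                    tau: float = 0.5) -> list[int]:
    """Toy concept prompting: return indices of ALL regions whose
    embedding matches the concept above a cosine-similarity threshold."""
    sims = region_embs @ concept_emb / (
        np.linalg.norm(region_embs, axis=1) * np.linalg.norm(concept_emb))
    return [i for i, s in enumerate(sims) if s > tau]
```

With toy region embeddings [[1, 0], [0.9, 0.1], [0, 1]] and concept embedding [1, 0], the first two regions match and both are returned — the concept prompt selects every instance at once.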

🔹 Publication Date: Published on Nov 20, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.16719
• PDF: https://arxiv.org/pdf/2511.16719
• Project Page: https://ai.meta.com/sam3/
• Github: https://github.com/facebookresearch/sam3

==================================

#ComputerVision #ImageSegmentation #ObjectTracking #AI #DeepLearning
MedSAM3: Delving into Segment Anything with Medical Concepts

📝 Summary:
MedSAM-3 is a text-promptable medical segmentation model built on SAM 3 and fine-tuned with semantic concept labels. It enables precise, open-vocabulary, text-based segmentation of anatomical structures and integrates MLLMs for advanced reasoning. This approach significantly outperforms existing models…

🔹 Publication Date: Published on Nov 24, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19046
• PDF: https://arxiv.org/pdf/2511.19046
• Github: https://github.com/Joey-S-Liu/MedSAM3

==================================

#MedicalAI #ImageSegmentation #DeepLearning #MLLMs #FoundationModels
The SAM2-to-SAM3 Gap in the Segment Anything Model Family: Why Prompt-Based Expertise Fails in Concept-Driven Image Segmentation

📝 Summary:
This paper highlights the gap between SAM2 and SAM3: SAM2 relies on spatial prompts (points, boxes, masks) for geometric segmentation, whereas SAM3 is a concept-driven multimodal model with a unified vision-language architecture. SAM3 thus represents a new class of foundation model for concept-driven segmentation.
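The prompting gap the paper describes can be made concrete with a toy contrast — hypothetical stand-ins, not the models' real APIs: a SAM2-style spatial prompt resolves to the single region under a click, while a SAM3-style concept prompt resolves to every instance of a named concept.

```python
from dataclasses import dataclass

@dataclass
class Region:
    label: str
    bbox: tuple  # (x0, y0, x1, y1)

def spatial_prompt(regions: list[Region], point: tuple) -> list[Region]:
    """SAM2-style prompting: a click selects the one region containing it."""
    x, y = point
    hits = [r for r in regions
            if r.bbox[0] <= x <= r.bbox[2] and r.bbox[1] <= y <= r.bbox[3]]
    return hits[:1]

def concept_prompt(regions: list[Region], label: str) -> list[Region]:
    """SAM3-style prompting: a text concept selects every matching instance."""
    return [r for r in regions if r.label == label]
```

Given two cats and a dog in a scene, clicking inside one cat returns exactly that cat, while prompting with the concept "cat" returns both — the behavioral shift at the heart of the SAM2-to-SAM3 gap.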

🔹 Publication Date: Published on Dec 4, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.06032
• PDF: https://arxiv.org/pdf/2512.06032
• Github: https://github.com/Applied-AI-Research-Lab/The-SAM2-to-SAM3-Gap-in-the-Segment-Anything-Model-Family

==================================

#ImageSegmentation #FoundationModels #ComputerVision #MultimodalAI #AIResearch