ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Easy Dataset: A Unified and Extensible Framework for Synthesizing LLM Fine-Tuning Data from Unstructured Documents

📝 Summary:
Easy Dataset is a framework that synthesizes LLM fine-tuning data from unstructured documents, combining a GUI with LLM-driven generation. It produces domain-specific question-answer pairs under human oversight, improving LLM performance in target domains while retaining general knowledge.
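The pipeline the summary describes — chunk documents, then prompt an LLM for a QA pair per chunk, keeping the source chunk for human review — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; `ask_llm` is a hypothetical stand-in for any LLM completion call, and the chunking and prompts are assumptions.

```python
# Minimal sketch of document-to-QA synthesis in the spirit of Easy Dataset.
# `ask_llm` is a hypothetical callable: prompt string -> model response string.

def chunk_document(text: str, size: int = 500) -> list[str]:
    """Split raw document text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def synthesize_qa(chunks: list[str], ask_llm) -> list[dict]:
    """Generate one question-answer pair per chunk; keep the source
    chunk alongside each pair so a human reviewer can verify it."""
    pairs = []
    for chunk in chunks:
        question = ask_llm(
            f"Write one domain-specific question answered by this passage:\n{chunk}"
        )
        answer = ask_llm(
            f"Answer using only this passage:\n{chunk}\n\nQ: {question}"
        )
        pairs.append({"instruction": question, "output": answer, "source": chunk})
    return pairs
```

The human-oversight step then amounts to reviewing each `pairs[i]` against its `source` before the pair enters the fine-tuning set.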

🔹 Publication Date: Published on Jul 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2507.04009
• PDF: https://arxiv.org/pdf/2507.04009
• Github: https://github.com/ConardLi/easy-dataset

==================================

For more data science resources:
https://t.me/DataScienceT

#LLM #DataSynthesis #FineTuning #AI #NLP
GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation

📝 Summary:
GraphGen is a framework that enhances synthetic data generation for LLMs by constructing fine-grained knowledge graphs. It targets high-value knowledge gaps and uses multi-hop sampling and style-controlled generation to create diverse, accurate QA pairs. This approach outperforms conventional methods.
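The multi-hop sampling step mentioned above — walking several edges of the knowledge graph to gather a connected context for one QA pair — can be sketched as below. This is an illustrative sketch under assumed data structures (an adjacency-list `dict`), not GraphGen's actual sampler.

```python
import random

def multi_hop_sample(graph: dict[str, list[str]], start: str,
                     hops: int, rng=random) -> list[str]:
    """Walk up to `hops` edges from `start`, collecting the visited
    entities; the resulting path is the seed context for one
    multi-hop QA pair."""
    path = [start]
    node = start
    for _ in range(hops):
        neighbors = graph.get(node, [])
        if not neighbors:
            break  # dead end: stop early with a shorter path
        node = rng.choice(neighbors)
        path.append(node)
    return path
```

A question grounded in such a path forces the model to chain facts across entities rather than recall a single triple.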

🔹 Publication Date: Published on May 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2505.20416
• PDF: https://arxiv.org/pdf/2505.20416
• Project Page: https://huggingface.co/spaces/chenzihong/GraphGen
• Github: https://github.com/open-sciencelab/GraphGen

Datasets citing this paper:
https://huggingface.co/datasets/chenzihong/GraphGen-Data

Spaces citing this paper:
https://huggingface.co/spaces/chenzihong/GraphGen

==================================


#LLMs #KnowledgeGraphs #SyntheticData #FineTuning #NLP
🤖🧠 How to Run and Fine-Tune Kimi K2 Thinking Locally with Unsloth

🗓️ 11 Dec 2025
📚 AI News & Trends

The demand for efficient and powerful large language models (LLMs) continues to rise as developers and researchers seek new ways to optimize reasoning, coding, and conversational AI performance. One of the most impressive open-source AI systems available today is Kimi K2 Thinking, created by Moonshot AI. Through collaboration with Unsloth, users can now fine-tune and ...

#KimiK2Thinking #Unsloth #LLMs #LargeLanguageModels #AI #FineTuning
SWE-Lego: Pushing the Limits of Supervised Fine-tuning for Software Issue Resolving

📝 Summary:
SWE-Lego achieves state-of-the-art software issue resolution through a lightweight supervised fine-tuning approach. It uses a high-quality dataset and refined training procedures like error masking and a difficulty-based curriculum, outperforming complex methods. Performance is further boosted by...
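Two of the training refinements named above have common, simple forms worth illustrating. Error masking is typically done by setting masked positions' labels to the ignore index so they contribute no loss; a difficulty-based curriculum orders samples easy-to-hard. The sketch below shows these generic mechanisms under stated assumptions — the paper's exact masking criterion and difficulty measure are not given in this summary, so `prompt_len` masking and the `difficulty` callable are illustrative placeholders.

```python
IGNORE_INDEX = -100  # label value conventionally skipped by LM loss functions

def mask_labels(input_ids: list[int], prompt_len: int) -> list[int]:
    """Copy input_ids as labels, but mask the first `prompt_len`
    positions so loss is computed only on the remaining tokens."""
    return [IGNORE_INDEX if i < prompt_len else tok
            for i, tok in enumerate(input_ids)]

def curriculum_order(samples: list, difficulty) -> list:
    """Sort training samples easy-to-hard by a difficulty score,
    the basic form of a difficulty-based curriculum."""
    return sorted(samples, key=difficulty)
```

In a real SFT loop, `mask_labels` would feed a loss configured with `ignore_index=-100`, and `curriculum_order` would decide batch ordering across epochs.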

🔹 Publication Date: Published on Jan 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2601.01426
• PDF: https://arxiv.org/pdf/2601.01426
• Project Page: https://github.com/SWE-Lego/SWE-Lego
• Github: https://github.com/SWE-Lego/SWE-Lego

🔹 Models citing this paper:
https://huggingface.co/SWE-Lego/SWE-Lego-Qwen3-8B
https://huggingface.co/SWE-Lego/SWE-Lego-Qwen3-32B

Datasets citing this paper:
https://huggingface.co/datasets/SWE-Lego/SWE-Lego-Real-Data
https://huggingface.co/datasets/SWE-Lego/SWE-Lego-Synthetic-Data

==================================


#SoftwareEngineering #MachineLearning #LLM #FineTuning #AIforCode