✨Can LLMs Estimate Student Struggles? Human-AI Difficulty Alignment with Proficiency Simulation for Item Difficulty Prediction
📝 Summary:
LLMs estimate human cognitive difficulty on educational tasks poorly. Scaling up models does not improve alignment with human judgments; instead, models converge toward a shared machine consensus and fail to simulate student struggles or exhibit introspection.
🔹 Publication Date: Published on Dec 21
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.18880
• PDF: https://arxiv.org/pdf/2512.18880
• Github: https://github.com/MingLiiii/Difficulty_Alignment
==================================
For more data science resources:
✓ https://t.me/DataScienceT
#LLM #EducationalAI #ItemDifficulty #HumanAIAlignment #AIResearch