[D] 100 Hallucinated Citations Found in 51 Accepted Papers at NeurIPS 2025
https://gptzero.me/news/neurips
I remember this was shared last month about ICLR where they found hallucinations in submitted papers, but I didn't expect to see them in accepted papers as well
https://redd.it/1qjz88r
@datascientology
AI Detection Resources | GPTZero
GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers
GPTZero's analysis of 4,841 papers accepted at NeurIPS 2025 shows that at least 100 contain confirmed hallucinations
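GPTZero hasn't published the exact pipeline behind these numbers, but a common way to flag a possibly fabricated reference is to look its title up in a bibliographic database such as Crossref and check whether anything close actually exists. A minimal sketch, assuming the public Crossref REST API and making no claim about GPTZero's actual method:

import requests

def crossref_candidates(ref_title: str, rows: int = 3) -> list[str]:
    # Ask Crossref for works whose metadata best matches the cited title.
    # An empty or poorly matching result is a signal, not proof, that the
    # citation may be hallucinated.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": ref_title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(item.get("title") or [""])[0] for item in items]

# Hypothetical cited title; compare the results against what the paper cites.
print(crossref_candidates("Attention Is All You Need"))

A human still has to confirm each flag: titles get abbreviated, preprints change names, and fuzzy matching alone produces false positives of its own.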
[D] Saw this paper from ICLR with scores 2,2,2,4 and it got accepted, HOW
https://openreview.net/forum?id=05hNleYOcG
How is this even possible?
https://redd.it/1qxdaqk
@datascientology
openreview.net
PLAGUE: Plug-and-play Framework for Lifelong Adaptive Generation of...
Large Language Models (LLMs) are improving at an exceptional rate. With the advent of agentic workflows, multi-turn dialogue has become the de facto mode of interaction with LLMs for completing...
[P] A Python library for processing geospatial data for GNNs with PyTorch Geometric
https://redd.it/1r02y6y
@datascientology
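The post doesn't name the library, so as context only: the usual target format for geospatial GNN inputs is a torch_geometric.data.Data object, with coordinates in pos, attributes in x, and some spatial relation as edge_index. A minimal sketch with made-up data, using a k-nearest-neighbour graph purely as a stand-in for whatever spatial relation the library actually builds:

import torch
from torch_geometric.data import Data
from torch_geometric.transforms import KNNGraph

# Made-up inputs: 100 locations with projected (x, y) coordinates
# and 5 attributes each (e.g. elevation, land use, population).
coords = torch.rand(100, 2)
features = torch.rand(100, 5)

data = Data(x=features, pos=coords)
# KNNGraph (requires the torch-cluster package) fills in edge_index by
# connecting each point to its k nearest neighbours in `pos`.
data = KNNGraph(k=8)(data)
print(data)  # Data(x=[100, 5], pos=[100, 2], edge_index=[2, 800])

Whatever relation the library derives (road networks, polygon adjacency, etc.), a Data object of this shape is what PyTorch Geometric models consume.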
Building A.I. navigation software that will only require a camera, a Raspberry Pi, and a WiFi connection (DAY 6)
https://redd.it/1ryiw07
@datascientology
[D] Thoughts on the controversy about Google's new paper?
Openreview: https://openreview.net/forum?id=tO3ASKZlok
It's sad to see almost no one mentioning this on Reddit, and people are being mean to those who point out concerns
Edit: Google allegedly did the following in their trending TurboQuant paper:
1. Did not fully attribute the previous work RaBitQ
2. Made an unfair comparison with RaBitQ (single-core CPU vs GPU)
https://redd.it/1s7m7rn
@datascientology
[D] How to break free from LLMs' chains as a PhD student?
I didn't realize it, but over the past year I have become over-reliant on ChatGPT to write code. I am a second-year PhD student and don't want to end up as someone with fake "coding skills" after I graduate. I hear people say all the time to use LLMs for the boring parts of the code and write the core stuff yourself, but the truth is, LLMs are getting better and better at writing even those parts if you write the prompt well (or at least they give you a template you can play around with to cross the finish line). Even PhD advisors are well aware that their students use LLMs to assist with research work, and they mentally expect quicker results.
I am currently coping with imposter syndrome because my advisor is happy with my progress, but deep down I know that not 100% of it is my own output. I have started feeling like LLMs have tied my hands so tightly that I can't function without them.
What would be some strategies to reduce dependency on LLMs for work?
https://redd.it/1sdmn97
@datascientology
For Physical AI applications, why do most robotics companies use 3D cameras?
Hi there! I'm a regular guy working at a company that makes cameras and CCTVs. After watching how BIG "physical AI" was at CES 2026, my boss asked me to do research on whether my company could enter the market with some kind of a robotic vision system/module.
At first, my thought was that we could just start off by making active stereo cameras like RealSense since lots of companies seem to be making heavy use of stereo vision systems in their designs. But as I did more research, I was told multiple times that most calculations are actually done with 2D RGB images, not with the point cloud data which the 3D cameras are intended to produce.
Is this true? Are 3D cameras being used just as a temporary step before moving completely to multiple RGB cameras? Is there any consensus on what robotic vision systems will look like in the future?
Thank you for reading my post.
https://redd.it/1sh9gia
@datascientology
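For context on the question, the two representations are closer than they look: an active stereo camera like a RealSense outputs a depth map aligned with an RGB image, and the point cloud is just a pinhole back-projection of that depth map. A minimal sketch with illustrative intrinsics, not values from any real device:

import numpy as np

# Illustrative pinhole intrinsics; a real camera ships calibrated values.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# Stand-in depth map in metres, as an active stereo camera would output.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))

v, u = np.indices(depth.shape)      # per-pixel row (v) and column (u)
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (307200, 3) cloud

So a pipeline that "works on 2D RGB images" can still recover 3D whenever depth (measured or learned monocular) is available; the hardware question is really about where that depth comes from.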
[D] ICML 2026: Extending the deadline for reviewer final justifications while not extending it for Author-AC comments was a huge mistake
Just as the title says, I believe the decision to extend the deadline for reviewers to post their final justifications, while not allowing authors to contact their ACs, was a big misstep. I have a reviewer who, in their final justification, questions the reliability of the experimental setup and evaluation, as well as the fairness of comparison, issues that were never brought up in the initial review or in their response to our rebuttal. It seems as though they were looking for reasons to justify not moving their score up from weak accept. It now feels like, despite the otherwise strong reviews leaning accept, this one review might tank the paper.
https://redd.it/1sjzr15
@datascientology
[D] Was looking at an ICLR 2025 oral paper and I am shocked it got an oral
After my last post about score analysis of ICLR, I am now looking into the reviews themselves.
They evaluated SQL code generation by LLMs using a natural-language metric rather than an execution metric, and when they tested it they found around a 20% false-positive rate. This is a major flaw; how did it even get an oral?
https://openreview.net/forum?id=GGlpykXDCa
https://redd.it/1slxqac
@datascientology
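For readers unfamiliar with the distinction the post is drawing: an execution metric runs the predicted and reference SQL against the same database and compares result sets, whereas a natural-language (string-similarity) metric can mark a query "correct" even when it returns the wrong rows, hence the false positives. A minimal sketch of execution matching against SQLite, simplified in that it ignores ORDER BY semantics and treats any failing query as wrong:

import sqlite3
from collections import Counter

def execution_match(pred_sql: str, gold_sql: str, db_path: str) -> bool:
    # Two queries count as equivalent only if they return the same rows
    # when executed on the same database.
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # a query that doesn't run can't be a match
    finally:
        conn.close()
    # Compare as multisets so row order doesn't matter.
    return Counter(pred_rows) == Counter(gold_rows)

Benchmarks like Spider use a more careful variant of this idea; the point is simply that the check depends on what the query returns, not on what its text looks like.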