[R] Is this paper nonsense? [DCdetector: Dual Attention Contrastive Representation Learning for Time Series Anomaly Detection]
A KDD 2023 paper's official code shows a total loss of 0.0 throughout training, suggesting fundamental flaws.
A viral Reddit investigation has cast serious doubt on the validity of 'DCdetector: Dual Attention Contrastive Representation Learning for Time Series Anomaly Detection,' a paper accepted at the prestigious KDD 2023 conference. When a user cloned and ran the official source code, they found the model's total training loss logged as exactly 0.0 at every step, even though its component losses (series loss and prior loss) took normal, non-zero values that increased over epochs. This points to a bug in how the components are combined: if the total loss is identically zero, backpropagation yields zero gradients and the model is never optimized as the paper describes. The finding is significant because the paper has been cited hundreds of times in the deep learning anomaly detection literature.
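To see how a total loss can log as exactly 0.0 while its components look healthy, consider this hypothetical sketch (not the DCdetector code; the symmetrized-KL structure and the subtraction are illustrative assumptions): if two branch losses are built from the same pair of distributions and then subtracted, they cancel identically.

```python
import math

def kl(p, q):
    """Discrete KL divergence KL(p || q) between two probability vectors."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def branch_losses(p, q):
    # Both branches symmetrize KL over the SAME distribution pair,
    # so they evaluate to the same number by construction.
    series_loss = kl(p, q) + kl(q, p)
    prior_loss = kl(q, p) + kl(p, q)
    return series_loss, prior_loss

p = [0.7, 0.2, 0.1]  # toy attention distribution, branch 1
q = [0.5, 0.3, 0.2]  # toy attention distribution, branch 2

series_loss, prior_loss = branch_losses(p, q)
total_loss = prior_loss - series_loss  # cancels to exactly 0.0

# Components are nonzero and look like a real training signal,
# but the combined loss (and hence its gradient) is identically zero.
print(series_loss, prior_loss, total_loss)
```

Under this construction, any optimizer driven by `total_loss` receives zero gradient every step, which would reproduce exactly the symptom reported: plausible-looking component curves alongside a flat 0.0 total loss.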
This incident highlights a growing crisis of reproducibility in AI research, where impressive results published at top conferences may not withstand scrutiny of their underlying code. The fact that such an apparent bug passed peer review at KDD—a premier venue for data mining research—raises questions about the review process's rigor, especially for computationally complex work. It underscores the critical need for mandatory code submission, proper artifact evaluation, and more transparent reporting of training dynamics to separate robust science from erroneous or non-replicable claims.
- The official code for the KDD 2023 paper 'DCdetector' logs a total loss of 0.0 throughout training, indicating a critical bug.
- Component losses (series and prior) show normal progression, but the combined loss is zero, invalidating the reported training procedure.
- The paper has hundreds of citations, amplifying the impact of this potential error on the anomaly detection research field.
Why It Matters
This exposes flaws in AI peer review, undermining trust in published results and forcing a reckoning on code quality and reproducibility standards.