Learning under noisy supervision is governed by a feedback-truth gap
A new study shows AI models and humans both prioritize fast feedback over slow truth, creating a universal learning bias.
Researchers Elan Schonfeld and Elias Wisnia published a paper identifying a "feedback-truth gap": the lag that arises when learning systems absorb feedback faster than they can evaluate the underlying truth. In experiments spanning 2,700 neural-network runs and 317 human participants, the gap appeared universally in noisy supervision scenarios. Dense neural networks accumulate the gap as memorization, while sparse architectures and humans employ regulatory mechanisms to manage it, fundamentally constraining learning under imperfect labels.
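The memorization failure mode the paper attributes to dense networks can be illustrated with a toy experiment that is not from the paper itself: a high-capacity model (here a 1-nearest-neighbor classifier, which interpolates its training set) fits noisy feedback perfectly, and that memorized noise caps its agreement with the ground truth. All names, the noise rate, and the dataset below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters; the cluster identity is the ground truth.
n = 400
X = np.concatenate([rng.normal(-2, 1, (n // 2, 2)),
                    rng.normal(+2, 1, (n // 2, 2))])
y_true = np.array([0] * (n // 2) + [1] * (n // 2))

# Noisy supervision: flip roughly 30% of the labels at random.
noise_rate = 0.3
flip = rng.random(n) < noise_rate
y_noisy = np.where(flip, 1 - y_true, y_true)

def knn_predict(X_train, y_train, X_query, k=1):
    """Predict by majority vote over the k nearest training points."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (y_train[idx].mean(axis=1) > 0.5).astype(int)

# An interpolating model reproduces its noisy feedback exactly...
train_fit = (knn_predict(X, y_noisy, X, k=1) == y_noisy).mean()

# ...but the memorized noise limits agreement with the underlying truth.
truth_acc = (knn_predict(X, y_noisy, X, k=1) == y_true).mean()

print(f"fit to noisy feedback: {train_fit:.2f}")  # 1.00 by construction
print(f"agreement with truth:  {truth_acc:.2f}")  # roughly 1 - noise_rate
```

The gap between the two numbers is exactly the memorized label noise: feedback is fit perfectly while truth is not, the sketch's analogue of absorbing feedback faster than truth can be evaluated.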
Why It Matters
This finding explains why AI models trained on noisy data develop biases, and it points to architectural changes, such as sparsity, for building more robust learning systems.