An Eye for Trust: An Exploration of Developers' Trust Perceptions Through Urgency and Reputation
Researchers used eye tracking on 37 developers to uncover subconscious trust factors in code review.
A team of researchers from multiple institutions, led by Zohreh Sharafi, conducted the first controlled eye-tracking experiment measuring how urgency and reputation influence developers' trust in code. The study, titled 'An Eye for Trust: An Exploration of Developers' Trust Perceptions Through Urgency and Reputation,' involved 37 developers reviewing code patches. The researchers manipulated variables such as the stated priority level of a task and the purported experience level of the code's author (junior vs. senior) to see how these factors shaped the review process.
Eye-tracking data revealed significant behavioral changes: developers spent different amounts of time, experienced different levels of cognitive load, and distributed their visual attention differently when reviewing identical code labeled as coming from a senior versus a junior developer. Code marked as 'urgent' likewise altered review patterns. However, these subconscious biases during review did not carry through to the final decision of whether to adopt and implement the code.
Interestingly, when surveyed, participants cited code functionality, quality, and comprehensibility as their primary evaluation criteria, largely overlooking the substantial influence that urgency and author reputation had on their own review behavior. This disconnect highlights a gap between developers' stated priorities and the subconscious factors that guide their attention and cognitive effort during code assessment.
- Eye tracking on 37 developers showed subconscious bias: identical code received different visual attention based on labeled author seniority.
- Urgency (task priority) significantly altered review behavior, including evaluation time and cognitive load, but not final adoption decisions.
- Developers claimed to judge code on functionality and quality, yet were largely unaware of how strongly reputation and urgency shaped their review process.
Why It Matters
This research informs the design of code review platforms, helps calibrate trust in AI-generated code, and supports guidelines for mitigating subconscious bias in software engineering practice.