Almost all developers want AI help, but it has to explain itself first
New research reveals developers' top AI demand: transparency and reasoning before autonomy.
A comprehensive survey by Grafana Labs reveals a critical tension in developers' adoption of AI. While an overwhelming 92% of developers see AI's value in preemptively detecting system issues and 91% appreciate its potential for root cause analysis, a striking 95% insist that AI must explain its reasoning before they can fully trust it. This demand for explainable AI stems from a core concern: developers fear that the manual labor of feeding context to AI models erases the very productivity gains the technology promises. The lack of transparency creates a trust deficit, with 15% of respondents explicitly stating they do not trust AI to take autonomous actions.
The report further finds that effective AI assistance is intrinsically linked to solving observability challenges. While 77% of organizations with centralized observability report saving time and money, many still struggle with complex, siloed data setups. This fragmentation is a key obstacle, because AI needs access to unified data to function well. The research suggests AI could directly address major pain points such as alert fatigue, which 30% of respondents cited as slowing incident response. The path forward, according to Grafana Labs, is clear: developers want AI tools that first prove their trustworthiness through transparency and context-awareness, maturing into true productivity partners before moving toward full autonomy.
- 95% of developers require AI to explain its reasoning to validate outputs before trusting it.
- 92% see value in AI detecting issues before they cause downtime, but 15% distrust autonomous AI actions.
- The top developer worry is that manually supplying context to AI negates its promised time savings.
Why It Matters
For AI to become a true developer copilot, toolmakers must prioritize explainability and seamless data integration over pure automation.