More Than "Means to an End": Supporting Reasoning with Transparently Designed AI Data Science Processes
A new paper argues AI data science tools need intermediate artifacts, not just end-to-end answers, for critical thinking.
A team of researchers from Carnegie Mellon University and UC San Francisco has published a paper titled 'More Than "Means to an End": Supporting Reasoning with Transparently Designed AI Data Science Processes.' The work, accepted to a CHI 2026 workshop, critiques the current trend toward end-to-end generative AI tools for data science, arguing that they often fail to support the critical reasoning needed for open-ended, high-stakes tasks. By analyzing two AI systems built for medical data analysis, the authors identify a key design principle behind their success: the intentional creation of intermediate artifacts.
These artifacts—which can include human-readable query languages, clear concept definitions, or curated input-output examples—act as touchpoints for user reasoning. Even when other parts of the AI pipeline remain opaque, these transparent intermediates let users evaluate alternative approaches, reformulate their initial problems, and inject their own domain expertise. The paper posits that this design shifts the AI from being a black-box solution provider to a 'tool for thought,' fostering a more collaborative and reflective analytical process. The authors invite the Human-Computer Interaction community to further explore when and how to design these intermediates to effectively augment human cognition in data science.
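To make the pattern concrete, here is a minimal, hypothetical sketch of the design principle described above: rather than returning a final answer, the system surfaces a human-readable intermediate (a plain-language concept definition paired with its executable form) that the analyst can inspect and revise before anything runs. All names, data, and the example concept here are illustrative assumptions, not taken from the paper's systems.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IntermediateQuery:
    """A transparent intermediate artifact: a readable concept definition
    (illustrative; the paper's systems use richer forms such as query
    languages and curated input-output examples)."""
    description: str                   # plain-language definition the user reviews
    predicate: Callable[[dict], bool]  # executable form of the same concept

def propose_query() -> IntermediateQuery:
    # Stand-in for an AI generation step; a real system would produce this.
    return IntermediateQuery(
        description="patients with systolic BP >= 140 on two or more visits",
        predicate=lambda p: sum(bp >= 140 for bp in p["systolic"]) >= 2,
    )

def run(query: IntermediateQuery, records: list[dict]) -> list[dict]:
    # Execution happens only after the analyst has seen (and possibly
    # edited) the intermediate, keeping human reasoning in the loop.
    return [r for r in records if query.predicate(r)]

records = [
    {"id": "a", "systolic": [150, 145, 130]},
    {"id": "b", "systolic": [120, 118]},
]
q = propose_query()
print(q.description)                    # the artifact the user reviews first
print([r["id"] for r in run(q, records)])  # → ['a']
```

The key design choice is that the `description` and the `predicate` travel together: the user critiques the readable half, and edits to it are reflected in what actually executes.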
- Critiques end-to-end AI data science tools for hindering user reasoning and problem reformulation.
- Finds success in medical AI systems came from transparent intermediate artifacts like query languages.
- Proposes designing AI as a 'tool for thought' to combine machine power with human expertise.
Why It Matters
This framework could lead to more trustworthy and collaborative AI assistants for analysts in healthcare, finance, and research.