ReSS: Learning Reasoning Models for Tabular Data Prediction via Symbolic Scaffold
New method bridges decision trees and LLMs to create explainable models for finance and healthcare.
A research team led by Chenlang Yi has introduced ReSS (Reasoning Models for Tabular Data Prediction via Symbolic Scaffold), a novel framework that bridges symbolic and neural approaches to create specialized AI models for tabular data. The system first uses decision trees to extract instance-level decision paths as symbolic scaffolds, which serve as logical blueprints. These scaffolds, combined with input features and labels, then guide a large language model to generate grounded natural-language reasoning that strictly adheres to the underlying decision logic.
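The paper's implementation is not shown here, but the scaffold step can be approximated with scikit-learn. Below is a minimal sketch: it walks a fitted decision tree, renders one instance's root-to-leaf path as readable rules, and packs those rules into a prompt. The `extract_scaffold` helper, the toy data, and the prompt template are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (illustrative, not the authors' code): extract a decision
# path from a fitted scikit-learn tree as a symbolic scaffold for an LLM.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_scaffold(clf: DecisionTreeClassifier, x: np.ndarray, feature_names):
    """Return the decision path for one instance as a list of readable rules."""
    tree = clf.tree_
    node, rules = 0, []
    while tree.children_left[node] != -1:  # -1 marks a leaf node in sklearn
        f, t = tree.feature[node], tree.threshold[node]
        if x[f] <= t:
            rules.append(f"{feature_names[f]} <= {t:.3f}")
            node = tree.children_left[node]
        else:
            rules.append(f"{feature_names[f]} > {t:.3f}")
            node = tree.children_right[node]
    pred = int(np.argmax(tree.value[node]))  # majority class at the leaf
    rules.append(f"=> predicted class {clf.classes_[pred]}")
    return rules

# Toy example: fit on synthetic rows, then turn one row's path into a prompt.
X = np.array([[35, 52000], [62, 31000], [45, 88000], [29, 41000]], dtype=float)
y = np.array([0, 1, 0, 1])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
scaffold = extract_scaffold(clf, X[1], ["age", "income"])
prompt = "Explain the prediction, citing only these rules:\n" + "\n".join(scaffold)
print(prompt)
```

Constraining the LLM to cite only the rules on the path is what keeps the generated reasoning anchored to the tree's decision logic rather than to free-form model priors.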
The resulting high-quality dataset is then used to fine-tune a specialized tabular reasoning model, with a scaffold-invariant data augmentation strategy to improve generalization. The researchers also introduce quantitative metrics, including hallucination rate, explanation necessity, and explanation sufficiency, to rigorously assess faithfulness. Experimental results on medical and financial benchmarks show that ReSS-trained models outperform traditional decision trees and standard fine-tuning approaches by up to 10% while producing consistent, verifiable reasoning.
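The paper's exact metric definitions are not reproduced in this summary. As a rough sketch of how a hallucination rate could be instantiated, one can count the rule-like claims in a generated explanation that the scaffold does not license; the `CLAIM` pattern and `hallucination_rate` helper below are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's definition): measure how many
# "feature <op> value" claims in an explanation are absent from the scaffold.
import re

CLAIM = re.compile(r"(\w+)\s*(<=|>=|<|>)\s*(-?\d+(?:\.\d+)?)")

def hallucination_rate(explanation: str, scaffold_rules: list[str]) -> float:
    """Fraction of rule-like claims in the explanation not grounded in the scaffold."""
    grounded = {m.groups() for r in scaffold_rules for m in [CLAIM.search(r)] if m}
    claims = CLAIM.findall(explanation)
    if not claims:
        return 0.0
    return sum(c not in grounded for c in claims) / len(claims)

scaffold = ["age > 40.000", "income <= 60000.000"]
expl = "Because age > 40.000 and income <= 60000.000 and debt > 5000.0, predict 1."
print(hallucination_rate(expl, scaffold))  # 1 of 3 claims ungrounded -> ~0.33
```

Necessity and sufficiency could plausibly be probed in the same spirit, by ablating scaffold rules from the explanation and checking whether the model's prediction or justification still holds, though the paper's precise formulation may differ.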
This approach directly addresses the dual challenges of scalable data curation and reasoning consistency in high-stakes domains. Unlike general-purpose LLMs, which require extensive fine-tuning to master domain-specific tabular reasoning, ReSS provides a systematic method for ensuring both accuracy and explainability. The framework represents a significant step toward trustworthy AI systems that can be deployed in regulated industries where decision transparency is non-negotiable.
- Bridges symbolic decision trees with neural LLMs to create specialized tabular reasoning models
- Improves accuracy by up to 10% over traditional methods on medical and financial benchmarks
- Introduces quantitative metrics (hallucination rate, explanation necessity/sufficiency) to assess reasoning faithfulness
Why It Matters
Enables deployable, explainable AI for critical decisions in healthcare, finance, and other regulated industries where transparency is mandatory.