TrustTrade: Human-Inspired Selective Consensus Reduces Decision Uncertainty in LLM Trading Agents
New framework mimics human skepticism, prioritizing consistent signals to reduce hallucinations and stabilize returns.
A team of researchers including Minghan Li, Rachel Gonsalves, and Weiyue Li has published a paper introducing TrustTrade (Trust-Rectified Unified Selective Trader), a novel framework designed to make AI-powered financial trading agents more reliable. The core problem they address is the 'uniform trust' bias in current LLM agents, where all retrieved information is implicitly treated as equally factual. This makes systems vulnerable to multi-source noise and misinformation, leading to amplified hallucinations and unstable trading performance.
TrustTrade's solution is a human-inspired, multi-agent selective consensus framework. It aggregates information from multiple independent LLM agents and dynamically weights trading signals based on their semantic and numerical agreement. Consistent signals are prioritized, while divergent or weakly grounded inputs are selectively discounted. The system also incorporates deterministic temporal signals as anchors and a reflective memory mechanism that adapts risk preferences in real-time without additional training.
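The selective-consensus idea described above can be sketched in a few lines. The signal representation (a float in [-1, 1] per agent) and the distance-from-median weighting below are illustrative assumptions for this sketch, not the paper's actual method: the point is only that divergent signals receive smaller weights, so a single hallucinated outlier cannot dominate the aggregate.

```python
import statistics

def consensus_weighted_signal(signals):
    """Aggregate per-agent trading signals, weighting each by group agreement.

    `signals` is a list of floats in [-1, 1] (strong sell .. strong buy),
    one per independent LLM agent. Hypothetical weighting: each signal's
    weight decays with its distance from the group median, so divergent
    (possibly hallucinated) signals are selectively discounted.
    """
    median = statistics.median(signals)
    # Agreement weight: 1.0 when a signal sits at the median, shrinking
    # toward 0 as the signal diverges from the group.
    weights = [1.0 / (1.0 + abs(s - median)) for s in signals]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, signals)) / total

# Three agents agree on a mild buy; one outlier reports a strong sell.
signals = [0.4, 0.5, 0.45, -0.9]
blended = consensus_weighted_signal(signals)
naive_mean = sum(signals) / len(signals)
```

Here `blended` stays closer to the majority's mild-buy view than the naive mean does, because the outlier's weight is roughly halved relative to the agreeing agents.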
The result is a significant reduction in noise amplification and hallucination-driven volatility. In controlled backtesting on high-noise market environments from 2024 and 2026, TrustTrade successfully calibrated LLM trading behavior: it moved agents away from extreme regimes, whether high-risk/high-return or low-risk/low-return, toward a more stable, human-aligned mid-risk, mid-return profile. This represents a crucial step in bridging the behavioral gap between naive AI agents and experienced human traders, who naturally filter and cross-validate information.
- Addresses 'uniform trust' bias where LLMs treat all data as equally reliable, a major source of trading risk.
- Uses multi-agent consensus to dynamically weight signals, prioritizing agreement and discounting noisy or divergent information.
- Backtesting in high-noise 2024/2026 markets showed a shift from extreme risk profiles to stable, human-aligned performance.
Why It Matters
Makes autonomous AI trading systems more robust and less prone to costly errors caused by misinformation or data noise.