FrameRef: A Framing Dataset and Simulation Testbed for Modeling Bounded Rational Information Health
Researchers release a 1M+ claim dataset to test how small algorithmic nudges compound into major belief shifts.
Researchers Victor De Lima, Jiqun Liu, and Grace Hui Yang built FrameRef, a dataset of 1,073,740 claims systematically reframed along five dimensions (e.g., authoritative, emotional). They also created a simulation testbed that pairs fine-tuned LLM personas with Monte Carlo sampling, enabling systematic study of how ranking and recommendation algorithms can steer user beliefs and information health over time through subtle, compounding framing effects.
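The core intuition behind the testbed, that a tiny per-exposure framing bias compounds into a large belief shift over repeated exposures, can be illustrated with a minimal Monte Carlo sketch. This is not the paper's actual simulation; the belief-update rule, the `nudge` parameter, and all numbers below are hypothetical, chosen only to show how a small bias dominates symmetric noise after many exposures.

```python
import random

def simulate_belief_drift(n_users=1000, n_exposures=50, nudge=0.02, seed=0):
    """Toy Monte Carlo: each simulated user starts at a neutral belief (0.5)
    and receives repeated exposures. Each update mixes symmetric noise with a
    small systematic framing bias ('nudge'); the bias compounds over time.
    All parameters are illustrative, not from the FrameRef paper."""
    rng = random.Random(seed)
    final_beliefs = []
    for _ in range(n_users):
        belief = 0.5
        for _ in range(n_exposures):
            # zero-mean noise plus a tiny bias pulling belief toward 1.0
            step = rng.gauss(0.0, 0.01) + nudge * (1.0 - belief)
            belief = min(1.0, max(0.0, belief + step))
        final_beliefs.append(belief)
    return sum(final_beliefs) / n_users

neutral = simulate_belief_drift(nudge=0.0)   # noise only: mean stays near 0.5
nudged = simulate_belief_drift(nudge=0.02)   # 2% per-exposure bias compounds
```

With no nudge, the noise averages out and the population mean stays near the neutral starting point; with a 2% per-exposure bias, the mean belief drifts well past 0.7 after 50 exposures, which is the compounding effect the testbed is designed to measure at scale.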
Why It Matters
FrameRef provides a concrete tool for auditing ranking and recommendation systems, letting designers measure framing-driven belief drift before deploying less manipulative search and recommendation AI.