Models & Releases

Research: Prompt Repetition Improves Non-Reasoning LLMs (sending the same prompt twice)

A simple copy-paste trick improves accuracy on non-reasoning tasks by 21–97% across major models.

Deep Dive

Researchers from Endonium discovered that duplicating a prompt before sending it significantly improves performance on non-reasoning tasks. Their study reports accuracy gains of 21–97% on tasks like classification and extraction for models such as GPT-4 and Claude 3. Users can implement this by simply copying their entire query and pasting it again before submission, a technique that requires no complex engineering or prompt tuning.
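The technique above amounts to a one-line string operation before the API call. A minimal sketch, assuming a plain newline separator between the two copies (the study's exact formatting, and the helper name `duplicate_prompt`, are illustrative assumptions, not the researchers' published code):

```python
def duplicate_prompt(prompt: str, separator: str = "\n\n") -> str:
    """Return the prompt repeated twice as a single message.

    The doubled string is what gets sent as the user message to
    whichever chat API you are using (OpenAI, Anthropic, etc.).
    """
    return f"{prompt}{separator}{prompt}"


# Example: a classification query, doubled before submission.
query = "Classify the sentiment of this review: 'The battery died in a day.'"
doubled = duplicate_prompt(query)
print(doubled)
```

The doubled string replaces the original user message; no system-prompt changes, sampling-parameter tuning, or model swaps are involved.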

Why It Matters

This simple, free trick can immediately improve output quality for common business tasks without changing models or workflows.