Research & Papers

Linear Image Generation by Synthesizing Exposure Brackets

A new DiT-based method generates RAW-like images with full dynamic range for AI editing.

Deep Dive

Researchers from Tsinghua University and the Inception Institute of AI have published a paper, 'Linear Image Generation by Synthesizing Exposure Brackets,' introducing a novel approach to generating linear (scene-referred) images directly from text prompts. The method addresses a key limitation of current generative models: they produce display-referred images, which compress dynamic range and limit post-processing flexibility.

The team’s approach represents a linear image as a sequence of exposure brackets, each capturing a specific slice of the dynamic range, and synthesizes that sequence with a Diffusion Transformer (DiT)-based flow-matching architecture. Because the bracket stack covers the scene's full range, the model preserves extreme highlights and shadows that a single display-referred image would discard. The paper, accepted at CVPR 2026, also demonstrates downstream applications such as text-guided linear image editing and structure-conditioned generation via ControlNet, giving professionals far more control over AI-generated visuals.
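To make the bracket representation concrete, here is a minimal sketch of the classic relationship between a scene-referred linear image and a stack of display-referred brackets. It is not the paper's implementation; the function names, EV spacing, gamma curve, and weighting scheme are illustrative assumptions.

```python
import numpy as np

def to_brackets(linear_img, evs=(-4, -2, 0, 2, 4)):
    """Decompose a scene-referred linear image (float array, highlights
    may exceed 1.0) into display-referred exposure brackets, one per
    exposure value (EV). Each bracket captures a different range slice."""
    brackets = []
    for ev in evs:
        exposed = linear_img * (2.0 ** ev)       # simulate an exposure shift
        clipped = np.clip(exposed, 0.0, 1.0)     # sensor/display saturation
        brackets.append(clipped ** (1.0 / 2.2))  # simple gamma encoding
    return brackets

def merge_brackets(brackets, evs=(-4, -2, 0, 2, 4)):
    """Recover a linear estimate by inverting the gamma and exposure
    shift per bracket, then averaging with weights that trust mid-tones
    and downweight clipped pixels."""
    num = np.zeros_like(brackets[0])
    den = np.zeros_like(brackets[0])
    for b, ev in zip(brackets, evs):
        lin = (b ** 2.2) / (2.0 ** ev)           # undo encoding + exposure
        w = np.exp(-4.0 * (b - 0.5) ** 2)        # hat weight, peaks at gray
        num += w * lin
        den += w
    return num / np.maximum(den, 1e-8)
```

A round trip `merge_brackets(to_brackets(img))` approximately recovers the linear image wherever at least one bracket is well exposed, which is why a bracket sequence can stand in for the full dynamic range that a single display-referred image throws away.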

Key Points
  • Generates RAW-like linear images (scene-referred) with full dynamic range from text prompts
  • Uses a DiT-based flow-matching architecture to synthesize exposure brackets that preserve extreme highlights and shadows (see the training sketch after this list)
  • Accepted at CVPR 2026; enables professional post-processing and editing
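For readers who want the mechanics, the sketch below shows the generic rectified-flow objective that DiT-based flow-matching models are commonly trained with. The exact loss, conditioning scheme, and `model` interface here are assumptions for illustration, not details from the paper.

```python
import torch

def flow_matching_loss(model, x1, cond):
    """Generic rectified-flow / flow-matching training objective
    (illustrative, not the paper's exact formulation): regress the
    constant velocity x1 - x0 along a straight noise-to-data path."""
    x0 = torch.randn_like(x1)                      # Gaussian prior sample
    t = torch.rand(x1.shape[0], device=x1.device)  # timestep in [0, 1]
    t_b = t.view(-1, *([1] * (x1.dim() - 1)))      # broadcast over image dims
    xt = (1.0 - t_b) * x0 + t_b * x1               # point on the straight path
    v_pred = model(xt, t, cond)                    # DiT predicts a velocity
    return ((v_pred - (x1 - x0)) ** 2).mean()      # velocity regression loss
```

At sampling time, the learned velocity field is integrated from pure noise to data with an ODE solver (for instance, a few dozen Euler steps), conditioned on the text prompt.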

Why It Matters

Unlocks AI-generated images with the full dynamic range and editing flexibility of RAW captures, making them viable for professional post-production.