Research & Papers

DreamCAD: Scaling Multi-modal CAD Generation using Differentiable Parametric Surfaces

New AI framework creates professional CAD files from text prompts, images, or point clouds, no manual modeling required.

Deep Dive

A research team led by Mohammad Sadil Khan has introduced DreamCAD, a breakthrough framework that generates editable, professional-grade CAD models directly from text descriptions, images, or 3D point clouds. Unlike previous methods, which required expensive, manually annotated CAD datasets with explicit design histories, DreamCAD learns from millions of unannotated 3D meshes by representing each Boundary Representation (BRep) as a set of parametric patches, such as Bézier surfaces. A differentiable tessellation step converts these patches into meshes during training, enabling large-scale learning while preserving the ability to reconstruct connected, editable surfaces.
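The patch-to-mesh idea can be sketched concretely. The snippet below is an illustrative sketch only, not the paper's implementation: the function names and the sampling resolution are invented for this example. It evaluates a bicubic Bézier patch on a regular (u, v) grid and triangulates the samples into a mesh. Because every vertex is a polynomial, hence differentiable, function of the control points, a loss computed on the mesh can backpropagate to the patch parameters, which is the key property a differentiable tessellation needs.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def tessellate_bezier_patch(ctrl, res=8):
    """Sample a bicubic Bezier patch into a triangle mesh.

    ctrl: (4, 4, 3) array of control points.
    res:  number of samples along each parameter direction.
    Returns (vertices, faces) as numpy arrays.
    """
    u = np.linspace(0.0, 1.0, res)
    v = np.linspace(0.0, 1.0, res)
    # Bernstein basis matrices, shape (res, 4)
    Bu = np.array([[bernstein(3, i, t) for i in range(4)] for t in u])
    Bv = np.array([[bernstein(3, j, t) for j in range(4)] for t in v])
    # Tensor-product evaluation: S(u, v) = sum_ij B_i(u) B_j(v) P_ij
    verts = np.einsum('ui,vj,ijk->uvk', Bu, Bv, ctrl).reshape(-1, 3)
    # Triangulate the regular sample grid (two triangles per cell)
    faces = []
    for r in range(res - 1):
        for c in range(res - 1):
            a = r * res + c
            faces.append([a, a + 1, a + res])
            faces.append([a + 1, a + res + 1, a + res])
    return verts, np.array(faces)
```

For a flat 4x4 control grid at resolution 8 this yields 64 vertices and 98 triangles; in training, the same evaluation would run over many patches per shape, with gradients from mesh-based losses flowing back to the control points.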

The team also created CADCap-1M, the largest CAD captioning dataset to date, with over 1 million descriptions generated using GPT-5, built specifically to advance text-to-CAD research. The dataset addresses a critical bottleneck in the field by providing high-quality training data at scale. In benchmarks on the ABC and Objaverse datasets, DreamCAD achieved state-of-the-art performance across all three input modalities (text, image, and point cloud), demonstrating superior geometric fidelity and earning over 75% user preference in evaluations. The framework's ability to produce directly editable CAD files from natural-language prompts represents a significant leap toward automating professional design workflows.

The implications for engineering and product design are substantial, as DreamCAD bridges the gap between conceptual description and technical implementation. By eliminating the need for CAD-specific annotations and leveraging existing 3D data, the system opens up new possibilities for rapid prototyping, design exploration, and automated manufacturing preparation. The researchers plan to make both the code and the massive CADCap-1M dataset publicly available, potentially accelerating further innovation in AI-assisted design tools.

Key Points
  • Generates editable CAD files (BReps) from text, images, or point clouds without CAD-specific training data
  • Trained on CADCap-1M, a new 1M+ caption dataset created using GPT-5 for text-to-CAD research
  • Achieves over 75% user preference and state-of-the-art performance on ABC and Objaverse benchmarks

Why It Matters

Automates the transition from concept to technical CAD model, potentially revolutionizing product design and engineering workflows.