Image & Video

Can we replicate 2023 DALL·E 3 yet?

A creator with an archive of 25K surreal images asks whether newer models or LoRAs can replicate DALL-E 3's unique experimental aesthetic.

Deep Dive

A viral discussion on Reddit has reignited debate over the unique creative capabilities of older AI image models. User Master-Client6682 posted a collection of surreal, bizarre, and experimental 35mm-style photographs generated by DALL-E 3 in 2023, asking the community if any current models—such as Midjourney v6, Stable Diffusion 3, or OpenAI's own DALL-E 3 successor—can replicate its distinctive 'weird' aesthetic. The user possesses an archive of 25,000 such images and is exploring whether training a custom LoRA (Low-Rank Adaptation) on a modern open-source model could recapture that specific artistic voice.

The core technical question revolves around fine-tuning. The user asks if a LoRA, a parameter-efficient fine-tuning method, could be applied to a base model like SDXL or the newer Flux model to mimic DALL-E 3's 2023 output style. This highlights a perceived trade-off: as major commercial models from OpenAI, Anthropic, and others have incorporated stronger safety filters and alignment, some users feel the raw, experimental, and 'discordant' creativity of earlier versions has been diminished. The community challenge—'show me please'—is a direct call for evidence that current open-source or commercial tools can match this lost creative niche, pushing the boundaries of model customization and personalization.
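The appeal of the LoRA approach is its parameter efficiency: instead of retraining a full weight matrix, training learns two small matrices whose product forms a low-rank update to the frozen base weights. A minimal NumPy sketch of the idea (the dimensions, rank, and scaling below are illustrative assumptions, not the actual SDXL or Flux layer shapes or the diffusers/PEFT API):

```python
import numpy as np

# LoRA: adapt a frozen weight matrix W (d_out x d_in) with a
# low-rank update B @ A, where rank r << min(d_out, d_in).
d_out, d_in, r = 768, 768, 8   # illustrative sizes, not real model dims
alpha = 16                     # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection (init 0)

# Effective weight used at inference time:
W_adapted = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted model is initially identical
# to the base model; only A and B are updated during training.
full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # fraction of parameters actually trained
```

For a 768x768 layer at rank 8, the trainable update is roughly 2% of the full matrix, which is why style-only adaptations like the one proposed can be trained on a single consumer GPU.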

Key Points
  • User seeks to replicate 2023 DALL-E 3's 'surreal' and 'bizarre' aesthetic using a 25K-image dataset.
  • Technical proposal involves training a LoRA on modern base models like SDXL or Flux for style transfer.
  • Highlights a perceived creativity vs. safety trade-off in newer AI image models from OpenAI and others.

Why It Matters

For professionals, it underscores the challenge of preserving unique AI artistic styles amid evolving model safety and alignment protocols.