Image & Video

I trained a model on childhood photos to simulate memory recall

A personal AI experiment fine-tuned on family photos produces visuals that evoke layered, half-remembered emotions.

Deep Dive

A developer has conducted a deeply personal AI experiment by fine-tuning the Stable Diffusion XL (SDXL) model on roughly 60 scanned photographs from their childhood family album. The goal was to create a system capable of simulating memory recall, generating visuals that bridge past and present. The resulting Low-Rank Adaptation (LoRA), a small, efficient add-on trained on top of the frozen base model, produced images that the creator describes as evoking "layered emotions and fragments of distant, half-recalled memories," suggesting the model captured more than just visual patterns.
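For readers unfamiliar with the technique, a LoRA adapts a frozen weight matrix by adding a low-rank update learned from a small dataset, which is why ~60 photos can meaningfully steer a model as large as SDXL. A minimal NumPy sketch of the idea (dimensions and names are illustrative, not taken from the project):

```python
import numpy as np

# Low-Rank Adaptation (LoRA) in miniature: instead of updating a full
# weight matrix W (d_out x d_in), train two small factors A (r x d_in)
# and B (d_out x r) so the layer computes W @ x + (alpha / r) * B @ A @ x.
# All dimensions here are illustrative.

d_out, d_in, rank, alpha = 1024, 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection; zero
                                              # at init so the adapter starts
                                              # as a no-op

def adapted_forward(x):
    # Base path plus the scaled low-rank update.
    return W @ x + (alpha / rank) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(f"trainable LoRA params: {lora_params} vs full matrix: {full_params} "
      f"({lora_params / full_params:.1%})")
```

With rank 8 the adapter trains about 1.6% of the parameters of the full matrix, which is what makes shipping a custom LoRA as a small file alongside the unchanged SDXL checkpoint practical.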

The project was showcased through two technical demonstrations. The first integrated the custom LoRA with Archaia, an audio-reactive geometry system the developer built in TouchDesigner, creating a synesthetic experience. The second showed the LoRA running in real time via StreamDiffusion, an open-source framework for high-speed image generation. Together they highlight the accessibility of such personal AI projects, which leverage existing open-source tools for custom, emotionally resonant applications beyond standard text-to-image generation.

Key Points
  • Fine-tuned SDXL on ~60 personal childhood photos to create a custom memory-recall LoRA
  • Demonstrated integration with audio-visual system Archaia and real-time execution via StreamDiffusion
  • Resulting visuals described as evoking layered emotions and fragments of half-remembered memories

Why It Matters

Shows how personal data and open-source AI can create deeply subjective tools for introspection and artistic expression.