Image & Video

My LTX2 Night of the Living Dead Submission

Creator uses LTX2, ZImage, and custom-trained LoRAs to generate a unique AI film submission.

Deep Dive

A viral Reddit post showcases a creator's submission to an LTX2 AI video generation contest, themed around "Night of the Living Dead." The user, u/jordek, humorously notes their entry is "the most boring one," but the technical breakdown reveals a sophisticated pipeline. The project involved training two custom LoRAs (Low-Rank Adaptations)—small adapter files that modify a base AI model's output—for a fictional character and a cat modeled after the creator's own recently deceased pet. The base visual generation used the ZImage model, with LTX2 LoRAs applied for stylistic control, while assets like a radio were created with Nano Banana. The final composition was assembled using ComfyUI (a node-based interface for diffusion-model workflows) and DaVinci Resolve for editing, highlighting the emerging multi-model, multi-software workflow for AI filmmaking.
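The low-rank adapter idea behind the creator's custom LoRAs can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical shapes and names, not the actual ZImage or LTX2 weights: instead of fine-tuning a full weight matrix, LoRA trains two small factors whose product is added to the frozen weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions; real diffusion-model layers are far larger.
d_out, d_in, rank = 64, 64, 4       # rank << d_in keeps the adapter tiny
W = rng.standard_normal((d_out, d_in))  # frozen base-model weight
A = rng.standard_normal((rank, d_in))   # trainable down-projection
B = np.zeros((d_out, rank))             # trainable up-projection (init zero)
alpha = 8.0                             # scaling hyperparameter

# Merged weight at inference time: W' = W + (alpha / rank) * B @ A
W_adapted = W + (alpha / rank) * (B @ A)

# Because B starts at zero, the adapter is initially a no-op:
# the base model's behavior is unchanged until training updates A and B.
assert np.allclose(W_adapted, W)

# The adapter file only needs A and B, a small fraction of the full weight.
full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 4*64 + 64*4 = 512
```

This is why a trained LoRA is a compact file that can be swapped in and out of a base model, and why the creator could train separate adapters for the character and the cat and combine them with stylistic LoRAs in the same pipeline.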

The submission underscores the technical frontier of personalized AI video generation, where creators are no longer just prompting but actively training custom model components. Using LoRAs allows for consistent character generation—a major hurdle in AI video—though the creator noted consistency still varied. The creator's candid mention of a failed attempt to generate a "hammering guy" likewise reflects current limitations in precise motion and action control. This project is a tangible example of how open-source tools like ComfyUI and LTX2 are enabling a new tier of creator-driven content, moving beyond generic outputs to bespoke, emotionally resonant stories. It signals a shift in the creator's role toward that of a technical director, orchestrating multiple specialized AI tools to achieve a specific artistic vision.

Key Points
  • Creator trained two custom LoRAs for a main character and a pet cat using the ZImage base model.
  • Workflow integrated multiple AI tools: ComfyUI for generation, DaVinci Resolve for editing, and Nano Banana for assets.
  • Project highlights both the potential for personalized AI video and current challenges in maintaining character consistency.

Why It Matters

Demonstrates the advanced, multi-tool pipelines creators are using to move AI video from generic outputs to personalized storytelling.