EditAnything IC-LoRA for LTX-2.3
An experimental AI model trained on 8,000 video pairs lets users edit footage using simple text commands.
A new experimental AI tool called 'EditAnything IC-LoRA' for the LTX-2.3 video model is generating buzz for its ability to manipulate video content through simple text prompts. Developed by Alissonerdx and shared on Hugging Face and CivitAI, this LoRA (Low-Rank Adaptation) is a fine-tuned add-on that teaches the base model to perform specific editing tasks. It was trained on approximately 8,000 video-text pairs and is structured around four core command patterns: adding objects with specific attributes, removing subjects, replacing elements, and converting the entire video's style (e.g., to anime).
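The four command patterns can be illustrated with a small, hypothetical prompt-template helper. The templates and function below are this sketch's own inventions; only the four operations and the example phrasings come from the model page, and actual prompts are free-form text passed to the pipeline:

```python
# Hypothetical helper illustrating the four command patterns the LoRA
# was trained on. Template strings and function names are illustrative
# only; they are not part of the model's actual interface.

EDIT_PATTERNS = {
    "add": "Add {subject} to {target}",
    "remove": "Remove {subject}",
    "replace": "Replace {subject} with {replacement}",
    "style": "Convert the video to {style} style",
}

def build_edit_prompt(operation: str, **fields: str) -> str:
    """Fill one of the four command templates with user-supplied fields."""
    try:
        template = EDIT_PATTERNS[operation]
    except KeyError:
        raise ValueError(f"unknown operation: {operation!r}")
    return template.format(**fields)

# Example prompts in the spirit of those shown on the model page:
print(build_edit_prompt("add", subject="a wide, genuine smile",
                        target="the person's face"))
print(build_edit_prompt("remove", subject="the blue car in the background"))
print(build_edit_prompt("style", style="anime"))
```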
Currently in an active, experimental training phase, the model is not yet optimized for professional production. Its performance hinges on tuning key inference parameters, notably the CFG (Classifier-Free Guidance) scale, with a starting recommendation of CFG=1. Users are encouraged to experiment with combinations of CFG, LoRA strength, and step count to achieve desired edits like 'Add a wide, genuine smile to the person's face' or 'Remove the blue car in the background.' The developer is actively soliciting community feedback on prompt adherence, edit strength, and consistency to guide further training iterations.
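Because the developer suggests experimenting with CFG, LoRA strength, and step count together, a simple grid sweep can organize those trials. The sketch below is a generic parameter-sweep helper under assumed ranges; only the CFG=1 starting point comes from the model card, and the other values are placeholders:

```python
from itertools import product

# Illustrative sweep over the three knobs the model card suggests tuning.
# Only CFG = 1 is an official starting recommendation; the LoRA-strength
# and step-count ranges here are assumed placeholder values.
cfg_scales = [1.0, 2.0, 3.0]       # start at the recommended CFG = 1
lora_strengths = [0.6, 0.8, 1.0]   # assumed LoRA weight range
step_counts = [20, 30]             # assumed sampling step counts

def sweep_configs():
    """Yield every (cfg, lora_strength, steps) combination to try."""
    for cfg, strength, steps in product(cfg_scales, lora_strengths, step_counts):
        yield {"cfg": cfg, "lora_strength": strength, "steps": steps}

configs = list(sweep_configs())
print(len(configs))  # 3 * 3 * 2 = 18 combinations
print(configs[0])    # {'cfg': 1.0, 'lora_strength': 0.6, 'steps': 20}
```

Each resulting dictionary would then be fed to whatever inference frontend hosts the LoRA (e.g. a ComfyUI workflow), with outputs compared side by side for prompt adherence and consistency.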
- Trained on ~8,000 video pairs to follow 'Add', 'Remove', 'Replace', and 'Convert/Style' text commands.
- Performance depends on tuning the CFG (Classifier-Free Guidance) scale, with a recommended starting point of 1.
- Currently experimental and shared on Hugging Face/CivitAI to gather user feedback for ongoing training.
Why It Matters
It demonstrates a significant step towards intuitive, prompt-based video editing, potentially automating complex post-production tasks.