ACE-Step 1.5 XL Base — BF16 version (converted from FP32)
The converted model cuts VRAM usage from ~18.8 GB to ~7.5 GB with no practical loss in output quality.
Independent developer Marcorez8 has released a BF16-precision version of the ACE-Step 1.5 XL Base model, a significant technical optimization for the AI community. The original model, stored in FP32 (32-bit floating point) format, occupies approximately 18.8 GB of memory. By converting the weights to BF16 (Brain Floating Point 16), Marcorez8 has cut the model's footprint to roughly 7.5 GB, a 60% reduction in VRAM usage. Because BF16 keeps the same 8-bit exponent as FP32, it preserves the dynamic range of the original weights, and the conversion is lossless for practical purposes: the model retains the same output quality and capabilities as the original, making it effectively a drop-in replacement for deployment.
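For readers curious what such a conversion involves, here is a minimal sketch of an FP32-to-BF16 cast over a safetensors checkpoint. The filenames are placeholders rather than ACE-Step's actual file layout, and Marcorez8's exact script may differ:

```python
import torch
from safetensors.torch import load_file, save_file

# Load the original FP32 checkpoint (placeholder filename).
state_dict = load_file("ace_step_1.5_xl_base_fp32.safetensors")

# Cast every floating-point tensor to bfloat16; non-float tensors
# (e.g. integer buffers) are left untouched. BF16 keeps FP32's 8-bit
# exponent, so the dynamic range of the weights is preserved.
converted = {
    name: t.to(torch.bfloat16) if t.is_floating_point() else t
    for name, t in state_dict.items()
}

save_file(converted, "ace_step_1.5_xl_base_bf16.safetensors")
```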
This optimization directly lowers the hardware barrier to using and experimenting with state-of-the-art models. The ACE-Step 1.5 XL Base is designed specifically as a foundation model for fine-tuning, for example by applying LoRA (Low-Rank Adaptation) to train custom artistic styles. The reduced memory footprint means creators and researchers can now run this model on more affordable GPUs with less VRAM, democratizing access. Marcorez8 has also converted the XL Turbo variant and recommends tools like Side Step for the fine-tuning process. Both converted models are hosted on Hugging Face, slotting directly into existing AI development workflows.
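To illustrate the LoRA idea, the following is a bare-bones PyTorch sketch of a frozen base layer augmented with a small trainable low-rank update. The layer shapes and hyperparameters here are hypothetical; in practice a tool such as Side Step wires adapters like this into the actual model for you:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base.requires_grad_(False)   # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction; only
        # lora_a and lora_b receive gradients during fine-tuning.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Hypothetical usage: wrap one projection layer and run it in BF16.
layer = LoRALinear(nn.Linear(1024, 1024)).to(torch.bfloat16)
out = layer(torch.randn(2, 1024, dtype=torch.bfloat16))
```

Because only the small rank-r adapter matrices are trained, gradients and optimizer state stay tiny, which is what makes fine-tuning feasible on the smaller GPUs the BF16 checkpoint now fits on.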
- Model size reduced by 60%, from ~18.8 GB (FP32) to ~7.5 GB (BF16), with no practical quality loss.
- Enables fine-tuning of the ACE-Step 1.5 XL Base model on GPUs with significantly less VRAM.
- Hosted on Hugging Face for easy community access and integration into existing pipelines (a download sketch follows this list).
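A hypothetical download sketch using the huggingface_hub client; the repo id below is a placeholder, not the converted model's actual repository name:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- check Marcorez8's Hugging Face profile for the
# real one. snapshot_download caches the files locally and returns the path.
local_dir = snapshot_download("Marcorez8/ACE-Step-1.5-XL-Base-bf16")
print(local_dir)
```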
Why It Matters
Dramatically lowers the cost and hardware requirements for artists and developers to create custom AI models.