The Appeal and Reality of Recycling LoRAs with Adaptive Merging
A large-scale study challenges a core assumption about fine-tuning AI models.
Deep Dive
A new paper analyzing the 'recycling' of nearly 1,000 user-contributed LoRA modules for Llama 3.1 8B-Instruct reaches a surprising conclusion: adaptive merging methods offer little benefit over simply training a new LoRA on the target task. Crucially, which LoRAs are chosen barely matters, since merging LoRAs with randomly initialized weights performed comparably. This suggests that past successes may stem from regularization rather than positive knowledge transfer between tasks.
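To make the setup concrete, here is a minimal sketch of LoRA merging with NumPy. Each LoRA contributes a low-rank delta B·A to a base weight matrix, and a merge combines these deltas with per-module coefficients (which adaptive methods learn). The dimensions, coefficients, and the random-initialization baseline shown here are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 64, 8, 4  # hidden dim, LoRA rank, number of LoRAs (illustrative sizes)

def merge_loras(base_weight, loras, alphas):
    """Merge LoRA deltas into a base weight: W' = W + sum_i alpha_i * (B_i @ A_i)."""
    delta = sum(a * (B @ A) for a, (B, A) in zip(alphas, loras))
    return base_weight + delta

W = rng.normal(size=(d, d))  # stand-in for a pretrained weight matrix

# Stand-ins for "trained" user-contributed LoRA factors.
trained_loras = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
                 for _ in range(k)]

# The random baseline from the study: identical merge procedure, but the
# low-rank factors are freshly initialized rather than trained on any task.
random_loras = [(0.01 * rng.normal(size=(d, r)), 0.01 * rng.normal(size=(r, d)))
                for _ in range(k)]

alphas = np.full(k, 1.0 / k)  # uniform coefficients; adaptive methods tune these
merged_trained = merge_loras(W, trained_loras, alphas)
merged_random = merge_loras(W, random_loras, alphas)
```

The study's finding is that, downstream, a model built like `merged_random` behaves much like `merged_trained`, pointing to the merge acting as a regularizer rather than transferring task knowledge.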
Why It Matters
If curated LoRA libraries offer no real advantage, practitioners can skip curation entirely and simply train a fresh LoRA, simplifying fine-tuning pipelines and reducing their cost.