Open Source

Heretic 1.2 released: 70% lower VRAM usage with quantization, Magnitude-Preserving Orthogonal Ablation ("derestriction"), broad VL model support, session resumption, and more

The leading AI censorship-removal tool just got a massive, resource-friendly upgrade.

Deep Dive

Heretic 1.2, the leading software for removing censorship from language models, has been released with a major new LoRA-based engine. The engine supports 4-bit quantization, cutting VRAM requirements by up to 70%, while still exporting models in full precision. The update also introduces the highly requested "Magnitude-Preserving Orthogonal Ablation" technique for higher-quality outputs, broad vision-language model support, and automatic session saving and resumption to prevent data loss from crashes.
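The article doesn't spell out how Magnitude-Preserving Orthogonal Ablation works internally. One plausible reading, sketched below under assumptions (the function name, the use of a single "refusal" direction, and the Frobenius-norm rescaling are all illustrative, not Heretic's actual implementation), is that a refusal direction is projected out of a weight matrix's output space (standard orthogonal ablation) and the result is then rescaled so the overall weight magnitude matches the original:

```python
import numpy as np

def mp_orthogonal_ablation(W, r):
    """Illustrative sketch (not Heretic's code): remove the component of
    W's outputs along the refusal direction r, then rescale so the
    overall weight magnitude is preserved."""
    r = r / np.linalg.norm(r)               # unit refusal direction
    W_abl = W - np.outer(r, r) @ W          # (I - r r^T) W: orthogonal ablation
    # Magnitude preservation (assumed form): restore the Frobenius norm
    scale = np.linalg.norm(W) / np.linalg.norm(W_abl)
    return W_abl * scale
```

Because the rescaling is a single scalar, the ablated matrix still produces outputs with zero component along `r`, while its total magnitude matches the original weights; a plain projection alone would shrink the weights slightly.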

Why It Matters

This dramatically lowers the hardware barrier for developers and researchers to create and experiment with uncensored, open-source AI models.