b8991
The popular LLM inference engine patches a potential crash in its Hugging Face cache path lookup.
Deep Dive
llama.cpp released version b8991, which fixes a missing null check on the result of getpwuid() when resolving the Hugging Face cache path. The release provides prebuilt binaries for macOS, Linux, Android, Windows, and openEuler across various architectures and backends.
Key Points
- Fixed a null pointer dereference on the result of getpwuid() when resolving the Hugging Face cache path.
- The fix addresses issue #22550 and was contributed by Hugging Face engineer Adrien Gallouët.
- Prebuilt binaries available for macOS, Linux, Windows, iOS, Android, and openEuler.
Why It Matters
The fix hardens a widely used open-source LLM inference tool against crashes in minimal environments, which matters for local AI deployment.