Developer Tools

b8977

New release supports ROCm, Vulkan, SYCL, and more platforms...

Deep Dive

The llama.cpp project released b8977. Prebuilt binaries cover:
  • macOS: Apple Silicon (arm64), Apple Silicon with KleidiAI enabled, Intel (x64)
  • iOS: XCFramework
  • Linux (Ubuntu): x64, arm64, and s390x (CPU); x64 and arm64 (Vulkan); x64 (ROCm 7.2); x64 (OpenVINO); x64 (SYCL FP32 and FP16)
  • Android: arm64 (CPU)
  • Windows: x64 and arm64 (CPU); x64 (CUDA 12 and CUDA 13); x64 (Vulkan); x64 (SYCL); x64 (HIP)
  • openEuler: x86 and aarch64 for 310p, and for 910b with ACL Graph

Key Points
  • Adds AMD ROCm 7.2 support on Ubuntu x64 for Radeon GPUs
  • New Intel SYCL builds for FP32 and FP16 on Ubuntu and Windows
  • CUDA 12 and 13 DLLs included for Windows GPU acceleration
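With this many build variants, the first practical step is matching your host to the right one. The shell sketch below maps an OS/architecture pair to one of the CPU variants named above; the `variant_for` helper and its output strings are illustrative assumptions for this note, not verified release asset filenames.

```shell
# Hedged sketch: map a host OS/arch pair to a plausible prebuilt-variant name.
# The variant strings are assumptions based on the build list above, not
# verified release asset names.
variant_for() {
  case "$1-$2" in
    Darwin-arm64)  echo "macos-arm64" ;;
    Linux-x86_64)  echo "ubuntu-x64" ;;
    Linux-aarch64) echo "ubuntu-arm64" ;;
    Linux-s390x)   echo "ubuntu-s390x" ;;
    *)             echo "unknown" ;;
  esac
}

# Example: pick the variant for the current machine.
variant_for "$(uname -s)" "$(uname -m)"
```

GPU-accelerated variants (CUDA, Vulkan, ROCm, SYCL) would need an extra check against the installed driver stack, which is why the sketch stops at CPU builds.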

Why It Matters

Enables developers to run local LLMs across diverse hardware, from consumer laptops to enterprise clusters.