Llama.cpp

Software library for LLM inference

llama.cpp is an open-source software library that performs inference on various large language models such as Llama.[3] It is co-developed alongside the GGML project, a general-purpose tensor library.[4]

llama.cpp
Original author: Georgi Gerganov
Developers: Georgi Gerganov and community
Initial release: March 10, 2023 (2023-03-10)[1]
Written in: C++, C
Type: Library for large language models
License: MIT License[2]
Repository: github.com/ggml-org/llama.cpp

Command-line tools are included with the library,[5] alongside a server with a simple web interface.[6][7]

Background

Towards the end of September 2022, Georgi Gerganov started work on the GGML library, a C library implementing tensor algebra. Gerganov developed the library with an emphasis on strict memory management and multi-threading. The creation of GGML was inspired by Fabrice Bellard's work on LibNC.[8]

Before llama.cpp, Gerganov worked on a similar library called whisper.cpp, which implemented Whisper, a speech-to-text model by OpenAI.[9]

Development

Georgi Gerganov began developing llama.cpp in March 2023 as an implementation of the Llama inference code in pure C/C++ with no dependencies. This improved performance on computers without a GPU or other dedicated hardware, which was a goal of the project.[3][10][11] llama.cpp gained traction among users who lacked specialized hardware, as it could run on just a CPU.

Although llama.cpp was initially designed for CPUs, support for GPU and NPU backends was later added.[12] As of August 2025, the project has more than 85,000 stars on GitHub.[13]

On April 30, 2024, support for FlashAttention was introduced.

On April 10, 2025, libmtmd was introduced, reinvigorating support for multimodal models, which had previously stagnated.

On December 17, 2025, full acceleration on Android and ChromeOS devices was introduced via a new GUI binding,[14] which unlocks native app development beyond the previous approach of cross-compiling CLI tools[10][15][16] and running them in an adb shell.

Architecture

llama.cpp supports multiple hardware targets, including x86, ARM, Metal, BLAS, BLIS, zDNN, ZenDNN, SYCL, MUSA, CUDA, HIP, CANN, OpenCL, RPC and Vulkan (version 1.2 or greater).[17][18][19][20] These backends make up the GGML tensor library, which is used by the front-end, model-specific llama.cpp code.[21] llama.cpp also makes use of several CPU extensions for optimization, such as AVX, AVX2 and AVX-512 on x86, and NEON on ARM.
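
As a minimal sketch of how GGML is used as a tensor library, the following builds and evaluates a tiny compute graph on the CPU. GGML's API evolves quickly, so the function names here reflect one common version of the library and may differ from the current headers.

// Minimal GGML usage sketch: build and run a tiny compute graph on the
// CPU. GGML's API changes frequently, so names here reflect one common
// version of the library and may differ from the current headers.
#include "ggml.h"
#include <cstdio>

int main() {
    // All tensors live in one arena-style context (strict, up-front
    // memory management, as described above).
    struct ggml_init_params params = { 16 * 1024 * 1024, nullptr, false };
    struct ggml_context* ctx = ggml_init(params);

    // Define the graph symbolically: c = a + b over 4-element vectors.
    struct ggml_tensor* a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor* b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor* c = ggml_add(ctx, a, b);

    struct ggml_cgraph* gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // Fill the inputs, then evaluate the graph with one thread.
    ggml_set_f32(a, 1.0f);
    ggml_set_f32(b, 2.0f);
    ggml_graph_compute_with_ctx(ctx, gf, 1);

    printf("c[0] = %f\n", ggml_get_f32_1d(c, 0));  // prints 3.0
    ggml_free(ctx);
    return 0;
}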

llama.cpp supports a variety of features aimed at inference on edge devices, such as weight quantization (described below) and memory-mapped model loading.

In addition, llama.cpp supports a variety of features and APIs for frontend communication, such as:

  • OpenAI-compatible endpoints such as /v1/chat/completions (see the sketch after this list).
  • Grammar-based output formatting, such as constraining output to valid JSON.[11]
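
As an illustration, the following sketch sends a chat completion request to a locally running llama.cpp server. It assumes a llama-server instance listening on its default port 8080 and uses libcurl, which is not part of llama.cpp; error handling is kept minimal.

// Minimal sketch: POST a chat completion request to a local llama-server.
// Assumes llama-server is running on localhost:8080 (its default port) and
// that libcurl is available.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append the response body to a std::string.
static size_t on_body(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    const std::string payload = R"({
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user",   "content": "Explain what GGUF is in one sentence."}
        ]
    })";

    std::string response;
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK) {
        std::cout << response << std::endl;  // raw JSON, OpenAI-style schema
    }

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}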

GGUF file format

GGUF
Filename extension: .gguf
Magic number: 0x47 0x47 0x55 0x46
Developed by: Georgi Gerganov and community
Initial release: August 22, 2023 (2023-08-22)[24]
Latest release: v3[25]
Type of format: Machine-learning tensors

The GGUF (GGML Universal File)[26] file format is a binary format that stores both tensors and metadata in a single file, and is designed for fast saving and loading of model data.[27] It was introduced in August 2023 by the llama.cpp project to better maintain backwards compatibility as support was added for other model architectures.[12][28] It superseded previous formats used by the project, such as GGML.

GGUF files are typically created by converting models developed with a different machine learning library such as PyTorch.[27]

Design

GGUF focuses on quantization, i.e. reducing the precision of the model weights. This can reduce memory usage and increase speed, at the cost of some model accuracy.[29][28]

GGUF supports 2-bit to 8-bit quantized integer types,[30] common floating-point data formats such as float32, float16, and bfloat16, as well as 1.58-bit quantization.[5]
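
To illustrate the idea, the following is a simplified sketch modeled loosely on GGML's Q8_0 scheme, which stores blocks of 32 weights as one scale factor plus 32 signed 8-bit integers. The names and details here are illustrative, not llama.cpp's actual API.

// Simplified block quantization sketch, loosely modeled on GGML's Q8_0
// layout (blocks of 32 weights, each block holding one scale and 32
// int8 values). Names here are illustrative, not llama.cpp's API.
#include <algorithm>
#include <cmath>
#include <cstdint>

constexpr int kBlockSize = 32;

struct BlockQ8 {
    float  scale;          // per-block scale factor
    int8_t q[kBlockSize];  // quantized weights
};

// Quantize one block of 32 floats: pick the scale so the largest
// magnitude maps to 127, then round each weight to the nearest int8.
BlockQ8 quantize_block(const float* w) {
    BlockQ8 out{};
    float amax = 0.0f;
    for (int i = 0; i < kBlockSize; ++i)
        amax = std::max(amax, std::fabs(w[i]));
    out.scale = amax / 127.0f;
    const float inv = out.scale != 0.0f ? 1.0f / out.scale : 0.0f;
    for (int i = 0; i < kBlockSize; ++i) {
        const int v = static_cast<int>(std::lround(w[i] * inv));
        out.q[i] = static_cast<int8_t>(std::clamp(v, -127, 127));
    }
    return out;
}

// Dequantize: multiply each int8 back by the block's scale. The result
// only approximates the original weights, which is the accuracy cost
// mentioned above.
void dequantize_block(const BlockQ8& b, float* w) {
    for (int i = 0; i < kBlockSize; ++i)
        w[i] = b.q[i] * b.scale;
}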

GGUF contains information necessary for running a GPT-like language model such as the tokenizer vocabulary, context length, tensor info and other attributes.[31]

Byte-level structure (little-endian)

Bytes      Description[32]
4          GGUF magic number, currently set to 0x47 0x47 0x55 0x46
4          GGUF version, currently set to 3
8          UINT64 tensor_count: number of tensors
8          UINT64 metadata_kv_count: number of metadata key-value pairs
Variable   Metadata block, containing metadata_kv_count key-value pairs
Variable   Tensors info block, containing tensor_count values
Variable   uint8_t tensor_data[], weight bits block
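
A reader for the fixed-size header fields in this layout might look like the following standalone sketch, which assumes a little-endian host and does no validation beyond the magic number.

// Sketch of reading the fixed-size GGUF header fields described above.
// Assumes a little-endian host, so the integers can be read directly;
// a portable reader would byte-swap on big-endian machines.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: gguf_header <file.gguf>\n"; return 1; }
    std::ifstream f(argv[1], std::ios::binary);

    char magic[4];
    uint32_t version = 0;
    uint64_t tensor_count = 0, metadata_kv_count = 0;

    f.read(magic, 4);                              // "GGUF" in ASCII
    f.read(reinterpret_cast<char*>(&version), 4);  // currently 3
    f.read(reinterpret_cast<char*>(&tensor_count), 8);
    f.read(reinterpret_cast<char*>(&metadata_kv_count), 8);

    if (!f || std::string(magic, 4) != "GGUF") {
        std::cerr << "not a GGUF file\n";
        return 1;
    }
    std::cout << "version:           " << version << "\n"
              << "tensor_count:      " << tensor_count << "\n"
              << "metadata_kv_count: " << metadata_kv_count << "\n";
    return 0;
}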

Metadata block

// example metadata
general.architecture:  'llama',
general.name:          'LLaMA v2',
llama.context_length:  4096,
... ,
general.file_type:     10, // (typically indicates quantization level, here "MOSTLY_Q2_K")
tokenizer.ggml.model:  'llama',
tokenizer.ggml.tokens: [
   '<unk>', '<s>', '</s>', '<0x00>', '<0x01>', '<0x02>',
   '<0x03>', '<0x04>', '<0x05>', '<0x06>', '<0x07>', '<0x08>',
   ...
],
...
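
Each pair in the metadata block consists of a string key (a UINT64 byte length followed by that many UTF-8 bytes), a UINT32 value-type code, and the value itself. The following sketch walks such pairs; the type codes shown are an abridged subset of the GGUF specification, and only a few scalar types are handled.

// Sketch of walking the metadata key-value pairs that follow the header.
// Each pair is: a string key (UINT64 length + UTF-8 bytes), a UINT32
// value-type code, then the value. The type codes below are an abridged
// subset of the GGUF specification.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

enum GgufType : uint32_t {
    GGUF_UINT32  = 4,
    GGUF_FLOAT32 = 6,
    GGUF_BOOL    = 7,
    GGUF_STRING  = 8,
    // ... arrays and the remaining scalar types are omitted here
};

// Read a GGUF string: UINT64 byte length, then the raw UTF-8 bytes.
std::string read_gguf_string(std::ifstream& f) {
    uint64_t len = 0;
    f.read(reinterpret_cast<char*>(&len), 8);
    std::string s(len, '\0');
    f.read(s.data(), static_cast<std::streamsize>(len));
    return s;
}

// Read one key-value pair, printing the kinds this sketch understands.
void read_metadata_kv(std::ifstream& f) {
    const std::string key = read_gguf_string(f);
    uint32_t type = 0;
    f.read(reinterpret_cast<char*>(&type), 4);

    switch (type) {
        case GGUF_UINT32: { uint32_t v; f.read(reinterpret_cast<char*>(&v), 4);
                            std::cout << key << ": " << v << "\n"; break; }
        case GGUF_STRING: { std::cout << key << ": '" << read_gguf_string(f)
                                      << "'\n"; break; }
        default:
            std::cout << key << ": <unhandled type " << type << ">\n";
            // a real reader must still skip the value bytes here
            break;
    }
}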

Tensors info block

// n-th tensor
name:         GGUF string, // ex: "blk.0.ffn_gate.weight"
n_dimensions: UINT32,      // ex: 2
dimensions:   UINT64[],    // ex: [ 4096, 32000 ]
type:         UINT32,      // ex: 10 (typically indicates quantization level, here "GGML_TYPE_Q2_K")
offset:       UINT64       // starting position within the tensor_data block, relative to the start of the block
// (n+1)-th tensor
...
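
Mirroring this layout, a reader for one tensor-info entry could look like the following sketch, which reuses the read_gguf_string helper from the previous sketch.

// Sketch of reading one tensor-info entry, mirroring the fields above.
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

struct TensorInfo {
    std::string           name;        // e.g. "blk.0.ffn_gate.weight"
    std::vector<uint64_t> dimensions;  // e.g. { 4096, 32000 }
    uint32_t              type;        // quantization type, e.g. GGML_TYPE_Q2_K
    uint64_t              offset;      // position within the tensor_data block
};

std::string read_gguf_string(std::ifstream& f);  // defined in the previous sketch

TensorInfo read_tensor_info(std::ifstream& f) {
    TensorInfo t;
    t.name = read_gguf_string(f);

    uint32_t n_dimensions = 0;
    f.read(reinterpret_cast<char*>(&n_dimensions), 4);
    t.dimensions.resize(n_dimensions);
    f.read(reinterpret_cast<char*>(t.dimensions.data()), 8 * n_dimensions);

    f.read(reinterpret_cast<char*>(&t.type), 4);
    f.read(reinterpret_cast<char*>(&t.offset), 8);
    return t;
}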

References
