LLM Quantization (INT8/INT4/GPTQ/AWQ/GGUF) for Optimization


Quantization reduces the precision with which model weights are stored (from fp32 or bf16 down to INT8, INT4, INT3 and lower). This shrinks model size and speeds up inference with minimal quality loss. For LLMs, quantization is a key tool for deployment on limited hardware.
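
To make the savings concrete, here is a back-of-the-envelope footprint calculation for an 8B-parameter model (a rough sketch: the byte counts are nominal and ignore embeddings, quantization scales and the KV cache):

PARAMS = 8e9  # parameter count of an 8B model

# Nominal bytes per weight; real checkpoints also carry scales, zero-points and metadata
bytes_per_weight = {"fp32": 4.0, "fp16/bf16": 2.0, "INT8": 1.0, "INT4": 0.5}

for fmt, b in bytes_per_weight.items():
    print(f"{fmt:>10}: {PARAMS * b / 1e9:.0f} GB")

# fp32 ≈ 32 GB, fp16 ≈ 16 GB, INT8 ≈ 8 GB, INT4 ≈ 4 GB:
# this is why a 4-bit 8B model fits on a single consumer GPU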

Quantization formats: comparison table

| Format | Precision | Compression (vs fp16) | Quality loss | Application |
|---|---|---|---|---|
| fp16 | 16-bit float | 1× (baseline) | baseline | GPU inference |
| INT8 (bitsandbytes) | 8-bit int | ~2× | 0.5–1% | GPU, easy to apply |
| GPTQ INT4 | 4-bit group-quant | ~4× | 1–2% | GPU, production |
| AWQ INT4 | 4-bit activation-aware | ~4× | 0.5–1.5% | GPU, often better than GPTQ |
| GGUF Q4_K_M | 4-bit mixed | ~4× | 1–2% | CPU/GPU, llama.cpp |
| GGUF Q8_0 | 8-bit | ~2× | 0.3–0.5% | CPU/GPU, llama.cpp |
| GGUF Q2_K | 2-bit | ~6× | 5–10% | extreme cases only |
| EXL2 | 2–8 bit mixed | 2–8× | configurable | GPU, ExLlamaV2 |
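
Of the formats above, only bitsandbytes INT8 requires no calibration pass: it quantizes on the fly at load time. A minimal sketch (assumes the bitsandbytes package and a CUDA GPU are available):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# On-the-fly 8-bit quantization at load time: no calibration dataset needed
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)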

GPTQ: Post-Training Quantization with error correction

GPTQ quantizes the model layer by layer, minimizing reconstruction error on a small calibration dataset:

# Quantizing through transformers requires the optimum and auto-gptq packages
from transformers import AutoModelForCausalLM, GPTQConfig

gptq_config = GPTQConfig(
    bits=4,
    dataset="c4",           # Calibration dataset
    desc_act=True,          # Better for perplexity
    group_size=128,         # Quantization group size
    damp_percent=0.1,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=gptq_config,
    device_map="auto"
)

model.save_pretrained("./llama3-8b-gptq-int4")

Calibration takes 30–120 minutes depending on model size and whether it runs on CPU or GPU.
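
The saved checkpoint then loads like any other HuggingFace model; the quantization config travels with it, so no GPTQConfig is needed at load time (a sketch reusing the output path from the snippet above):

from transformers import AutoModelForCausalLM, AutoTokenizer

# The quantization config is stored inside the checkpoint
model = AutoModelForCausalLM.from_pretrained("./llama3-8b-gptq-int4", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

inputs = tokenizer("Quantization in one sentence:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))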

AWQ: Activation-Aware Weight Quantization

AWQ identifies "important" weights by activations and protects them from aggressive quantization:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model = AutoAWQForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM"  # or "GEMV" for small batches
}

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("./llama3-8b-awq")
tokenizer.save_pretrained("./llama3-8b-awq")  # keep the tokenizer next to the weights for inference engines

AWQ usually delivers better quality than GPTQ at the same bit width, especially on reasoning tasks.
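
Quality-loss numbers like those in the table are usually verified by comparing perplexity on held-out text. A naive sketch of such a check (eval_sample.txt is a placeholder; loading the AWQ checkpoint through transformers assumes autoawq is installed, and the text must fit in the context window):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_path: str, text: str) -> float:
    """Perplexity of `text` under the model at `model_path`; lower is better."""
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

sample = open("eval_sample.txt").read()  # placeholder: any held-out text
print("baseline:", perplexity("meta-llama/Meta-Llama-3.1-8B-Instruct", sample))
print("AWQ INT4:", perplexity("./llama3-8b-awq", sample))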

GGUF: universal format for llama.cpp

GGUF, the successor to the older GGML format, is the file format for deployment via llama.cpp, supporting CPU inference and partial GPU offloading:

# Convert a local HuggingFace checkpoint to GGUF
python convert_hf_to_gguf.py ./Meta-Llama-3.1-8B-Instruct \
  --outtype f16 \
  --outfile llama3-8b-f16.gguf

# Quantize to Q4_K_M (recommended balance); the binary is called
# `quantize` in older llama.cpp builds
./llama-quantize llama3-8b-f16.gguf llama3-8b-q4km.gguf Q4_K_M

GGUF quantization variants (from best quality to smallest size):

  • Q8_0: 8-bit, ~8.5GB for 8B model, excellent quality
  • Q6_K: 6-bit, ~6.1GB, high quality
  • Q5_K_M: 5-bit mixed, ~5.1GB, good quality
  • Q4_K_M: 4-bit mixed, ~4.1GB, recommended for most tasks
  • Q3_K_M: 3-bit, ~3.2GB, noticeable degradation
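
Once quantized, the GGUF file runs under llama.cpp directly or from Python through the llama-cpp-python bindings. A sketch assuming those bindings are installed; n_gpu_layers controls the partial GPU offloading mentioned above:

from llama_cpp import Llama

# Load the Q4_K_M file produced above; offload 20 transformer layers to the GPU
# and keep the rest on the CPU (n_gpu_layers=-1 offloads everything)
llm = Llama(model_path="llama3-8b-q4km.gguf", n_gpu_layers=20, n_ctx=4096)

out = llm("Explain quantization in one sentence.", max_tokens=100)
print(out["choices"][0]["text"])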

Practical example: format choice for on-premise deployment

Task: deploy a fine-tuned Llama 3.1 8B on a server with 2×RTX 3090 (48 GB total VRAM) for 50 concurrent users.

Requirements: P95 latency < 3s, throughput > 100 tok/s.

| Format | VRAM | Throughput (vLLM) | P95 latency | Quality (vs bf16) |
|---|---|---|---|---|
| bf16 | 16 GB | 180 tok/s | 1.8 s | 100% |
| AWQ INT4 | 5 GB | 280 tok/s | 1.2 s | 98.5% |
| GPTQ INT4 | 5 GB | 260 tok/s | 1.3 s | 98% |
| GGUF Q4_K_M | 4.1 GB (CPU-only) | 40 tok/s | 8 s | 98% |

Choice: AWQ INT4. It fits on a single RTX 3090 (24 GB) with headroom, its 280 tok/s throughput clears the requirement, and quality is only minimally degraded.

Inference with quantized model via vLLM

from vllm import LLM, SamplingParams

# AWQ model
llm = LLM(
    model="./llama3-8b-awq",
    quantization="awq",
    dtype="auto",
    gpu_memory_utilization=0.85,
)

# GPTQ model (alternative; load one or the other, not both in the same process)
llm = LLM(
    model="./llama3-8b-gptq-int4",
    quantization="gptq",
    dtype="auto",
)

outputs = llm.generate(["Hello, how are you?"], SamplingParams(max_tokens=200))
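
Before settling on a format, it is worth validating the throughput and latency targets on your own hardware. A crude smoke test along these lines (single batched call, aggregate wall clock only; a real benchmark would sweep concurrency and measure per-request P95):

import time
from vllm import LLM, SamplingParams

llm = LLM(model="./llama3-8b-awq", quantization="awq")
params = SamplingParams(max_tokens=200)

# Simulate 50 concurrent requests with a single batched generate call
prompts = ["Summarize the benefits of quantization."] * 50
start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / elapsed:.0f} tok/s aggregate, {elapsed:.1f}s wall clock")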

Quantization timeline

  • GPTQ/AWQ quantization of 8B model: 1–3 hours
  • GPTQ/AWQ quantization of 70B model: 6–18 hours
  • GGUF conversion: 15–60 minutes
  • Testing and optimal format selection: 1–3 days
  • Total: 2–5 days