How It Works
Running large language models (LLMs) locally requires enough GPU VRAM to hold both the model weights and the key-value (KV) cache. The memory required grows with model size and with context length, since the KV cache stores entries for every token of context.
This calculator estimates total GPU memory usage from three inputs: model size, quantization level (4-bit, 8-bit, or FP16), and context length. It helps determine whether a model fits entirely in GPU VRAM, requires offloading to system RAM, or exceeds your hardware limits.
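As a rough sketch of how such an estimate can be computed (the function name, parameters, and the fixed FP16 KV-cache assumption below are illustrative, not the calculator's actual implementation):

```python
# Rough VRAM estimate: model weights + KV cache.
# A sketch under simplified assumptions, not the calculator's exact formula.

def estimate_vram_gb(
    params_billions: float,   # total parameter count, in billions
    bytes_per_weight: float,  # 0.5 for 4-bit, 1.0 for 8-bit, 2.0 for FP16
    num_layers: int,          # transformer layers
    num_kv_heads: int,        # KV heads (fewer than attention heads with GQA)
    head_dim: int,            # dimension per attention head
    context_length: int,      # tokens kept in the KV cache
    kv_bytes: float = 2.0,    # assumes FP16 KV-cache entries
) -> float:
    # Weights: one entry per parameter at the chosen precision.
    weight_bytes = params_billions * 1e9 * bytes_per_weight
    # KV cache: 2 (K and V) per layer, per KV head, per head dim, per token.
    kv_cache_bytes = 2 * num_layers * num_kv_heads * head_dim * context_length * kv_bytes
    return (weight_bytes + kv_cache_bytes) / 1024**3

# Example: Llama 3 8B quantized to 4-bit with an 8K context
# (32 layers, 8 KV heads, head dim 128 are the published Llama 3 8B values).
print(f"{estimate_vram_gb(8.0, 0.5, 32, 8, 128, 8192):.1f} GB")  # ~4.7 GB
```

Real-world usage is somewhat higher once activation buffers and runtime overhead are included, which is why these figures should be read as estimates rather than exact requirements.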
Supported models include Llama 3 (8B, 70B), Mistral 7B, Mixtral, and other popular open-source LLMs used for local inference.