Selecting an Instance: Key Considerations for LLMs

Learn about the key considerations for selecting the right instance for your LLM workload

The primary technical constraint for running any Large Language Model (LLM) is the available GPU memory. For a model to operate efficiently, all of its components, including its parameters, the data being processed, and intermediate calculation states, must fit into the memory of one or more GPUs. Understanding how these components consume memory is crucial for selecting the right hardware for your task, whether it's training, fine-tuning, or inference.

Model parameters (weights)

At its core, an LLM consists of billions of numerical values, known as parameters or weights. These parameters are the learned knowledge of the model. The total memory required to store them is a direct function of the number of parameters and the numerical precision used.

Understanding precision and quantization

Precision refers to the data type used to store the model's parameters. Using lower precision reduces the memory footprint and can significantly increase computation speed on supported hardware.

  • FP16 (half precision): The standard for model training and fine-tuning; the closely related BF16 format has the same 2-byte-per-parameter footprint. It provides a good balance between numerical accuracy and memory usage.
  • Quantization (INT8 & INT4): This is a powerful technique, primarily used for inference, that converts the model's FP16 weights into lower-precision 8-bit or 4-bit integers. This process "compresses" the model, allowing you to run a larger model on the same hardware or the same model on less expensive hardware. The trade-off is a minor, and often imperceptible, reduction in accuracy, which is generally acceptable for production inference workloads.
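
To make the idea concrete, here is a minimal sketch of the simplest possible scheme, symmetric per-tensor INT8 quantization, written with NumPy. Production quantizers (per-channel scales, group-wise INT4, calibration data) are considerably more sophisticated; this only illustrates how the same weights can occupy half the memory.

```python
import numpy as np

def quantize_int8_symmetric(weights_fp16: np.ndarray):
    """Minimal symmetric per-tensor INT8 quantization: map FP16 weights onto
    the integer range [-127, 127] using a single scale factor."""
    scale = float(np.abs(weights_fp16).max()) / 127.0
    q = np.clip(np.round(weights_fp16 / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize later with q.astype(np.float16) * scale

# Example: 4,096 FP16 weights shrink from 8 KB to 4 KB (plus one scale value).
w = np.random.randn(4096).astype(np.float16)
q, scale = quantize_int8_symmetric(w)
print(w.nbytes, q.nbytes)  # 8192 4096
```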

Use the following rules of thumb to estimate the GPU memory, in GB, required to load the model's weights (a short sketch follows the list):

  • FP16: Memory (GB) ≈ Parameters (in billions) × 2
  • INT8: Memory (GB) ≈ Parameters (in billions) × 1
  • INT4: Memory (GB) ≈ Parameters (in billions) × 0.5
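
As a quick sanity check, the snippet below is a minimal sketch of those rules of thumb. It covers only the weights; activations and the KV cache (discussed next) add to the total, so treat the result as a lower bound.

```python
def estimate_weight_memory_gb(params_billions: float, precision: str = "fp16") -> float:
    """Estimate the GPU memory (GB) needed just to hold the model weights."""
    bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}[precision.lower()]
    return params_billions * bytes_per_param

# Example: a 70B-parameter model
print(estimate_weight_memory_gb(70, "fp16"))  # 140.0 GB -- needs a multi-GPU system
print(estimate_weight_memory_gb(70, "int4"))  # 35.0 GB  -- fits on a single 48 GB GPU
```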

The impact of the KV cache (for inference)

During inference, a significant portion of GPU memory is consumed by the key-value (KV) cache. To generate new text, an LLM must attend to all of the preceding tokens. The KV cache stores these intermediate attention calculations so they don't have to be recomputed for every new token generated. While this dramatically speeds up inference, the cache's memory footprint is dynamic and grows with the complexity of the request. Its size scales linearly with both the sequence length (the size of the input prompt plus the generated text) and the batch size (the number of concurrent requests being processed). Managing the KV cache is a critical aspect of optimizing for high-throughput inference with long context windows.
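
The cache's size can be estimated from the model architecture: two tensors (keys and values) per layer, per KV head, per token, per concurrent request. The sketch below assumes a 7B-class architecture (32 layers, 32 KV heads, head dimension 128) purely for illustration.

```python
def estimate_kv_cache_gb(batch_size: int, seq_len: int, num_layers: int,
                         num_kv_heads: int, head_dim: int,
                         bytes_per_value: int = 2) -> float:  # 2 bytes = FP16
    """Estimate KV-cache size in GB: two tensors (keys and values) per layer,
    per KV head, per token, per concurrent request."""
    total_bytes = (2 * num_layers * num_kv_heads * head_dim
                   * seq_len * batch_size * bytes_per_value)
    return total_bytes / 1e9

# Example: 16 concurrent requests with 4,096-token contexts on the assumed
# 7B-class architecture (32 layers, 32 KV heads, head_dim 128)
print(estimate_kv_cache_gb(batch_size=16, seq_len=4096,
                           num_layers=32, num_kv_heads=32, head_dim=128))
# ≈ 34.4 GB -- well over the ~14 GB the FP16 weights of such a model occupy
```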

GPU memory for training

Optimizer states and gradients make training or fine-tuning a model far more memory-intensive than inference. In addition to the model weights and data batches, the GPU must also store:

  • Optimizer states: Most modern optimizers, like Adam or AdamW, maintain momentum and variance states for each parameter to stabilize and accelerate training. This can consume twice as much memory as the model's parameters themselves.
  • Gradients: The calculated "directions" for updating each model weight. The memory required is typically equal to the size of the model parameters in FP16.

As a result, fine-tuning a model can require 3x to 4x more GPU memory than simply running it for inference.
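
The sketch below turns that rule of thumb into numbers. The per-component byte counts are assumptions for a typical Adam setup in half precision; frameworks that additionally keep FP32 master weights and FP32 optimizer moments land closer to ~16 bytes per parameter.

```python
def estimate_training_memory_gb(params_billions: float,
                                weight_bytes: float = 2.0,     # FP16/BF16 weights
                                gradient_bytes: float = 2.0,   # gradients in half precision
                                optimizer_bytes: float = 4.0   # Adam momentum + variance (~2x the weights)
                                ) -> float:
    """Rough training-memory budget following the rule of thumb above.

    Excludes activation memory, which also grows with batch size and sequence
    length. Keeping FP32 master weights and FP32 optimizer moments pushes the
    total toward ~16 bytes per parameter.
    """
    bytes_per_param = weight_bytes + gradient_bytes + optimizer_bytes
    # parameters (in billions) x bytes per parameter ≈ memory in GB
    return params_billions * bytes_per_param

# Example: full fine-tuning of a 70B model needs roughly 70 x 8 ≈ 560 GB
# before activations, versus ~140 GB just to load its weights for FP16 inference.
print(estimate_training_memory_gb(70))  # 560.0
```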

Running a model larger than a single GPU's memory requires splitting it across multiple GPUs, a practice known as model parallelism, which in turn relies on high-speed interconnects to link those GPUs together. The performance of this approach depends on a two-level fabric:

  • The GPU Fabric (NVLink): This is an ultra-high-speed, direct link between GPUs. In standard servers like our B200 or H100 instances, NVLink connects the 8 GPUs within the server chassis. However, our Grace Blackwell NVL72 instances (gb200-4x) evolve this by using an NVLink Switch fabric to extend this seamless memory domain across all 72 GPUs in an entire rack, effectively turning the rack into one giant GPU.
  • The System Fabric (InfiniBand): This is the high-performance network that scales your workload further. For standard servers, InfiniBand connects multiple nodes together. For Grace Blackwell NVL72 systems, its role is elevated to connecting multiple racks, enabling the construction of massive AI supercomputers from these powerful rack-scale units.
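
As a rough capacity check before consulting the table below, you can estimate how many GPUs a model needs and therefore which level of the fabric it will exercise. This sketch assumes 80 GB per H100 (the 640 GB 8-GPU figure below divided by eight) and reserves an assumed 10% of each GPU for overhead; real deployments reserve more or less depending on batch size and context length.

```python
import math

def min_gpus_needed(model_memory_gb: float, per_gpu_memory_gb: float,
                    overhead_fraction: float = 0.1) -> int:
    """Minimum number of GPUs needed to hold a model's weights, reserving a
    fraction of each GPU (assumed 10% here) for activations, KV cache, and
    framework overhead."""
    usable_gb = per_gpu_memory_gb * (1.0 - overhead_fraction)
    return math.ceil(model_memory_gb / usable_gb)

# Example: Llama 3.1 405B in FP16 needs ~810 GB for its weights alone.
# On 80 GB H100s that comes to 12 GPUs -- more than one 8-GPU chassis,
# so the workload spans both the NVLink and InfiniBand fabrics.
print(min_gpus_needed(810, 80))  # 12
```

The table below summarizes how each instance type maps to model sizes at the precisions discussed above.
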
| GPU | System Config | Total GPU Memory | Max Inference Model Size (FP8)* | Max Training Model Size (BF16) | Examples of Models That Fit |
| --- | --- | --- | --- | --- | --- |
| GB300 | NVL72 (72 GPUs) | 20.7 TB | ~10.3 Trillion | ~1.3 Trillion | Inference: Next-generation, multi-trillion parameter models at high fidelity. Training: Foundation models in the 1T+ parameter range |
| GB300 | NVL64 (64 GPUs) | 18.4 TB | ~9.2 Trillion | ~1.15 Trillion | Inference: Very large frontier models at high fidelity. Training: Efficiently training 1T parameter models from scratch |
| GB200 | NVL72 (72 GPUs) | 13.8 TB | ~13.8 Trillion | ~860 Billion | Inference: GPT-5, Claude Opus, Gemini Ultra, other large proprietary models. Training: Nemotron-4 340B, Llama 3.1 405B (with plenty of room) |
| GB200 | NVL64 (64 GPUs) | 12.3 TB | ~12.3 Trillion | ~768 Billion | Inference: GPT-5, Claude Opus, Gemini Ultra, other large proprietary models. Training: Llama 3.1 405B |
| B200 | 8-GPU system | 1.54 TB | ~1.54 Trillion | ~96 Billion | Inference: Llama 3.1 405B, Nemotron-4 340B, Gemini Pro. Training: Llama 3 70B, Mixtral 8x7B |
| H200 | 8-GPU system | 1.13 TB | ~1.13 Trillion | ~70 Billion | Inference: Falcon-180B, models up to ~1T. Training: Llama 3 70B, other models up to ~70B from scratch |
| H100 | 8-GPU system | 640 GB | ~640 Billion | ~40 Billion | Inference: Llama 3.1 405B, Nemotron-4 340B. Training: Qwen2 32B, fine-tuning large models |
| RTX Pro 6000 (Blackwell) | 1 GPU | 96 GB | ~86 Billion | ~6 Billion | Inference: Llama 3 70B, Mixtral 8x7B. Training: Models up to ~6B from scratch; fine-tuning larger models |
| L40S | 1 GPU | 48 GB | ~43 Billion | ~3 Billion | Inference: Qwen2 32B, LLaVA 34B. Training: Small custom models (~2B); efficient fine-tuning |
| L40 | 1 GPU | 48 GB | ~43 Billion | ~3 Billion | Inference: Qwen2 32B, LLaVA 34B. Training: Small custom models (~2B); fine-tuning |
| GH200 | 1 GPU | 576 GB | ~576 Billion | ~6 Billion | Inference: Llama 3.1 405B, Nemotron-4 340B. Training: Small custom models (~6B) |
| A100 | 8-GPU system | 640 GB | ~640 Billion | ~40 Billion | Inference: Llama 3.1 405B, Falcon-180B. Training: Qwen2 32B, fine-tuning large models |

*Figures represent theoretical estimates for a single model. A typical production scenario often involves running multiple, smaller models concurrently.
