GB200 NVL72-Powered Instances
Learn how GB200 delivers efficient, performant, and energy-conscious AI compute power
AI models with trillions of parameters are becoming increasingly common, and the demand for computational power is surging. Traditional GPU solutions are struggling to meet these demands, leading to development bottlenecks, high energy consumption, and escalating costs.
CoreWeave's GB200 NVL72-powered instances address these challenges by harnessing the groundbreaking architecture of NVIDIA's GB200 Grace Blackwell Superchip and NVLink Switch System. Each NVL72 rack delivers up to 1.44 exaFLOPS of AI compute power and 13.5 TB of high-bandwidth GPU memory connected over fifth-generation NVLink. Liquid cooling improves overall efficiency, consuming less energy than traditional air-cooled systems.
These instances represent the pinnacle of our high-performance computing offerings. Customers should choose these instances when they need maximum performance for large-scale AI training and inference, unprecedented memory capacity for massive datasets, and ultra-fast GPU-to-GPU communication for distributed computing.
To ensure optimal performance, GB200 NVL72-powered instances must be deployed as full racks of 18 Nodes. When deploying larger Node Pools, targetNodes must be a multiple of 18, such as 36 or 54. CKS enforces this restriction and will not deploy GB200 instances in partial racks. For example, the following Node Pool manifest deploys a single full rack:
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: example-nodepool
spec:
  instanceType: gb200-4x
  targetNodes: 18
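Assuming the manifest above is saved as nodepool.yaml (a hypothetical filename), it can be applied like any other CKS resource, and the new Nodes can be watched as they join the cluster:

kubectl apply -f nodepool.yaml
kubectl get nodes --watch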
Because GB200 NVL72-powered instances must be deployed as full racks, CoreWeave's Day 2+ automation cannot automatically replace a misbehaving Node with one from a different rack. Instead, NVL72-powered Nodes must be physically exchanged in the same rack. As a best practice, workloads should tolerate up to two unavailable Nodes per rack for maintenance purposes. If a rack experiences more than two unavailable Nodes, the entire rack is cordoned and drained for service.
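One way to express this tolerance in Kubernetes is a PodDisruptionBudget. The following is a minimal sketch, assuming a hypothetical job that runs one Pod per Node and labels its Pods app: training-job; both names are placeholders, not part of the CKS API:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: training-job-pdb   # hypothetical name
spec:
  maxUnavailable: 2        # mirrors the two-Nodes-per-rack maintenance guidance
  selector:
    matchLabels:
      app: training-job    # assumes your Pods carry this label

A PodDisruptionBudget only limits voluntary disruptions such as drains; the workload itself must still handle the loss of those Pods gracefully.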
Manage Pod affinity
To fully leverage the NVL72 architecture's shared NVLink fabric, all Nodes for a given job should be scheduled onto the same rack, within the same NVLink domain, for optimal performance. This is especially important for large-scale distributed computing tasks, where efficient communication between GPUs can dramatically reduce processing times.
Slurm users should use the Topology/Block Plugin for Slurm to control job placement. See Block Scheduling in Slurm to learn more.
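As a rough sketch, assuming a cluster running Slurm 23.11 or later with the topology/block plugin configured for 18-Node blocks, a job can request whole-rack placement with the --segment option (train.sh is a hypothetical launcher):

#!/bin/bash
#SBATCH --nodes=36        # two full NVL72 racks
#SBATCH --segment=18      # place the job in contiguous blocks of 18 Nodes (one rack)
srun ./train.sh           # hypothetical training launcher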
Control placement with NVLink domain
Kubernetes controls Pod placement with affinity rules that steer Pods toward Nodes with specific labels. In CKS, all Nodes are labeled with their NVLink domain, allowing precise control over Pod placement. To ensure multiple Pods are scheduled onto the same NVL72 rack, set their affinity toward Nodes within the same NVLink domain.
In the GB200 NVL72 architecture, all Nodes within the same rack share the same unique ds.coreweave.com/nvlink.domain label. If a Node Pool spans multiple racks, Pods can reference multiple NVLink domains in matchExpressions.values.
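To see which NVLink domain each Node belongs to, the label can be listed as a column with kubectl:

kubectl get nodes -L ds.coreweave.com/nvlink.domain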
For example, this Pod affinity rule targets a single NVLink domain:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: ds.coreweave.com/nvlink.domain
              operator: In
              values:
                - <NVLINK_DOMAIN> # Your NVLink domain
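For a Node Pool that spans multiple racks, the same rule can list several NVLink domains. This is a sketch with placeholder values:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: ds.coreweave.com/nvlink.domain
              operator: In
              values:
                - <NVLINK_DOMAIN_A> # placeholder: first rack's NVLink domain
                - <NVLINK_DOMAIN_B> # placeholder: second rack's NVLink domain

Note that listing multiple domains only constrains Pods to that set of racks; it does not by itself keep all of a job's Pods on a single rack.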
Control placement with InfiniBand labels
In some cases, it's useful to deploy Pods on Nodes in different NVLink domains while controlling the InfiniBand network location. To support this, CKS labels each Node with information about its InfiniBand fabric, superpod, and rack number.
- ib.coreweave.cloud/fabric is the InfiniBand fabric name.
- ib.coreweave.cloud/superpod is the InfiniBand superpod number.
- node.coreweave.cloud/rack is the rack number, which is the same as the NVLink domain in the NVL72 architecture.
See Reference: InfiniBand Labels for more information.
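To inspect these labels on your Nodes, they can be displayed as columns with kubectl:

kubectl get nodes -L ib.coreweave.cloud/fabric,ib.coreweave.cloud/superpod,node.coreweave.cloud/rack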
For example, this Pod affinity rule targets a specific InfiniBand fabric, superpod, and rack:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: ib.coreweave.cloud/fabric
              operator: In
              values:
                - <FABRIC_NAME> # Your fabric name
            - key: ib.coreweave.cloud/superpod
              operator: In
              values:
                - <SUPERPOD_NUMBER> # Your superpod number
            - key: node.coreweave.cloud/rack
              operator: In
              values:
                - <RACK_NUMBER> # Your rack number
More resources
To learn more about our platform, see the following resources: