Advanced Label Selectors

Use advanced label selectors for precise workload scheduling

Selecting the right hardware for your workloads is key to performance. The basic node labels listed in Node Types are usually all a Deployment manifest needs to schedule workloads correctly, but some situations call for more specific designations, such as certain custom container deployments.

On CoreWeave, compute nodes are tagged with Kubernetes labels, which specify their hardware type. In turn, Kubernetes affinities are specified in workload Deployment manifests to select these labels, ensuring that your workloads are correctly scheduled to the desired hardware type.

Note

For any questions about advanced scheduling or other special requirements, please contact support.

The following labels are attached to nodes on CoreWeave Cloud, and may be selected using affinities in the Deployment manifests, as demonstrated in the following affinity usage examples.

CPU model

| Label | Refers to | Value options |
| --- | --- | --- |
| `node.coreweave.cloud/cpu` | The CPU family of the CPU on the node | See CPU-only instances for a list of types and their values |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.coreweave.cloud/cpu
              operator: In
              values:
                - amd-epyc-rome
                - intel-xeon-v4

GPU count

Important

Using this selector is not recommended. Instead, request GPU resources by setting the GPU count in the Deployment spec; see the guide on deploying Custom Containers for examples.
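As an illustration of the recommended approach, the following is a minimal sketch of a container spec that requests GPUs through resource limits using the standard Kubernetes `nvidia.com/gpu` resource name; the container name and image here are placeholders, not CoreWeave-specific values:

```yaml
spec:
  containers:
    - name: example                # placeholder container name
      image: nvidia/cuda:12.2.0-base-ubuntu22.04  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 2        # the scheduler places the Pod on a node with 2 free GPUs
```

Requesting GPUs this way lets the scheduler account for GPU availability on each node, rather than merely filtering nodes by how many GPUs they physically contain.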

| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/count` | The number of GPUs provisioned in the node | 4 to 8; must be included as a string |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/count
              operator: In
              values:
                - "3"

GPU model

| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/class` | The GPU model provisioned in the node | See Node Types for a list of types and their values |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - A40

GPU VRAM

| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/vram` | The VRAM, in gigabytes, of the GPUs provisioned in the node | See the VRAM column in the GPU-enabled Node Types list |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/vram
              operator: In
              values:
                - "8"

Uplink speed

| Label | Refers to | Value options |
| --- | --- | --- |
| `ethernet.coreweave.cloud/speed` | Uplink speed from the node to the backbone | 10G, 40G, 100G |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: ethernet.coreweave.cloud/speed
              operator: In
              values:
                - 40G

NVLink

Important

This label is currently applicable only for Tesla_V100 nodes.

| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/nvlink` | Denotes whether GPUs in the node are interconnected with NVLink | true, false; must be included as a string |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/nvlink
              operator: In
              values:
                - "true"

Data center region

| Label | Refers to | Value options |
| --- | --- | --- |
| `topology.kubernetes.io/region` | The region the node is placed in | ORD1, LAS1, LGA1 |

Affinity usage example

Example
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/region
              operator: In
              values:
                - ORD1
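
Multiple labels can also be combined in a single `matchExpressions` list, in which case standard Kubernetes semantics apply: all expressions within one term must match for a node to qualify. As a sketch using the labels documented above, the following pins a workload to A40 nodes in the ORD1 region:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # Both expressions must match the same node (logical AND)
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - A40
            - key: topology.kubernetes.io/region
              operator: In
              values:
                - ORD1
```

Note that listing multiple entries under `nodeSelectorTerms` instead would OR the terms together, scheduling onto any node that satisfies at least one term.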