Advanced Label Selectors
Use advanced label selectors for precise workload scheduling
Selecting the right hardware for your workloads is key to performance. The basic node labels listed in the Node Types list are usually all a Deployment manifest needs to schedule workloads correctly, but some situations, such as certain custom container deployments, call for more specific designations.
On CoreWeave, compute nodes are tagged with Kubernetes labels, which specify their hardware type. In turn, Kubernetes affinities are specified in workload Deployment manifests to select these labels, ensuring that your workloads are correctly scheduled to the desired hardware type.
Note
The following labels are attached to nodes on CoreWeave Cloud and may be selected using affinities in Deployment manifests, as demonstrated in the affinity usage examples below.
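Each affinity snippet below belongs under `spec.template.spec.affinity` of a Deployment. As a minimal sketch of where the block sits (the Deployment name, image, and selected region here are placeholders, not values required by CoreWeave):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/region
                    operator: In
                    values:
                      - ORD1
      containers:
        - name: example
          image: example/image:latest   # placeholder image
```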
| Label | Refers to | Value options |
| --- | --- | --- |
| `node.coreweave.cloud/cpu` | The CPU family of the node's CPU | CPU model identifiers, e.g. `amd-epyc-rome`, `intel-xeon-v4` |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.coreweave.cloud/cpu
              operator: In
              values:
                - amd-epyc-rome
                - intel-xeon-v4
```
Important
Using this selector is not recommended. Instead, request GPU resources by setting the GPU count in the Deployment spec, as sketched after the example below; see the guide on deploying Custom Containers for complete examples.
| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/count` | The number of GPUs provisioned in the node | `4` to `8`; must be specified as a string |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/count
              operator: In
              values:
                - "4"
```
| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/class` | The GPU model provisioned in the node | GPU model identifiers, e.g. `A40` |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - A40
```
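If a hard requirement is too strict, the same label can instead be expressed as a soft preference using `preferredDuringSchedulingIgnoredDuringExecution`. The following is a sketch of that standard Kubernetes construct, not a CoreWeave-specific recommendation:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                     # relative weight among preferred terms
        preference:
          matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - A40
```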
| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/vram` | The VRAM, in gigabytes, of the GPUs provisioned in the node | VRAM size in gigabytes, specified as a string, e.g. `"8"` |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/vram
              operator: In
              values:
                - "8"
```
| Label | Refers to | Value options |
| --- | --- | --- |
| `ethernet.coreweave.cloud/speed` | Uplink speed from the node to the backbone | `10G`, `40G`, `100G` |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: ethernet.coreweave.cloud/speed
              operator: In
              values:
                - 40G
```
Important
This label is currently applicable only for `Tesla_V100` nodes.

| Label | Refers to | Value options |
| --- | --- | --- |
| `gpu.nvidia.com/nvlink` | Denotes whether the node's GPUs are interconnected with NVLink | `true`, `false` |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/nvlink
              operator: In
              values:
                - "true"
```
| Label | Refers to | Value options |
| --- | --- | --- |
| `topology.kubernetes.io/region` | The region the node is placed in | `ORD1`, `LAS1`, `LGA1` |
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/region
              operator: In
              values:
                - ORD1
```
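Labels can also be combined: expressions listed under a single `matchExpressions` entry must all match (they are ANDed), so a workload can be pinned to, for example, a particular GPU model in a particular region. A sketch combining two of the labels above:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - A40
            - key: topology.kubernetes.io/region
              operator: In
              values:
                - ORD1
```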