Run Kueue on CKS

Chart reference        Description
coreweave/cks-kueue    CoreWeave's Helm chart for deploying Kueue on CKS clusters

About Kueue

Kueue is a Kubernetes-native system that manages jobs using quotas. Kueue makes job decisions based on resource availability, job priorities, and the quota policies defined in your cluster queues. Kueue can determine when a job should wait for available resources, when a job should start (Pods created), and when a job should be preempted (active Pods deleted).

CKS supports Kueue out of the box. To make getting started as easy as possible, CoreWeave provides a Helm chart for installing Kueue. The cks-kueue chart includes a kueue subchart that configures Kueue for deployment into your CKS cluster.

Info

When you install Kueue through our Helm chart, Kueue metrics are automatically scraped and ingested into the Kueue Metrics Dashboard in CoreWeave Grafana.

Usage

Add the CoreWeave Helm repo.

Example
$
helm repo add coreweave https://charts.core-services.ingress.coreweave.com
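
Optionally, refresh the repository index and confirm that the chart is visible. Both are standard Helm commands; the chart name matches the reference at the top of this page.

Example
$
helm repo update
$
helm search repo coreweave/cks-kueue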

Then, install Kueue on your CKS cluster.

Example
$
helm install kueue coreweave/cks-kueue --namespace=kueue-system --create-namespace
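
To verify the installation, check that the Kueue controller Pods are running and that the Kueue CRDs were created. These are standard kubectl commands; the namespace matches the one used in the install command above.

Example
$
kubectl get pods --namespace=kueue-system
$
kubectl get crds | grep kueue.x-k8s.io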

Sample Kueue configuration

After installing the cks-kueue chart, use the following sample configuration to set up a basic Kueue environment for CKS. This configuration includes several key Kueue components:

  • ResourceFlavor: Defines the characteristics of compute resources (CPU, memory, GPUs) available in your cluster
  • ClusterQueue: Establishes resource quotas and admission policies across your entire cluster
  • LocalQueue: Creates namespaced queues that reference a ClusterQueue for job submission
  • WorkloadPriorityClass: Defines priority levels for jobs to determine scheduling order and preemption behavior

The configuration also defines two priority classes for different job types: production jobs with high priority and development jobs with lower priority.

Example
# ResourceFlavor defines the compute resources available in your cluster
# This flavor represents the standard CKS node configuration
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
# ClusterQueue establishes resource quotas and admission policies
# This queue allows jobs to consume up to the specified resource limits
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: "cluster-queue"
spec:
  # Enable preemption of lower priority jobs when higher priority jobs need resources
  preemption:
    withinClusterQueue: LowerPriority
  # Allow jobs from all namespaces to use this queue
  namespaceSelector: {} # Match all namespaces.
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu", "rdma/ib"]
    flavors:
    - name: "default-flavor"
      resources:
      - name: "cpu"
        nominalQuota: 254 # Total CPU cores available
      - name: "memory"
        nominalQuota: 2110335488Ki # Total memory available (~2TB)
      - name: "nvidia.com/gpu"
        nominalQuota: 16 # Total GPUs available
      - name: "rdma/ib"
        nominalQuota: 12800 # Total number of RDMA Nodes available
---
# LocalQueue creates a namespaced queue for job submission
# Jobs submitted to this queue will use the cluster-queue resources
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: "default"
  name: "default"
spec:
  clusterQueue: "cluster-queue"
---
# WorkloadPriorityClass defines priority levels for job scheduling
# Higher values = higher priority (jobs with higher priority can preempt lower priority jobs)
apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: prod-priority
value: 1000
description: "Priority class for prod jobs"
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: dev-priority
value: 100
description: "Priority class for development jobs"
---
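
To apply this configuration, save the manifests to a file (for example, kueue-config.yaml) and apply it with kubectl:

Example
$
kubectl apply -f kueue-config.yaml

To route a Job through Kueue, label it with the name of a LocalQueue and, optionally, a WorkloadPriorityClass. The following is a minimal sketch that assumes the default LocalQueue and the prod-priority class defined above; kueue.x-k8s.io/queue-name and kueue.x-k8s.io/priority-class are the standard Kueue labels for this purpose.

Example
# Minimal sketch: routes the Job to the "default" LocalQueue with prod-priority
apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
  namespace: default
  labels:
    kueue.x-k8s.io/queue-name: default
    kueue.x-k8s.io/priority-class: prod-priority
spec:
  parallelism: 1
  completions: 1
  template:
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "30s"]
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "1Gi"
      restartPolicy: Never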

Observability

CoreWeave Grafana provides a Kueue Metrics Dashboard which you can use to monitor your Kueue cluster.

Topology Aware Scheduling (TAS)

Topology-Aware Scheduling allows Kueue to make smarter scheduling decisions by considering the physical topology of your cluster's Nodes. This is important for HPC, AI, and ML workloads, where network latency between Nodes can be a performance bottleneck. TAS can co-locate a job's Pods to minimize communication overhead and maximize performance.

The TopologyAwareScheduling feature in the Kueue controller is enabled by default. However, to use it, you need to adjust some of the Kueue resources.

Once the Helm chart is installed and the Kueue CRDs exist, set the following values to create topologies based on CKS Node labels for Kueue to use:

Example
$
helm upgrade kueue coreweave/cks-kueue --namespace=kueue-system --values - <<EOF
topologies:
  - name: infiniband
    levels:
      - backend.coreweave.cloud/fabric
      - backend.coreweave.cloud/leafgroup
  - name: multinode-nvlink-ib
    levels:
      - backend.coreweave.cloud/fabric
      - backend.coreweave.cloud/leafgroup
      - ds.coreweave.com/nvlink.domain
EOF
  • The infiniband topology is for instance types that are part of InfiniBand fabrics, such as H100, H200, and B200.

  • The multinode-nvlink-ib topology extends the infiniband topology to also cover instance types with rack-scale NVLink, such as GB200.

After the Helm chart is upgraded, you will see the new Topology CRs deployed in the cluster.
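
To confirm that the Topology resources exist, list them with kubectl (Topology is a cluster-scoped Kueue resource):

Example
$
kubectl get topologies.kueue.x-k8s.io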

The following example configuration is an adjustment of the one shown above. It demonstrates how to use the Topology resources by referencing them in ResourceFlavor resources, which are then used by ClusterQueue and LocalQueue resources.

Example
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: infiniband-flavor
spec:
  topologyName: infiniband # References the infiniband Topology CR
  nodeLabels:
    backend.coreweave.cloud/flavor: "infiniband"
---
# This flavor enables topology-aware scheduling across NVLINK domains
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: gb200-flavor
spec:
  topologyName: multinode-nvlink-ib # References the multinode-nvlink-ib Topology CR
  nodeLabels:
    node.kubernetes.io/instance-type: gb200-4x
---
# ClusterQueue for infiniband-connected workloads
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: "infiniband-queue"
spec:
  preemption:
    withinClusterQueue: LowerPriority
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu", "rdma/ib"]
    flavors:
    - name: "infiniband-flavor"
      resources:
      - name: "cpu"
        nominalQuota: 2048 # 16 nodes * 128 vCPU per node
      - name: "memory"
        nominalQuota: 34359738368Ki # 16 nodes * 2Ti per node = 32Ti
      - name: "nvidia.com/gpu"
        nominalQuota: 128 # 16 nodes * 8 GPUs per node
      - name: "rdma/ib"
        nominalQuota: 12800
---
# ClusterQueue for GB200 workloads with multinode-NVLINK
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: "gb200-queue"
spec:
  preemption:
    withinClusterQueue: LowerPriority
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu", "rdma/ib"]
    flavors:
    - name: "gb200-flavor"
      resources:
      - name: "cpu"
        nominalQuota: 2304 # 16 nodes * 144 vCPU per node
      - name: "memory"
        nominalQuota: 15000000000Ki # 16 nodes * 960 GB per node = 15.36 TB
      - name: "nvidia.com/gpu"
        nominalQuota: 64 # 16 nodes * 4 GPUs per node
      - name: "rdma/ib"
        nominalQuota: 12800
---
# LocalQueue for infiniband workloads in the default namespace
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: "default"
  name: "infiniband-local"
spec:
  clusterQueue: "infiniband-queue"
---
# LocalQueue for GB200 workloads in the default namespace
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: "default"
  name: "gb200-local"
spec:
  clusterQueue: "gb200-queue"
---
# WorkloadPriorityClass definitions (same as basic example)
apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: prod-priority
value: 1000
description: "Priority class for prod jobs"
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: dev-priority
value: 100
description: "Priority class for development jobs"
---

Example Jobs with Topology Constraints

You can use the kueue.x-k8s.io/podset-required-topology annotation to ensure that all Pods in a job are scheduled within the same topology domain.

Example: Four Pods on One Leafgroup (Infiniband Queue)

This example schedules four Pods within a single leafgroup:

Example
apiVersion: batch/v1
kind: Job
metadata:
  name: test-tas-job
  namespace: default
  labels:
    kueue.x-k8s.io/queue-name: infiniband-local
spec:
  parallelism: 4
  completions: 4
  template:
    metadata:
      annotations:
        kueue.x-k8s.io/podset-required-topology: "backend.coreweave.cloud/leafgroup"
    spec:
      containers:
      - name: training
        image: busybox
        command: ["sleep", "30s"]
        resources:
          requests:
            cpu: "32"
            memory: "256Gi"
            nvidia.com/gpu: "8"
            rdma/ib: "1"
          limits:
            cpu: "32"
            memory: "256Gi"
            nvidia.com/gpu: "8"
            rdma/ib: "1"
      restartPolicy: Never
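
After the Job is submitted, Kueue creates a Workload object that records whether the Job was admitted. To check admission status and see which Nodes the Pods landed on (standard kubectl commands; the label selector assumes the Job name used above):

Example
$
kubectl get workloads --namespace=default
$
kubectl get pods --namespace=default -l job-name=test-tas-job -o wide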

Example: Four Pods in One NVLINK Domain (GB200 Queue)

This example schedules four Pods within a single NVLINK domain on GB200 Nodes:

Example
apiVersion: batch/v1
kind: Job
metadata:
  name: gb200-test-tas
  labels:
    kueue.x-k8s.io/queue-name: gb200-local
spec:
  parallelism: 4
  completions: 4
  template:
    metadata:
      annotations:
        kueue.x-k8s.io/podset-required-topology: "ds.coreweave.com/nvlink.domain"
    spec:
      containers:
      - name: training
        image: your-training-image:latest
        resources:
          requests:
            cpu: "32"
            memory: "256Gi"
            nvidia.com/gpu: "4"
            rdma/ib: "1"
          limits:
            cpu: "32"
            memory: "256Gi"
            nvidia.com/gpu: "4"
            rdma/ib: "1"
      restartPolicy: Never
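
To confirm that the Pods were co-located in a single NVLINK domain, compare the Nodes they were scheduled on against the ds.coreweave.com/nvlink.domain Node label. Both are standard kubectl commands; the label selector assumes the Job name used above.

Example
$
kubectl get pods -l job-name=gb200-test-tas -o wide
$
kubectl get nodes -L ds.coreweave.com/nvlink.domain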