Create a Node Pool

Create a Node Pool to manage Nodes as a single entity

To add Nodes to CKS clusters, you must first create and deploy a Node Pool to associate with a given cluster.

Node Pools are deployed as Kubernetes Custom Resources (CRs), which allocate the number, type, and regional placement of Nodes for use by a specified CKS cluster.

Node Pools can be deployed either directly using Kubernetes, or on the Cloud Console using the Node Pools dashboard.
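
If you prefer the command line, you can optionally confirm that the NodePool Custom Resource is available in your cluster before proceeding. Note that kubectl explain prints the documented spec fields only if the CRD publishes an OpenAPI schema:

Example
$ kubectl api-resources | grep -i nodepool
$ kubectl explain nodepool.spec --api-version=compute.coreweave.com/v1alpha1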

Prerequisites

  • An active CoreWeave account
  • kubectl installed locally
  • An active API Access Token, with an associated Kubeconfig
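
As an optional sanity check, confirm that kubectl can reach your CKS cluster using the Kubeconfig associated with your token. This assumes the Kubeconfig is at the default path (~/.kube/config) or exported via the KUBECONFIG environment variable:

Example
$ kubectl config current-context
$ kubectl get nodes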

Create a Node Pool using the Cloud Console

Log in to your CoreWeave account, then navigate to Node Pools in the left-hand navigation.

Select the CKS cluster to which the Node Pool should be added from the drop-down menu at the top of the page.

Next, click the Create Node Pool button at the top right corner of the dashboard.

The creation page features a YAML editor on the right-hand side, with a corresponding GUI on the left-hand side.

Configure the essential aspects of the Node Pool as desired:

| Field         | Type    | YAML field        | Description                                                                |
| ------------- | ------- | ----------------- | -------------------------------------------------------------------------- |
| Name          | String  | metadata.name     | The name of the Node Pool                                                   |
| Cluster       | String  | spec.cluster      | The cluster to which the Node Pool will be added                            |
| Compute Class | String  | spec.computeClass | The compute class of the Node Pool (default or spot); defaults to default   |
| Target Nodes  | Integer | spec.targetNodes  | The desired number of Nodes in the Node Pool (minimum: 0)                   |
| Instance Type | String  | spec.instanceType | The desired instance type for the Node Pool                                 |
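
For example, once the essential fields are filled in, the YAML editor might contain a minimal spec like the following sketch. The Node Pool and cluster names are placeholders; substitute your own values:

Example
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: my-nodepool
spec:
  cluster: my-cks-cluster
  computeClass: default
  targetNodes: 2
  instanceType: gd-8xh100ib-i128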

Node Pool quotas

When a Node Pool is created, CKS checks the request against the organization's quota. If the organization does not have enough remaining quota, the Node Pool creation fails. Error messages are displayed in the output of kubectl describe on the nodepool resource:

Example
Type     Reason                      Age                  From              Message
----     ------                      ----                 ----              -------
Warning  CWOverQuota                 2m6s (x21 over 47m)  nodePoolOperator  TargetNodes pushes org over quota by <OVERAGE>. Quota limit is <QUOTA> for instance type <INSTANCE_TYPE> in zone <ZONE>
Warning  CWNodePoolQuotaCheckFailed  2m6s (x21 over 47m)  nodePoolOperator  Internal quota error
Info

If the number set in targetNodes exceeds the quota's maximum, Node Pool creation fails. For example, if the quota's maximum is 10 and targetNodes is set to 15, creation fails completely - it does not partially provision. targetNodes may be set equal to the quota maximum: if the quota maximum is 10 and targetNodes is set to 10, Node Pool creation succeeds.
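
To review quota-related events from the command line, you can describe the Node Pool or filter events by resource kind. The Node Pool name below is a placeholder:

Example
$ kubectl describe nodepool my-nodepool
$ kubectl get events --field-selector involvedObject.kind=NodePool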

For more information, see the Node Pool reference page.

For more details regarding your organization's quota, please contact your CoreWeave representative.

Configure taints, labels, and annotations

At the bottom of the creation page, configure any desired taints, annotations, or labels for the Node Pool.

Info

For more information about Node taint scheduling, see Taints and Tolerations in the official Kubernetes documentation. For information about the valid syntax for labels and annotations in CKS and vanilla Kubernetes, see Labels and Selectors in the official Kubernetes documentation.
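
As a sketch, a taint and matching label on a Node Pool pair with a toleration and nodeSelector on the Pods you want to land on those Nodes. The workload/type key below is an illustrative name, not a required convention:

Example
# Node Pool spec fields:
nodeLabels:
  workload/type: "training"
nodeTaints:
  - key: workload/type
    value: "training"
    effect: NoSchedule

# Pod spec fields for a Pod that should schedule onto those Nodes:
tolerations:
  - key: workload/type
    operator: Equal
    value: "training"
    effect: NoSchedule
nodeSelector:
  workload/type: "training"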

Deploy the Node Pool

When the Node Pool is configured as desired, click the Submit button to deploy it. Click the Reset button to clear all fields.

After you click the Submit button, you are directed back to the Node Pools dashboard. The new Node Pool is listed in a Pending state until deployment completes, at which point its status changes to Healthy.

Learn more

To learn more about other Node Pool conditions, see the Node Pool reference on conditions.

Create a Node Pool using Kubernetes

First, configure a Node Pool manifest. Here's an example of a default Node Pool:

example-nodepool.yaml
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: example-nodepool
spec:
  computeClass: default
  autoscaling: false
  instanceType: gd-8xh100ib-i128
  maxNodes: 0
  minNodes: 0
  targetNodes: 2
  nodeLabels:
    my-label/node: "true"
  nodeAnnotations:
    my-annotation/node: "true"
  nodeTaints:
    - key: node-taint
      value: "true"
      effect: NoSchedule

The fields in this manifest map to the following values:

| Field           | Type    | Description                                                                                                            | Default |
| --------------- | ------- | ---------------------------------------------------------------------------------------------------------------------- | ------- |
| computeClass    | String  | The compute class of the Node Pool. default is used for Reserved and On-Demand instances; spot is used for Spot instances, which can be preempted | default |
| instanceType    | String  | GPU Instance or CPU Instance type                                                                                        | N/A     |
| autoscaling     | Boolean | Whether the Node Pool has autoscaling enabled                                                                            | false   |
| targetNodes     | Integer | The desired number of Nodes in the Node Pool (minimum 0)                                                                 | N/A     |
| minNodes        | Integer | The minimum number of Nodes in the Node Pool (Optional if autoscaling: false)                                            | 0       |
| maxNodes        | Integer | The maximum number of Nodes in the Node Pool (Optional if autoscaling: false)                                            | 0       |
| nodeLabels      | Map     | Labels to apply to the Nodes in the Node Pool, to organize Nodes for specific purposes and Pod scheduling (Optional)     | N/A     |
| nodeAnnotations | Map     | Annotations to apply to the Nodes in the Node Pool (Optional)                                                            | N/A     |
| nodeTaints      | Map     | Taints to apply to the Nodes in the Node Pool, so that only Pods with matching tolerations are scheduled there (Optional) | N/A    |

In the example above, the Node Pool manifest creates a Node Pool with the following characteristics:

| Key             | Example value                                                | Description                                                                                                |
| --------------- | ------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------- |
| name            | example-nodepool                                              | The name of the Node Pool                                                                                    |
| computeClass    | default                                                       | The compute class of the Node Pool, in this case a default Node Pool for Reserved and On-Demand instances    |
| autoscaling     | false                                                         | Autoscaling is not enabled                                                                                   |
| instanceType    | gd-8xh100ib-i128                                              | The type of instances to include in the Node Pool, in this case instances with eight H100 GPUs and InfiniBand |
| minNodes        | 0                                                             | The minimum number of Nodes in the Node Pool - not set in this case (0)                                      |
| maxNodes        | 0                                                             | The maximum number of Nodes in the Node Pool - not set in this case (0)                                      |
| targetNodes     | 2                                                             | The desired number of Nodes in the Node Pool, in this case 2                                                 |
| nodeLabels      | my-label/node: "true"                                         | The label to place on all Nodes within the Node Pool                                                         |
| nodeAnnotations | my-annotation/node: "true"                                    | The annotation to place on all Nodes within the Node Pool                                                    |
| nodeTaints      | [{ key: "node-taint", value: "true", effect: "NoSchedule" }]  | The taint to place on all Nodes in the Node Pool                                                             |

Autoscaling

To enable autoscaling for a Node Pool, set autoscaling: true. The autoscaler will then adjust targetNodes based on workload demand, keeping it between minNodes and maxNodes. See Autoscale Node Pools for details.
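
For instance, an autoscaled variant of the earlier manifest might look like the following sketch. The name and bounds are illustrative; the autoscaler keeps targetNodes between minNodes and maxNodes:

Example
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: autoscaled-nodepool
spec:
  computeClass: default
  instanceType: gd-8xh100ib-i128
  autoscaling: true
  minNodes: 1
  maxNodes: 4
  targetNodes: 1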

Apply the manifest

Once you have configured the manifest, apply it with kubectl:

Example
$ kubectl apply -f example-nodepool.yaml

When the manifest is applied, CKS provisions the cluster with a Node Pool composed of Nodes that match the manifest's specifications.
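
Optionally, you can ask the API server to validate a manifest without persisting it by using kubectl's server-side dry run before applying for real:

Example
$ kubectl apply -f example-nodepool.yaml --dry-run=server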

Verify the Node Pool

Verify that the Node Pool resource has been created properly by using kubectl get on the nodepool resource. For example:

Example command
$ kubectl get nodepool example-nodepool

The output displays details about the Node Pool, including the type and number of instances it contains:

Example output
NAME               INSTANCE TYPE      TARGETNODES   ALLOCATEDNODES   CURRENTNODES   ALLOCATED   ALLOCATEDREASON   ACCEPTED   AGE
example-nodepool   gd-8xh100ib-i128   2                                                                           Accepted   24h
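
To pull individual status fields instead of the full table, jsonpath queries work against the NodePool object. The field paths below assume the camelCase forms of the Status fields shown in the kubectl describe output later on this page:

Example
$ kubectl get nodepool example-nodepool -o jsonpath='{.status.currentNodes}'
$ kubectl get nodepool example-nodepool -o jsonpath='{.status.conditions[?(@.type=="Accepted")].status}'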

List all available Node Pools

To view all available Node Pools in a cluster, use kubectl get nodepool. This returns a list of all current Node Pools in the cluster, as well as their current condition. For example:

Example command
$ kubectl get nodepool
Example output
NAME               INSTANCE TYPE      TARGETNODES   ALLOCATEDNODES   CURRENTNODES   ALLOCATED   ALLOCATEDREASON   ACCEPTED   AGE
example-nodepool   gd-8xh100ib-i128   2                                                                           Accepted   24h
nodepool-2         cd-hp-a96-genoa    2             2                2              True        Complete          Accepted   2d22h
nodepool-3         gd-8xh100ib-i128   3             3                3              True        Complete          Accepted   2d15h
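
To follow changes as Node Pools provision, add kubectl's watch flag:

Example
$ kubectl get nodepool -w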

View the Node Pool

To see additional details on any Node Pool, target the Node Pool with kubectl describe.

For example, where the Node Pool's metadata.name is example-nodepool:

Example command
$ kubectl describe nodepool example-nodepool
Example output
Name:         example-nodepool
Namespace:
Labels:       <none>
Annotations:  compute.coreweave.com/initialized: 2025-08-22T22:20:23Z
              nodepools.compute.coreweave.com/instance-type: valid
API Version:  compute.coreweave.com/v1alpha1
Kind:         NodePool
Metadata:
  Creation Timestamp:  2025-08-26T22:20:23Z
  Finalizers:
    compute.coreweave.com/nodepool-finalizer
  Generation:        2
  Resource Version:  23543
  UID:               32c18466-08a3-44c8-920f-2216f43c523a
Spec:
  Autoscaling:    false
  Compute Class:  default
  Instance Type:  gd-1xgh200
  Lifecycle:
    Disable Unhealthy Node Eviction:  false
    Scale Down Strategy:              IdleOnly
  Max Nodes:     0
  Min Nodes:     0
  Target Nodes:  1
Status:
  Allocated Nodes:  1
  Conditions:
    Last Transition Time:  2025-08-26T22:20:25Z
    Message:               successfully validated
    Reason:                Accepted
    Status:                True
    Type:                  Accepted
    Last Transition Time:  2025-08-26T22:20:25Z
    Message:               successfully validated
    Reason:                Valid
    Status:                True
    Type:                  Validated
    Last Transition Time:  2025-08-26T22:34:51Z
    Message:               current node count 1 equals target node count 1
    Reason:                Complete
    Status:                True
    Type:                  Allocated
    Last Transition Time:  2025-08-26T22:34:51Z
    Message:               current node count 1 equals target node count 1
    Reason:                TargetMet
    Status:                True
    Type:                  AtTarget
    Last Transition Time:  2025-08-26T22:20:26Z
    Message:               capacity available to meet requested targetNodes
    Reason:                Sufficient
    Status:                True
    Type:                  SufficientCapacity
    Last Transition Time:  2025-08-26T22:20:26Z
    Message:               capacity available to meet requested targetNodes
    Reason:                Sufficient
    Status:                True
    Type:                  Capacity
    Last Transition Time:  2025-08-26T22:20:26Z
    Message:               nodePool is under quota for instance type gd-1xgh200 in zone US-EAST-04A
    Reason:                Under
    Status:                True
    Type:                  Quota
  Current Nodes:  1
  Node Profile:   tnt-cw9a2f-docs-work-31c18466-08a3-44c8-920f-2116f23c5f3a
Events:  <none>
Info

For more information on Node Pool conditions, see Node Pool Reference: conditions.

The fields shown in Node Pool status output are described below:

| Field name    | Description                                                          |
| ------------- | --------------------------------------------------------------------- |
| INSTANCE TYPE | The instance type of all Nodes in the Node Pool.                       |
| TARGETNODES   | The ideal number of Nodes in the Node Pool.                            |
| INPROGRESS    | The count of Nodes progressing into the Node Pool.                     |
| CURRENTNODES  | The count of Nodes in-cluster associated with the Node Pool.           |
| VALIDATED     | Displays the result of nodePool validation.                            |
| CAPACITY      | Displays the last result for capacity checks for the instance type.    |
| QUOTA         | Displays the last result for quota checks for the instance type.       |
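
Because these conditions are standard status.conditions entries, kubectl wait can block until a Node Pool reaches a given condition, which is useful in scripts. The timeout value below is illustrative:

Example
$ kubectl wait nodepool/example-nodepool --for=condition=AtTarget --timeout=60m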
Learn more

For more information on Node Pool creation, see the Node Pool reference page.