
Create a Node Pool

Create a Node Pool to manage Nodes as a single entity

To add Nodes to CKS clusters, you must first create and deploy a Node Pool to associate with a given cluster.

Node Pools are deployed as Kubernetes Custom Resources (CRs), which allocate the number, type, and regional placement of Nodes for use by a specified CKS cluster.

Node Pools can be deployed either directly using Kubernetes, or on the Cloud Console using the Node Pools dashboard.
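Because Node Pools are CRs, you can confirm that the NodePool resource type is available to kubectl before creating one. The CRD name below is inferred from the NodePool API group used later on this page (compute.coreweave.com); treat it as an assumption rather than a guaranteed name:

Example
$ kubectl api-resources | grep -i nodepool
$ kubectl get crd nodepools.compute.coreweave.com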

Important

CoreWeave Kubernetes Service Self-Service capabilities are currently only available in US-EAST-04.

Prerequisites

  • An active CoreWeave account
  • kubectl installed locally
  • An active API Access Token, with an associated Kubeconfig
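Before creating a Node Pool from the command line, it can help to confirm that kubectl is using the Kubeconfig associated with your API Access Token and can reach the cluster. The file path below is only a placeholder:

Example
$ export KUBECONFIG="$HOME/.kube/cks-config"
$ kubectl config current-context
$ kubectl get nodes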

Create a Node Pool using the Cloud Console

Log in to your CoreWeave account. Then, navigate to the Node Pools link from the left-hand navigation.

From the drop-down menu at the top of the page, select the CKS cluster to which the Node Pool should be added.

Next, click the Create Node Pool button at the top right corner of the dashboard.

The creation page features a YAML editor on the right-hand side, with a corresponding GUI on the left-hand side.

Configure the essential aspects of the Node Pool as desired:

Field | Type | Description
Name | String | The name of the Node Pool
Cluster | String | The cluster to which the Node Pool will be added
Target Nodes | Integer | The quantity of desired Nodes in the Node Pool (minimum: 0)
Instance Type | String | The desired instance type for the Node Pool

Node Pool quotas

When a Node Pool is created, the organization's quota is checked to confirm that enough quota is available for the requested Nodes. If it is not, Node Pool creation fails. Error messages are displayed in the output of kubectl describe on the nodepool resource:

Example
Type     Reason          Age                  From              Message
----     ------          ----                 ----              -------
Warning  NPOCWOverQuota  2m6s (x21 over 47m)  nodePoolOperator  NodePool targetNodes pushes you over quota for <INSTANCE TYPE> in <REGION>
Info

If the value of targetNodes exceeds the quota's maximum, Node Pool creation fails. For example, if the quota maximum is 10 and targetNodes is set to 15, creation fails completely; the Node Pool is not partially provisioned. targetNodes may be set equal to the quota maximum: if the quota maximum is 10 and targetNodes is set to 10, Node Pool creation succeeds.

If the quota does not exist, the message displayed is:

Example
Quota not found for instance type {INSTANCE_TYPE} in zone {ZONE}.

If the quota does exist, but is not sufficient for the requested Node Pool resources, you may encounter the following error message:

Example
NodePool targetNodes pushes you over quota for {INSTANCE_TYPE} in {ZONE}.
Reason | Message | Description
NodePoolQuotaCheckFailed | Quota not found for instance type {INSTANCE_TYPE} in zone {ZONE} | A quota does not exist. Please contact support.
NPOCWOverQuota | NodePool targetNodes pushes you over quota for {INSTANCE_TYPE} in {ZONE} | The quota exists, but is insufficient for the Node Pool request.
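Quota warnings also surface as standard Kubernetes Events. As a convenience, you can filter Events by object kind instead of describing each Node Pool individually; this is ordinary kubectl behavior rather than a CKS-specific command, and the namespace flag may need adjusting for your environment:

Example
$ kubectl get events --all-namespaces --field-selector type=Warning,involvedObject.kind=NodePool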
Learn more

For more information, see the Node Pool reference page.

For more details regarding your organization's quota, please contact your CoreWeave representative.

Configure taints, labels, and annotations

At the bottom of the creation page, configure any desired taints, annotations, or labels for the Node Pool.

Info

For more information about Node taint scheduling, see Taints and Tolerations in the official Kubernetes documentation. For information about the valid syntax for labels and annotations in CKS and vanilla Kubernetes, see Labels and Selectors in the official Kubernetes documentation.
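As an illustration of how these settings interact with Pod scheduling, the sketch below shows a Pod that opts in to Nodes carrying the label and taint used in the manifest example later on this page (my-label/node: "true" and a node-taint=true:NoSchedule taint). The Pod name and image are placeholders:

Example
apiVersion: v1
kind: Pod
metadata:
  name: example-workload        # placeholder name
spec:
  nodeSelector:
    my-label/node: "true"       # run only on Nodes labeled by the Node Pool
  tolerations:
    - key: node-taint           # tolerate the Node Pool's NoSchedule taint
      operator: Equal
      value: "true"
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:latest       # placeholder image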

Deploy the Node Pool

When the Node Pool is configured as desired, click the Submit button to deploy it. Click the Reset button to clear all fields.

Once you click the Submit button, you will be directed back to the Node Pools dashboard. The new Node Pool is listed in a Pending state until it has completed deployment, when its status changes to Healthy.
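You can also follow the new Node Pool's progress from a terminal by watching the nodepool resource with kubectl (assuming your Kubeconfig targets this cluster); the condition columns described later on this page update as the Nodes come online:

Example
$ kubectl get nodepool -w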

Learn more

To learn more about other Node Pool conditions, see the Node Pool reference on conditions.

Create a Node Pool using Kubernetes

Configure a Node Pool manifest

Configure a Node Pool manifest. For example:

example-nodepool.yaml
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: example-nodepool
spec:
  autoscaling: false
  instanceType: gd-8xh100-i128
  maxNodes: 0
  minNodes: 0
  targetNodes: 2
  nodeLabels:
    my-label/node: "true"
  nodeTaints:
    - key: node-taint
      value: "true"
      effect: NoSchedule

The fields in this manifest map to the following values:

Field | Type | Description | Default
instanceType | String | Instance type (GPU or CPU) | N/A
autoscaling | Boolean | Whether the Node Pool has autoscaling enabled | false (autoscaling is currently not available in CKS)
targetNodes | Integer | The quantity of desired Nodes in the Node Pool. The minimum value is 0. | N/A
minNodes | Integer | (Optional) The minimum number of Nodes that the Node Pool must have | 0
maxNodes | Integer | (Optional) The maximum number of Nodes that the Node Pool may have | 0
nodeLabels | Map | (Optional) Labels to apply to the Nodes in the Node Pool, to organize Nodes for specific purposes and Pod scheduling | N/A
nodeTaints | List | (Optional) Taints to apply to the Nodes in the Node Pool, so that only Pods with matching tolerations are scheduled on them | N/A

In the example above, the Node Pool manifest creates a Node Pool with the following characteristics:

Key | Example value | Description
name | example-nodepool | The name of the Node Pool
autoscaling | false | Autoscaling is not enabled (autoscaling is currently not available in CKS)
instanceType | gd-8xh100-i128 | The type of instance to include in the Node Pool, in this case an 8-GPU H100 instance (gd-8xh100-i128)
minNodes | 0 | The minimum number of Nodes that must be in the Node Pool; in this case, not set (0)
maxNodes | 0 | The maximum number of Nodes that may be in the Node Pool; in this case, not set (0)
targetNodes | 2 | The number of desired Nodes in the Node Pool, in this case 2
nodeLabels | my-label/node: "true" | The label to place on all Nodes within the Node Pool
nodeTaints | [{ key: "node-taint", value: "true", effect: "NoSchedule" }] | The taint to place on all Nodes in the Node Pool
Important

Autoscaling is not yet available in CKS, but it is planned for a future release. Until then, setting minNodes and maxNodes can be used for simple autoscaling.

Apply the manifest

Once you have configured the manifest, apply it using kubectl:

Example
$ kubectl apply -f example-nodepool.yaml

When the manifest is applied, CKS provisions the cluster with a Node Pool composed of Nodes that match the manifest's specifications.
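Once the Nodes have booted and joined the cluster, you can confirm that they carry the Node Pool's label. The label key below matches the nodeLabels value from the example manifest:

Example
$ kubectl get nodes -l my-label/node=true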

Important

To add additional Nodes to your cluster, you must create and deploy a new Node Pool manifest reflecting the new desired number of Nodes.
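For example, to scale the example Node Pool above from 2 to 4 Nodes, you might update targetNodes in the manifest and re-apply it. This is a sketch; the quota limits described earlier still apply:

Example
$ kubectl apply -f example-nodepool.yaml   # after changing targetNodes: 2 to targetNodes: 4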

Verify the Node Pool

Verify that the Node Pool has been created properly by using kubectl get on the nodepool resource. For example:

Example command
$ kubectl get nodepool example-nodepool

The output displays details about the Node Pool, including its name, instance type, and the number of Nodes it contains:

Example output
NAME              INSTANCE TYPE   TARGETNODES  ALLOCATEDNODES  CURRENTNODES  ALLOCATED  ALLOCATEDREASON  ACCEPTED  METADATASYNCED  AGE
example-nodepool  gd-8xh100-i128  2                                                                      Accepted                  24h
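Because the Node Pool reports standard status conditions (shown in the describe output further down this page), you can also block until allocation finishes by using kubectl wait; the timeout value here is arbitrary:

Example
$ kubectl wait nodepool/example-nodepool --for=condition=Allocated --timeout=30m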

List all available Node Pools

To view all available Node Pools in a cluster, use kubectl get nodepool. This returns a list of all Node Pools in the cluster, along with their current conditions. For example:

Example command
$ kubectl get nodepool
Example output
NAME               INSTANCE TYPE     TARGETNODES  ALLOCATEDNODES  CURRENTNODES  ALLOCATED  ALLOCATEDREASON  ACCEPTED  METADATASYNCED  AGE
cpu-control-plane  cd-hp-a96-genoa   2            2               2             True       Complete         Accepted  Synced          33h
example-nodepool   gd-8xh100-i128    2                                                                      Accepted                  24h
nodepool-2         cd-hp-a96-genoa   2            2               2             True       Complete         Accepted  Synced          2d22h
nodepool-3         gd-8xh100ib-i128  3            3               3             True       Complete         Accepted  Synced          2d15h

View the Node Pool

To see additional details on any Node Pool, target the Node Pool with kubectl describe.

For example, where the Node Pool's metadata.name is example-nodepool:

Example command
$ kubectl describe nodepool example-nodepool
Example output
Name:         example-nodepool
Namespace:
Labels:       argocd.argoproj.io/instance=NodePools-6bab75-us-shoggy-prod
Annotations:  argocd.argoproj.io/tracking-id: NodePools-6bab75-us-shoggy-prod:compute.coreweave.com/NodePool:NodePools-6bab75-us-shoggy-prod/example-nodepool
API Version:  compute.coreweave.com/v1alpha1
Kind:         NodePool
Metadata:
  Creation Timestamp:  2024-08-05T19:26:49Z
  Finalizers:
    compute.coreweave.com/nodepool-finalizer
  Generation:        2
  Resource Version:  8618161
  UID:               edce6154-2967-43ac-96ad-9928fb4149e4
Spec:
  Autoscaling:    false
  Instance Type:  gd-8xh100ib-i128
  Max Nodes:      0
  Min Nodes:      0
  Target Nodes:   3
Status:
  Conditions:
    Last Transition Time:  2024-08-15T19:38:34Z
    Message:               successfully validated
    Reason:                Accepted
    Status:                True
    Type:                  Accepted
    Last Transition Time:  2024-08-15T19:38:34Z
    Message:               current Node count 3 equals target Node count 3
    Reason:                Complete
    Status:                True
    Type:                  Allocated
    Last Transition Time:  2024-08-15T19:38:34Z
    Message:               allocatedNodes of 3 matches requested targetNodes of 3
    Reason:                Sufficient
    Status:                True
    Type:                  SufficientCapacity
    Last Transition Time:  2024-08-15T19:38:34Z
    Message:               metadata update pending
    Reason:                Synced
    Status:                True
    Type:                  MetadataSynced
  Current Nodes:  3
  Node Profile:   tnt-6bab
Info

For more information on Node Pool conditions, see Node Pool Reference: conditions.
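To check a single condition without reading the full describe output, you can query the status with JSONPath; this relies only on the standard conditions structure shown above:

Example
$ kubectl get nodepool example-nodepool -o jsonpath='{.status.conditions[?(@.type=="Allocated")].status}'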

Field name | Description
INSTANCE TYPE | The instance type of all Nodes in the Node Pool.
TARGETNODES | The desired number of Nodes in the Node Pool.
ALLOCATEDNODES | The number of Nodes that have been allocated for the Node Pool. This may differ from the number of Nodes currently in the Node Pool; once allocated, Nodes need time to boot up and appear in the cluster.
CURRENTNODES | The number of Nodes that have been allocated and are currently available in the Node Pool.
ALLOCATED | Whether the Nodes have been allocated for the Node Pool. If the value is False, they have not yet been allocated.
ALLOCATEDREASON | The reason for the current state of Node allocation. Complete means all Nodes have been allocated; a value like UnderTarget may indicate that fewer Nodes than the specified target have appeared in the cluster.
ACCEPTED | Whether the Node Pool's configuration is valid.
METADATASYNCED | Whether all Nodes in the Node Pool have valid metadata that is synced with the most up-to-date information on the Node Pool.
Learn more

For more information on Node Pool creation, see the Node Pool reference page.