
Create a cluster using the Cloud Console

To create a new CKS cluster using the Cloud Console, open the Cluster Dashboard. From here, you can create new CKS clusters, or view and manage deployed ones. If you do not yet have any clusters, this dashboard is empty. To begin creating a cluster, click the Create Cluster button.

1. Setup

First, give your cluster a name. It’s best to reflect location in the name so clusters stay organized at scale. Here are some guidelines:
  • Keep names short and put the location first so that they group together naturally in reports.
  • Only use lowercase letters, numbers, and hyphens to keep names friendly for URLs and automation.
  • Avoid mutable details like Kubernetes version, Node Pool sizes, or temporary attributes that may change.
Here are some example names to use as a starting point, where [short_name] is a concise descriptor of an environment, lifecycle, or workload:
  • use04a-[short_name], such as use04a-prod or use04a-staging
  • us-east-04a-[short_name], such as us-east-04a-prod or us-east-04a-staging
Next, select a Kubernetes version from the dropdown list of supported versions. CKS generally supports the latest three versions of Kubernetes; see Cluster Components for more information about supported versions.
Then, select whether you’d like your cluster to be able to access the Kubernetes API via the Internet.
CKS currently only supports public clusters as a self-service option. For assistance setting up a private cluster, please reach out to our Support team.
To configure a custom Audit Policy for the cluster, select the Custom Audit Policy checkbox to open the Audit Policy YAML editor.
To learn more about cluster Audit Policies, see the official Kubernetes documentation.
This window allows direct editing of the Audit Policy configuration file. Click the Save button in the bottom-right corner to save the Audit Policy and resume the cluster creation process. The Audit Policy is populated with the following settings by default, which can be changed as desired.
audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
      - group: ""
        # Resource "pods" doesn't match requests to any subresource of pods,
        # which is consistent with the RBAC policy.
        resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
      - group: ""
        resources: ["configmaps"]
        resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
      - group: "" # core API group
        resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
      - "/api*" # Wildcard matching.
      - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
      - group: "" # core API group
        resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
      - group: "" # core API group
        resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
      - group: "" # core API group
      - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

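The default policy above can also be replaced entirely. As a minimal sketch based on the upstream Kubernetes audit documentation (not a CKS-specific recommendation), the following policy records every request at the Metadata level, capturing who did what and when without request or response bodies:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# A single catch-all rule: log request metadata (user, verb, resource,
# timestamp) for every request, without request or response bodies.
rules:
  - level: Metadata
```

A broad policy like this is simpler to reason about, at the cost of higher audit log volume than the more selective default.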
2. Network

Next, configure the network settings for the cluster. On the Network configuration page, first select the Super Region where you’d like to deploy the cluster. Then, select the Zone.
Zone availability is subject to capacity.

Select a VPC

If you have already created a VPC to use with this cluster, select the same Zone where that VPC was deployed. Otherwise, you can either use a default VPC, created for you during this process, or create a new custom VPC. Default VPCs can still be customized from this page by clicking the Customize option beside the Create a default VPC radio button. This opens a configuration screen, where the available VPC prefixes and Host Prefixes may be adjusted as desired. The default Host Prefix assigned to the default VPC is also shown here.
Each Zone features its own default prefixes, which are used to populate default VPCs. These can be changed. Refer to Regions to see each Zone’s default prefixes.

3. Auth

The Auth screen exposes authentication and authorization configuration options, which may be enabled or disabled by toggling them on or off on this screen. Selecting any of these options causes additional configuration fields to appear.
All settings on this screen are optional. If you do not wish to enable any of these features, you may proceed with cluster creation by clicking the Next button, without selecting any options in this step.

Add an authentication webhook

Selecting the Add an authentication webhook checkbox causes the Server and Certificate Authority fields to appear. If this option is selected, you must provide a URL in the Server input field, and may optionally include a Certificate Authority.
To learn more about Webhook authentication in Kubernetes, see the official Kubernetes documentation.
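The webhook behind the Server URL must implement the Kubernetes TokenReview API: the API server POSTs a TokenReview object containing the bearer token, and the webhook replies with a verdict. As a sketch following the upstream API shape (the username, uid, and groups are illustrative placeholders), a successful response looks like:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
status:
  # Set to true when the token is valid; false rejects the request.
  authenticated: true
  user:
    # Illustrative identity returned for the validated token.
    username: jane@example.com
    uid: "42"
    groups:
      - developers
```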

Add an authorization webhook

Selecting the Add an authorization webhook checkbox causes the Server and Certificate Authority fields to appear. If this option is selected, you must provide a URL in the Server input field.
To learn more about Webhook authorization in Kubernetes, see the official Kubernetes documentation.
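The webhook behind this Server URL receives a SubjectAccessReview describing the user, verb, and resource of each request, and replies with whether the request is permitted. A sketch of a permissive response, following the upstream API shape:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
status:
  # allowed: true grants the request. allowed: false alone defers to any
  # other configured authorizers; adding denied: true rejects it outright.
  allowed: true
```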

Enable OIDC

To enable OIDC, the Issuer URL and Client ID fields are required. All other fields are optional.
To learn more about OIDC for Kubernetes, see the official Kubernetes documentation.
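Once OIDC is enabled, clients must present ID tokens issued by your provider. One common approach is the kubelogin (kubectl oidc-login) plugin; the following kubeconfig user entry is a sketch that assumes that plugin is installed, with placeholder issuer and client values:

```yaml
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          # Placeholder values; substitute the Issuer URL and
          # Client ID configured for this cluster.
          - --oidc-issuer-url=https://issuer.example.com
          - --oidc-client-id=my-client-id
```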

Certificate Authority

Each of the authentication and authorization options provides an optional Certificate Authority checkbox. Clicking this box opens a YAML editor, which can be used to input a Certificate Authority X.509 certificate.

4. Deploy

The final step of cluster creation provides an overview of all options selected during the creation process. After reviewing and confirming the cluster’s configuration, click the Submit button to deploy the new cluster. Your cluster will appear on the cluster dashboard with the status Creating. When a cluster is ready, its status changes to Healthy. If there are configuration or deployment issues, the cluster’s status changes to Unhealthy.

View details of deployed clusters

To view more information about a deployed cluster, click the vertical ellipsis menu beside the cluster name and select View Details. This opens the cluster’s current configuration in JSON, along with information about the cluster’s age, location, name, associated API endpoint, and current state. To return to the dashboard, close this panel.

Cluster statuses

The cards at the top of the cluster dashboard provide information about the status of your current clusters.
The cluster’s status refers to the status of the cluster’s Control Plane. A CKS cluster with a Healthy status may in fact still be provisioning in-cluster resources, such as required Control Plane Node Pools, or required applications such as CNI and DNS configurations. See Cluster Components for more information.
  • Quota: Displays the number of CKS clusters your organization has deployed against the maximum number it is allowed to create, as defined by the organization’s quota. Represented as count/quota. If you have not yet created any clusters, or you have no quota assigned, the status presented is No Quota.
  • Healthy: Displays the number of healthy clusters deployed. In a healthy cluster, all Control Plane elements, servers, and Pods are in a Healthy state. The cluster is stable and responsive, and can manage workloads.
  • Unhealthy: Displays the number of unhealthy clusters. A cluster can become Unhealthy for many reasons, including Control Plane issues, unresponsive Nodes, failing Pods, network failures, or storage problems.

Do not install the NVIDIA GPU Operator on CKS clusters

CoreWeave manages the NVIDIA GPU Operator on your behalf. Do not install the NVIDIA GPU Operator on CKS clusters. Installing it yourself conflicts with the platform-managed deployment, and is not supported.
Last modified on April 20, 2026