
Create a CKS Cluster

Create and access CKS clusters

Create a cluster using the Cloud Console

To create a new CKS cluster using the Cloud Console, open the Cloud Console dashboard. The cluster dashboard is also the homepage of the Cloud Console, so it is the first dashboard you'll see. From here, you can create new CKS clusters, or view and manage deployed ones.

If you do not yet have any clusters, this dashboard will be empty.

To begin creating a cluster, click the Create Cluster button.

1. Setup

In the first step, Setup, give your cluster a name, then select a Kubernetes version from a dropdown list of supported versions.

Info

CKS generally supports the latest three versions of Kubernetes.

Next, select whether the cluster's Kubernetes API should be accessible via the Internet.

Warning

CKS currently only supports public clusters as a self-service option. For assistance setting up a private cluster, please reach out to our Support team.

To configure a custom Audit Policy for the cluster, select the Custom Audit Policy checkbox to open the Audit Policy YAML editor.

Learn more

To learn more about cluster Audit Policies, see the official Kubernetes documentation.

This window allows direct editing of the Audit Policy configuration file. Click the Save button in the bottom right corner to save the Audit Policy and resume the cluster creation process.

The Audit Policy is populated with the following settings by default, which can be changed as desired. Rules are evaluated in order: the first rule that matches a request determines its audit level.

Default cluster audit policy:

```yaml
# audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```

2. Network

Next, configure the network settings for the cluster. On the Network configuration page, first select the Super Region where you'd like to deploy the cluster. Then, select the Zone.

Info

Zone availability is subject to capacity.

Select a VPC

If you have already created a VPC to use with this cluster, select the same Zone where that VPC was deployed. Otherwise, you can either have a default VPC created for you during this process, or create a new custom VPC.

Default VPCs can still be customized from this page by clicking the Customize option beside the Create a default VPC radio button.

This opens a configuration screen, where the available VPC prefixes and host prefixes may be adjusted as desired.

Info

Each Zone features its own default prefixes, which are used to populate default VPCs. These can be changed. Refer to Regions to see each Zone's default prefixes.
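
For orientation, VPC and host prefixes are expressed in CIDR notation. A hypothetical illustration follows; these values are not any Zone's defaults, and the field names are descriptive only, not CKS's configuration schema:

```yaml
# Hypothetical prefix configuration -- illustrative values only.
vpcPrefixes:
  - 10.16.0.0/16   # address space the VPC can allocate from
hostPrefix: 24     # each host receives a /24 subnet carved from the VPC prefix
```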

3. Auth

The Auth screen exposes authentication and authorization configuration options, which may be enabled or disabled by toggling them on or off. Selecting any of these options causes additional configuration fields to appear.

Note

All settings on this screen are optional. If you do not wish to enable any of these features, you may proceed with cluster creation by clicking the Next button, without selecting any options in this step.

Add an authentication webhook

Selecting the Add an authentication webhook checkbox causes the Server and Certificate Authority fields to appear. You must provide a URL in the Server input field, and may optionally include a Certificate Authority.

Learn more

To learn more about Webhook authentication in Kubernetes, see the official Kubernetes documentation.
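
For orientation, the Server URL points at a service that receives TokenReview objects from the Kubernetes API server and fills in an authentication decision. A minimal sketch of the exchange, with illustrative user details (the wire format is JSON; YAML is shown here for readability):

```yaml
# Request the API server sends to the webhook's Server URL:
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<opaque bearer token>"
---
# Response the webhook returns when the token is valid:
apiVersion: authentication.k8s.io/v1
kind: TokenReview
status:
  authenticated: true
  user:
    username: jane@example.com
    uid: "42"
    groups: ["developers"]
```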

Add an authorization webhook

Selecting the Add an authorization webhook checkbox causes the Server and Certificate Authority fields to appear. You must provide a URL in the Server input field.

Learn more

To learn more about Webhook authorization in Kubernetes, see the official Kubernetes documentation.
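
Similarly, an authorization webhook receives SubjectAccessReview objects describing the request under evaluation and answers with an allow or deny decision. A minimal sketch with illustrative values (again, the wire format is JSON):

```yaml
# Query the API server sends to the webhook's Server URL:
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com
  groups: ["developers"]
  resourceAttributes:
    namespace: default
    verb: list
    resource: pods
---
# Response the webhook returns to permit the request:
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
status:
  allowed: true
```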

Enable OIDC

To enable OIDC, the Issuer URL and Client ID fields are required. All other fields are optional.

Learn more

To learn more about OIDC for Kubernetes, see the official Kubernetes documentation.
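
As a hypothetical illustration of the kind of values these fields expect (the field names below are descriptive, not the console's exact labels):

```yaml
# Hypothetical OIDC values -- substitute your identity provider's details.
issuerURL: https://accounts.example.com   # must serve OIDC discovery under /.well-known/openid-configuration
clientID: my-cks-cluster                  # client ID registered with the provider for this cluster
# Commonly available optional settings:
usernameClaim: email                      # JWT claim used as the Kubernetes username
groupsClaim: groups                       # JWT claim listing the user's groups
```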

Certificate Authority

Each of the authentication and authorization options provides an optional Certificate Authority checkbox. Selecting this checkbox opens a YAML editor, where you can paste a Certificate Authority X.509 certificate.
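
The certificate is expected in PEM encoding. A sketch of the shape only; the body below is elided, not a real certificate:

```
-----BEGIN CERTIFICATE-----
MIIDxT...base64-encoded certificate data...
-----END CERTIFICATE-----
```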

4. Deploy

The final step of cluster creation provides an overview of all options selected during the creation process. After reviewing and confirming the cluster's configuration, click the Submit button to deploy the new cluster.

Your cluster will appear on the cluster dashboard with the status Creating. When a cluster is ready, its status changes to Healthy. If there are configuration or deployment issues, the cluster's status changes to Unhealthy.

View details of deployed clusters

To view more information about a deployed cluster, click the vertical ellipsis menu beside the cluster name and select View Details. This opens the cluster's current configuration in JSON, along with information about the cluster's age, location, name, associated API endpoint, and current state. To return to the dashboard, close this panel.
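
Once a cluster is Healthy, you can verify access from your own machine with standard kubectl commands. A minimal sketch, assuming you have downloaded the cluster's kubeconfig (the path below is hypothetical):

```bash
# Point kubectl at the downloaded kubeconfig (hypothetical path).
export KUBECONFIG="$HOME/.kube/my-cks-cluster.yaml"

# Confirm the API endpoint responds and list the cluster's Nodes.
kubectl cluster-info
kubectl get nodes
```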

Cluster statuses

The cards at the top of the cluster dashboard provide information about the status of your current clusters.

Important

The cluster's status refers to the status of the cluster's control plane. A CKS cluster with a Healthy status may in fact still be provisioning in-cluster resources, such as required control plane Node Pools, or required applications such as CNI and DNS configurations. See Cluster Components for more information.

| Name | Description |
| --- | --- |
| Quota | Displays the number of CKS clusters your organization has deployed, out of the maximum number of clusters it is allowed to create as defined by the organization's quota. Represented as count/quota. If you have not yet created any clusters, or you have no quota assigned, the status presented is No Quota. |
| Healthy | Displays the number of healthy clusters deployed. In a healthy cluster, all control plane elements, servers, and Pods are in a Healthy state. The cluster is stable and responsive, and can manage workloads. |
| Unhealthy | Displays the number of unhealthy clusters. A cluster can become Unhealthy for many reasons, including control plane issues, unresponsive Nodes, failing Pods, network failures, or storage problems. |
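
Because a Healthy status reflects only the control plane, it can be useful to watch in-cluster components, such as CNI and DNS Pods, finish provisioning. A quick check with kubectl, assuming cluster access is already configured:

```bash
# System components run in the kube-system namespace; wait for them to reach Running.
kubectl get pods --namespace kube-system
```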