Create a CKS Cluster

Create and access CKS clusters

Create a cluster using the Cloud Console

To create a new cluster using the Cloud Console, open the Cloud Console dashboard.

The cluster dashboard is also the home page of the Cloud Console, so it is the first dashboard shown. From here, you can create new clusters, or view and manage deployed ones.

If you do not yet have any clusters, this dashboard will be empty. To begin creating a cluster, click the Create Cluster button to open the cluster creation form.

1. Setup

In the Setup step, name your new cluster and select a Kubernetes version from a dropdown list of supported versions.

The page also includes optional checkboxes related to the control plane and cluster Audit Policy:

| Value | Type | Description |
| --- | --- | --- |
| Name | String | The name of the cluster. Cluster names must begin and end with a letter or number. For example, foo123. |
| Kubernetes Version | Dropdown | The cluster's Kubernetes version. All supported versions of Kubernetes are listed in this dropdown menu. |
| Public | Checkbox (Optional) | Select this box to expose the cluster's control plane to the public Internet. |
| Audit Policy | Checkbox (Optional) | Select this box to apply a Kubernetes audit policy to the cluster. |

Checking the Audit Policy box will cause the Audit Policy YAML editor to open in the foreground.

Learn more

To learn more about cluster Audit Policies, see the official Kubernetes documentation.

This window allows direct editing of the YAML file. Click the Save button in the bottom right corner to save your Audit Policy and resume the cluster creation process.

The Audit Policy YAML is populated with the following settings by default, which can be changed as desired.

Default cluster audit policy (audit-policy.yaml):

```yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```

2. Region

The second step of cluster creation allows you to select a Super Region for your cluster.

Select a Super Region by clicking on the corresponding box.

After selecting a Super Region, the Zone dropdown displays all usable Availability Zones within the corresponding General Access Regions. Availability of these Zones is subject to capacity.

| Value | Type | Description |
| --- | --- | --- |
| Region | Dropdown | The Super Region in which to deploy the cluster. |
| Zone | Dropdown | The Zone in which to deploy the cluster. Zones labeled Dedicated Access are reserved for select customers. Zones without a label are classified as General Access. |

3. Auth

The Auth screen exposes authentication and authorization configuration options, which may be enabled or disabled by toggling them on or off. Selecting any of these options causes additional configuration fields to appear.

Note

All settings on this screen are optional. If you do not wish to enable any of these features, you may proceed with cluster creation by clicking the Next button, without selecting any options in this step.

Add an authentication webhook

Selecting the Add an authentication webhook checkbox causes the Server and Certificate Authority fields to appear. If this option is selected, you must provide a URL in the Server input field, and may optionally include a Certificate Authority.

Learn more

To learn more about Webhook authentication in Kubernetes, see the official Kubernetes documentation.
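The Server and Certificate Authority values correspond to the kubeconfig-format webhook configuration that upstream Kubernetes reads via the API server's --authentication-token-webhook-config-file flag. The following is a minimal sketch of that format, with a placeholder URL and file paths; CKS applies the values you enter on your behalf.

```yaml
# Minimal sketch of an upstream Kubernetes webhook authentication config
# (kubeconfig format). The URL and paths are placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: authn-webhook
    cluster:
      server: https://authn.example.com/authenticate   # value supplied in the Server field (required)
      certificate-authority: /etc/authn/webhook-ca.crt # Certificate Authority (optional)
users:
  - name: kube-apiserver
    user: {}   # client credentials for the webhook could be configured here
contexts:
  - name: webhook
    context:
      cluster: authn-webhook
      user: kube-apiserver
current-context: webhook
```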

Add an authorization webhook

Selecting the Add an authorization webhook checkbox causes the Server and Certificate Authority fields to appear. If this option is selected, you must provide a URL in the Server input field.

Learn more

To learn more about Webhook authorization in Kubernetes, see the official Kubernetes documentation.
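The URL supplied in the Server field must point to a service that implements the Kubernetes authorization webhook contract: the API server sends a SubjectAccessReview describing the request, and the webhook replies by setting status.allowed. A minimal sketch of that exchange, with placeholder user and resource values:

```yaml
# Illustrative SubjectAccessReview sent to the authorization webhook.
# User and resource values are placeholders.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com
  resourceAttributes:
    namespace: default
    verb: get
    resource: pods
# The webhook responds with the same object, filling in status:
status:
  allowed: true
  reason: "user has read access to the namespace"
```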

Enable OIDC

To enable OIDC, the Issuer URL and Client ID fields are required. All other fields are optional.

Learn more

To learn more about OIDC for Kubernetes, see the official Kubernetes documentation.
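The Issuer URL and Client ID fields correspond to the standard Kubernetes OIDC settings (upstream, the kube-apiserver --oidc-issuer-url and --oidc-client-id flags). The sketch below shows typical values; the key names are descriptive labels for this example only, and the URL and IDs are placeholders for your identity provider.

```yaml
# Typical OIDC values (illustrative only; not a CKS API schema).
issuerURL: https://login.example.com/realms/main  # Required. Must serve OIDC discovery over HTTPS.
clientID: cks-cluster                             # Required. The audience ("aud") your IdP places in issued ID tokens.
usernameClaim: email                              # Optional. Token claim used as the Kubernetes username.
groupsClaim: groups                               # Optional. Token claim mapped to Kubernetes group names.
```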

Certificate Authority

Each of the authentication and authorization options provides an optional Certificate Authority checkbox. Clicking this box opens a YAML editor, which can be used to input a Certificate Authority X.509 certificate.
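The value entered is a PEM-encoded X.509 certificate. The skeleton below shows only the expected shape; the body is elided rather than real certificate data.

```
-----BEGIN CERTIFICATE-----
<base64-encoded certificate data>
-----END CERTIFICATE-----
```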

4. Network

The Network step provides a VPC dropdown menu, which lists the VPCs within the Region specified in Step 2: Region. VPCs that are not usable also appear in the dropdown list, but cannot be selected.

Selecting a VPC causes the VPC Prefixes menu to appear. Use these options to map network prefixes to the cluster's internal and external load balancers.

| Field | Description | Multiple or single input |
| --- | --- | --- |
| Internal LB CIDR | The CIDR range for the internal Load Balancer that balances traffic among instances within the VPC. | Multiple |
| Pod CIDR | The CIDR range that defines the IP address space for Pods within the cluster. | Single |
| Service CIDR | The CIDR range that defines the IP address space for Services within the cluster. | Single |
Note

Each prefix can be assigned to only one field at a time. Selecting a prefix in an additional field deselects it from the previously selected field.
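For orientation, the sketch below shows one non-overlapping way the three prefix types might be laid out. The key names are labels invented for this example, and the CIDR values are placeholders rather than defaults; the prefixes actually offered depend on your VPC.

```yaml
# Hypothetical, non-overlapping prefix assignments (illustrative only).
internalLBCIDRs:                 # Internal LB CIDR: multiple prefixes allowed
  - 10.16.192.0/24
  - 10.16.193.0/24
podCIDR: 10.244.0.0/16           # Pod CIDR: single prefix for Pod IP address space
serviceCIDR: 10.96.0.0/16        # Service CIDR: single prefix for Service IP address space
```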

5. Deploy

The final step of cluster creation provides an overview of all options selected during the creation process. After reviewing these options, click the Submit button to deploy the new cluster.

Your cluster will appear on the cluster dashboard with the status Creating.

When a cluster is ready, its status changes to Healthy. If there are configuration or deployment issues, the cluster's status changes to Unhealthy.

View deployed cluster details

To view more information about a deployed cluster, click the vertical ellipsis menu beside the cluster name. This menu provides additional actions to perform on the cluster, such as the View Details option. Clicking this displays the cluster's current manifest, as well as its age, location, name, and state. To return to the dashboard, close this panel.

Cluster statuses

The cards at the top of the cluster dashboard provide information about the status of your current clusters:

| Name | Description |
| --- | --- |
| Quota | Displays the number of CKS clusters your organization has deployed, over the maximum number of clusters it is allowed to create as defined by the organization's quota. Represented as count/quota. If you have not yet created any clusters, or you have no quota assigned, the status presented is No Quota. |
| Healthy | Displays the number of healthy clusters deployed. In a healthy cluster, all control plane elements, servers, and Pods are in a Healthy state. The cluster is stable and responsive, and can manage workloads. |
| Unhealthy | Displays the number of unhealthy clusters. A cluster can become Unhealthy for many reasons, including control plane issues, unresponsive Nodes, failing Pods, network failures, or storage problems. |