Create a CKS Cluster
Create and access CKS clusters
Create a cluster using the Cloud Console
To create a new cluster using the Cloud Console, open the Cloud Console dashboard.
The cluster dashboard is also the home page of the Cloud Console, so it is the first dashboard shown. From here, you can create new clusters, or view and manage deployed ones.
If you do not yet have any clusters, this dashboard will be empty. To begin creating a cluster, click the Create Cluster button to open the cluster creation form.
1. Setup
In the Setup step, name your new cluster and select a Kubernetes version from a dropdown list of supported versions.
The page also includes optional checkboxes related to the control plane and cluster Audit Policy:
Value | Type | Description |
---|---|---|
Name | String | The name of the cluster. Cluster names must begin and end with a letter or number. For example, foo123. |
Kubernetes Version | Dropdown | The cluster's Kubernetes version. All supported versions of Kubernetes are listed in this dropdown menu. |
Public | Checkbox (Optional) | Select this box to expose the cluster's control plane to the public Internet |
Audit Policy | Checkbox (Optional) | Select this box to apply a Kubernetes audit policy to the cluster |
Checking the Audit Policy box will cause the Audit Policy YAML editor to open in the foreground.
To learn more about cluster Audit Policies, see the official Kubernetes documentation.
This window allows direct editing of the YAML file. Click the Save button in the bottom right corner to save your Audit Policy and resume the cluster creation process.
The Audit Policy YAML is populated with the following settings by default, which can be changed as desired.
Default cluster audit policy:

```yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```
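If the default policy captures more than you need, you can replace it with a simpler one in the editor. The following is a minimal, illustrative sketch (not a CKS-specific recommendation) that records only request metadata and skips the RequestReceived stage to reduce event volume:

```yaml
# Minimal illustrative audit policy: record request metadata only,
# and skip the RequestReceived stage to reduce event volume.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # A single catch-all rule: log every request at the Metadata level.
  - level: Metadata
```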
2. Region
The second step of cluster creation allows you to select a Super Region for your cluster.
Select a Super Region by clicking on the corresponding box.
After you select a Super Region, the Zone dropdown displays all usable Availability Zones within the corresponding General Access Regions. Availability of these Zones is subject to capacity.
Value | Type | Description |
---|---|---|
Region | Dropdown | The Super Region in which to deploy the cluster. |
Zone | Dropdown | The Zone in which to deploy the cluster. Zones labeled Dedicated Access are reserved for select customers. Zones without a label are classified as General Access. |
3. Auth
The Auth screen exposes authentication and authorization configuration options, which can be toggled on or off. Selecting any of these options causes additional configuration fields to appear.
All settings on this screen are optional. If you do not wish to enable any of these features, click the Next button to proceed with cluster creation without selecting any options in this step.
Add an authentication webhook
Selecting the Add an authentication webhook checkbox causes the Server and Certificate Authority fields to appear. If this option is selected, you must provide a URL in the Server input field, and may optionally include a Certificate Authority.
To learn more about Webhook authentication in Kubernetes, see the official Kubernetes documentation.
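As a rough sketch of what the endpoint behind the Server URL is expected to implement: the Kubernetes API server sends a TokenReview object containing the client's bearer token, and the webhook returns the same kind of object with the status filled in to indicate whether the token was accepted. The exchange below uses placeholder values and is shown as YAML for readability; the actual exchange is JSON over HTTPS.

```yaml
# Illustrative TokenReview exchange for a webhook token authenticator.
# Request body sent by the API server to the configured Server URL:
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<opaque bearer token presented by the client>"
---
# Expected response from the webhook when the token is valid:
apiVersion: authentication.k8s.io/v1
kind: TokenReview
status:
  authenticated: true
  user:
    username: jane.doe@example.com   # placeholder identity
    uid: "42"
    groups:
      - developers
```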
Add an authorization webhook
Selecting the Add an authorization webhook checkbox causes the Server and Certificate Authority fields to appear. If this option is selected, you must provide a URL in the Server input field.
To learn more about Webhook authorization in Kubernetes, see the official Kubernetes documentation.
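The authorization webhook follows the same pattern using SubjectAccessReview objects: the API server describes the user and the requested action, and the webhook answers whether the action is allowed. The sketch below uses placeholder values and is shown as YAML for readability; the actual exchange is JSON over HTTPS.

```yaml
# Illustrative SubjectAccessReview exchange for a webhook authorizer.
# Request body sent by the API server to the configured Server URL:
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane.doe@example.com        # placeholder identity
  groups: ["developers"]
  resourceAttributes:
    namespace: default
    verb: get
    resource: pods
---
# Expected response when the request should be permitted:
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
status:
  allowed: true
```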
Enable OIDC
To enable OIDC, provide values for the Issuer URL and Client ID fields, which are required. All other fields are optional.
To learn more about OIDC for Kubernetes, see the official Kubernetes documentation.
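Once OIDC is enabled on the cluster, clients still need a way to obtain ID tokens from your identity provider. One common approach (a third-party tool, not part of CKS) is the community kubelogin plugin, configured as an exec credential plugin in your kubeconfig. A hedged sketch, assuming the plugin is installed; the issuer URL and client ID are placeholders and should match the values entered in this step:

```yaml
# Sketch of a kubeconfig user entry that fetches OIDC tokens via the
# community kubelogin plugin (kubectl oidc-login). All values are placeholders.
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://idp.example.com
          - --oidc-client-id=my-cluster-client
```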
Certificate Authority
Each of the authentication and authorization options provides an optional Certificate Authority checkbox. Clicking this box opens a YAML editor, which can be used to input a Certificate Authority X.509 certificate.
4. Network
The Network step provides a VPC dropdown menu, which lists the VPCs within the Region specified in Step 2: Region. VPCs that are not usable also appear in the dropdown list, but cannot be selected.
Selecting a VPC causes the VPC Prefixes menu to appear. Use these options to map network prefixes to the cluster's internal and external load balancers.
Field | Description | Multiple or single input |
---|---|---|
Internal LB CIDR | The CIDR range to use for the internal Load Balancer used to balance traffic among instances within the VPC. | Multiple |
Pod CIDR | The CIDR range that defines the IP address space assigned to Pods within the cluster. | Single |
Service CIDR | The CIDR range that defines the IP address space for Services within the cluster. | Single |
A prefix can be selected in only one field at a time: selecting a prefix in an additional field deselects it from the previously selected field.
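Because each prefix can serve only one role, it helps to plan distinct, non-overlapping ranges before this step. The sketch below is purely illustrative: the names mirror the console fields, and the ranges are placeholders rather than defaults; substitute prefixes that exist in your selected VPC.

```yaml
# Illustrative, non-overlapping prefix plan (placeholder values only).
internalLBCIDR:
  - 10.32.0.0/24        # internal load balancer addresses (multiple allowed)
podCIDR: 10.244.0.0/16  # Pod IP address space
serviceCIDR: 10.96.0.0/12 # Service IP address space
```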
5. Deploy
The final step of cluster creation provides an overview of all options selected during the creation process. After reviewing these options, click the Submit button to deploy the new cluster.
Your cluster will appear on the cluster dashboard with the status Creating. When a cluster is ready, its status changes to Healthy. If there are configuration or deployment issues, the cluster's status changes to Unhealthy.
View deployed cluster details
To view more information about a deployed cluster, click the vertical ellipsis menu beside the cluster name. This menu provides additional actions to perform on the cluster, such as the View Details option. Clicking this displays the cluster's current manifest, as well as its age, location, name, and state. To return to the dashboard, close this panel.
Cluster statuses
The cards at the top of the cluster dashboard provide information about the status of your current clusters:
Name | Description |
---|---|
Quota | Displays the number of CKS clusters your organization has deployed out of the maximum number it is allowed to create, as defined by the organization's quota. Represented as count/quota. If you have not yet created any clusters, or you have no quota assigned, the status presented is No Quota. |
Healthy | Displays the number of healthy clusters deployed. In a healthy cluster, all control plane elements, servers, and Pods are in a Healthy state. The cluster is stable and responsive, and can manage workloads. |
Unhealthy | Displays the number of unhealthy clusters. A cluster can become Unhealthy for many reasons, including control plane issues, unresponsive Nodes, failing Pods, network failures, or storage problems. |