Deploy an open-source LLM on CKS

Learn how to deploy an open-source LLM on CKS

This guide explains how to deploy Meta's open-source Llama 3.1 8B Instruct LLM on CKS. It covers the following steps:

  • Create a cluster in CKS
  • Create a Node pool
  • Interact with clusters and Pods using kubectl
  • Deploy and interact with an LLM using Open WebUI

Before you begin

Before completing the steps in this guide, confirm that you have the following:

  • A CoreWeave account with access to the Cloud Console and available GPU quota
  • kubectl installed on your local machine
  • A Hugging Face account with an access token and approved access to the meta-llama/Llama-3.1-8B-Instruct model

Cost and security disclaimer
  • Using resources, such as compute, incurs charges. Monitor your resource usage to avoid unexpected charges.

  • CoreWeave is not responsible for the security of the Llama model provided by Hugging Face or the Open WebUI container image.

Create a CKS cluster and Node Pool

CKS clusters and Node Pools are the core infrastructure for running and managing workloads. To create a cluster and Node Pool, complete the following steps:

  1. Log in to the Cloud Console.

  2. Click Create Cluster.

  3. In the Create a Cluster dialog, give the cluster a name, select the latest Kubernetes version, and verify that the "Enable access to the Kubernetes API via the Internet" box is checked. Click Next.

  4. Select a Zone where you have GPU quota available. Verify that the "Create a default VPC" box is checked, and then click Next.

  5. Leave the authentication boxes unchecked and click Next.

  6. On the deploy page, click Submit.

  7. On the Success! dialog box, click Create a Node Pool.

  8. Verify the cluster you just created is selected, and do the following:

    • Name the Node Pool
    • Select a GPU instance type
    • Set Target Nodes to 1
    • Leave all other fields empty
    • Click Submit

Note that Node Pool creation may be delayed while the cluster itself is still provisioning, and Node Pool provisioning can then take up to 30 minutes. Once the Node Pool status is Healthy, continue to the steps below.

Generate a CoreWeave access token

Access tokens let you authenticate to your Kubernetes resources through kubectl.

To create an access token, complete the following steps:

  1. In the Cloud Console, click Tokens, and then click Create Token.
  2. Enter a name and expiration and then click Create.
  3. In the Create API Token dialog, select the cluster you just created from the "Select current-context" dropdown menu, and then click Download.
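
Optionally, you can inspect the downloaded kubeconfig to confirm it contains the context for your cluster before using it. This check assumes the file is in your Downloads folder; substitute your actual filename:

Example
kubectl config get-contexts --kubeconfig ~/Downloads/<CWKubeconfig_file_name>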

Use kubectl with your cluster

To communicate with your cluster using kubectl, complete the following steps:

  1. Set a KUBECONFIG environment variable that points to the kubeconfig file you just downloaded, for example:

    Example
    export KUBECONFIG=~/Downloads/<CWKubeconfig_file_name>
  2. Confirm you can connect to the cluster with the following command:

    Example
    kubectl cluster-info

    You should see cluster information like the following:

    Example
    Kubernetes control plane is running at https://****.k8s.us-east-02a.coreweave.com
    CoreDNS is running at https://****.k8s.us-east-02a.coreweave.com/api/v1/namespaces/kube-system/services/coredns:dns/proxy
    node-local-dns is running at https://****.k8s.us-east-02a.coreweave.com/api/v1/namespaces/kube-system/services/node-local-dns:dns/proxy
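
As an additional check, you can confirm that the Node from your Node Pool has joined the cluster. If the Node Pool is Healthy, the output should list at least one Node in the Ready state:

Example
kubectl get nodes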

Create a Hugging Face secret

For CKS to download the Llama-3.1-8B-Instruct model from Hugging Face, you need a Kubernetes Secret that holds your Hugging Face access token for authentication. To create the Secret, complete the following steps:

  1. Run the following command to create the Secret:

    Example
    kubectl create secret generic hf-token-secret --from-literal=api_token=<Hugging Face token>
    • <Hugging Face token>: This is the token Hugging Face provides you. For more information about creating a Hugging Face token, see User access tokens.
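
You can confirm the Secret exists by querying it with kubectl; this only checks for its presence and does not print the token value:

Example
kubectl get secret hf-token-secret

The example manifest in the next section is expected to reference this Secret by its name (hf-token-secret) and key (api_token) when it authenticates the model download.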

Download and apply a YAML configuration file

Kubernetes uses YAML files to configure resources. To deploy the Llama-3.1-8B-Instruct model using our example YAML file, complete the following steps (an illustrative sketch of the manifest's general shape follows these steps):

  1. Use kubectl to apply the file by running the following command:

    Caution

    Before running the command, confirm you have been granted access to the Llama-3.1-8B model. Visit the meta-llama/Llama-3.1-8B-Instruct page to verify you have access.

    Example
    kubectl apply -f https://docs.coreweave.com/examples/llm-on-cks-example.yaml
  2. Confirm Kubernetes deployed the resources by running the following command:

    Example
    kubectl get pods

    Verify all Pods are ready and running. The output should look like the following:

    Example
    NAME                                       READY   STATUS    RESTARTS   AGE
    llama-3-1-8b-deployment-77f4559f9f-wdvpj   1/1     Running   0          2m53s
    open-webui-5b464664d8-942cg                1/1     Running   0          2m53s
  3. Verify the model server started successfully by running the following command:

    Example
    kubectl logs <llama-pod-name>
    • <llama-pod-name>: The Pod name beginning with llama-* that was returned by kubectl get pods.

    • In the logs, you should see the following line: INFO: Application startup complete.
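
For reference, the example manifest broadly defines two workloads: a Deployment that serves the model on a GPU Node and an Open WebUI Deployment exposed through a LoadBalancer Service. The sketch below is a simplified reconstruction based only on the resource names visible in this guide; the container image, arguments, environment variable name, and ports are assumptions, not the contents of the actual file:

Example
# Illustrative sketch only; not the actual llm-on-cks-example.yaml.
# The image, args, env variable name, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llama-3-1-8b-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llama-3-1-8b
  template:
    metadata:
      labels:
        app: llama-3-1-8b
    spec:
      containers:
        - name: llama
          image: vllm/vllm-openai:latest   # assumed serving image
          args: ["--model", "meta-llama/Llama-3.1-8B-Instruct"]
          env:
            - name: HF_TOKEN               # assumed variable name
              valueFrom:
                secretKeyRef:
                  name: hf-token-secret
                  key: api_token
          resources:
            limits:
              nvidia.com/gpu: 1            # schedules the Pod onto the GPU Node Pool
---
apiVersion: v1
kind: Service
metadata:
  name: open-webui-svc
spec:
  type: LoadBalancer                       # provides the external IP used in the next section
  selector:
    app: open-webui                        # matches the Open WebUI Deployment (omitted here for brevity)
  ports:
    - port: 80
      targetPort: 8080                     # assumed container port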

Get Open WebUI endpoint

To get the Open WebUI HTTP endpoint, complete the following steps:

  1. Run the following command to get the external IP exposed by the service:

    Example
    kubectl get services

    You should see the service name and an external IP address and port like the following:

    Example
    NAME             TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
    open-webui-svc   LoadBalancer   10.16.1.142   198.51.100.1   80:32141/TCP   6m21s
  2. Navigate to the Open WebUI endpoint using the external IP and port from the previous command's output. For example, http://198.51.100.1:80.

You should now see the Open WebUI site:

Open WebUI site
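
If you want to verify reachability from the command line first, a simple HTTP request against the external IP should return a response from Open WebUI. The IP below is the documentation example; substitute your own:

Example
curl -I http://198.51.100.1:80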

Next steps

Congratulations! You have just deployed an LLM on CKS.