Deploy development environments with DevPod
Create development environments with DevPod using CKS resources
DevPod enables you to create development environments by running Pods on your existing CKS resources. This tutorial teaches you how to set up and use DevPod for both basic and Docker-enabled development environments.
In this tutorial, you will:
- Learn how DevPod integrates with CKS to create development environments
- Set up DevPod on CKS, using either:
  - Basic setup without Docker: A simple development environment without Docker, suitable for most development tasks
  - Advanced setup with Docker: A development environment with Docker support, enabling containerized development
- Access and verify your development environment using both SSH and web-based interfaces
- Understand how to manage and clean up DevPod workspaces
Before you start, you must have:
- A CKS cluster with available CPU or GPU resources
- `kubectl` installed and configured to access your CKS cluster
- A Virtual Server with access to your CKS cluster
- DevPod CLI installed on your Virtual Server
You'll use these tools and technologies:
- DevPod CLI: For creating and managing development environments
- Kubernetes Pod manifests: For defining your development environment configuration
- PyTorch container image: As the base development environment
- Docker-in-Docker (DinD): For Docker-enabled environments (advanced setup)
- VS Code: For development (either the browser-based or the desktop version)
Know before you go
Key concepts
- DevPod: A tool that creates development environments as Kubernetes Pods, allowing you to develop directly on your CKS cluster resources
- Pod manifest template: A Kubernetes Pod specification that DevPod uses to create your development environment
- Docker-in-Docker (DinD): A sidecar container that provides Docker functionality within your development environment
The DevPod CLI assumes that `kubectl` is in your PATH, and that the `kubectl` context is set to the cluster you want to use.
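A quick way to confirm both assumptions is a guarded pre-flight check like the sketch below; it prints a diagnostic instead of failing when `kubectl` is missing or unconfigured:

```shell
# Pre-flight check: confirm kubectl is on PATH and show the active context.
if command -v kubectl >/dev/null 2>&1; then
  KUBECTL_STATUS="found"
  echo "kubectl found: $(command -v kubectl)"
  kubectl config current-context 2>/dev/null || echo "no kubectl context is set"
else
  KUBECTL_STATUS="missing"
  echo "kubectl not found in PATH"
fi
echo "kubectl status: $KUBECTL_STATUS"
```

If the printed context is not your CKS cluster, switch it with `kubectl config use-context <name>` before continuing.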
Environment setup and assumptions
- SUNK environment: The examples in this tutorial assume a SUNK environment for toleration specification. If you are not using a SUNK environment, adjust the toleration specification accordingly.
- PyTorch base image: You'll use a PyTorch image as the base image, regardless of whether you're using CPU or GPU resources.
- Namespace organization: Use the `devpod` namespace to keep your DevPod environments organized; this is the default namespace for the Kubernetes provider in DevPod.
Although the Pods require elevated privileges to function properly, you do not need additional cluster-wide permissions beyond those typically granted to users. This means you can set up your development environment securely without administrative access.
1. Install DevPod CLI
Install the DevPod CLI by following the DevPod installation guide. The CLI must be installed on a Virtual Server that has access to your CKS cluster.
2. Select a setup path
- Basic setup without Docker: A simple development environment without Docker, suitable for most development tasks
- Advanced setup with Docker: A development environment with Docker support, enabling containerized development
The basic setup is recommended for most development tasks.
Basic setup without Docker
If your development environment does not require Docker, use this simplified configuration.
1. Create the configuration files
First, create a directory for your DevPod configuration files. In this tutorial, the files are stored in `~/devpod-gpu-sidecar/`. Then, create the following configuration files:
- `pod_manifest_template.yaml`: A Kubernetes Pod manifest that DevPod uses as a template to provision your development environment.
- `run_demo_gpu_cwsa.sh`: A script that creates and starts your DevPod environment.
- `.devcontainer/devcontainer.json`: A configuration file for the dev container.
Create the Pod manifest template
Create a file named `pod_manifest_template.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: devpod
spec:
  tolerations:
    - key: sunk.coreweave.com/nodes
      operator: Exists
    - key: is_cpu_compute
      operator: Exists
  containers:
    - name: devpod
      image: pytorch/pytorch:2.4.1-cuda12.4-cudnn9-runtime
      volumeMounts:
        - name: shared-tools
          mountPath: /shared
      securityContext:
        privileged: true
  volumes:
    - name: shared-tools
      emptyDir: {}
```
In this example manifest:
- The `pod_manifest_template.yaml` file defines a Kubernetes Pod manifest that DevPod uses as a template to provision your development environment.
- The Pod is created in the `devpod` namespace to keep DevPod resources logically separated from other workloads.
- The manifest specifies tolerations for both the `sunk.coreweave.com/nodes` and `is_cpu_compute` taints, allowing the Pod to be scheduled on Nodes with these taints.
- The primary container uses the `pytorch/pytorch:2.4.1-cuda12.4-cudnn9-runtime` image, mounts a shared `emptyDir` volume at `/shared`, and runs in a privileged security context to enable advanced development workflows that require elevated permissions.
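The tolerations shown here target SUNK-style taints. If your Nodes carry different taints, swap in the matching key; the stanza below is a hypothetical example (the taint key `example.com/dedicated` is a placeholder — check your Nodes with `kubectl describe node <name>`):

```yaml
# Hypothetical toleration for a non-SUNK cluster; replace the key, value,
# and effect with the taint actually applied to your Nodes.
tolerations:
  - key: example.com/dedicated
    operator: Equal
    value: devpod
    effect: NoSchedule
```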
Create the setup script
Next, create a file named `run_demo_gpu_cwsa.sh` with the following content:
```bash
#!/bin/bash

# Create namespace if it doesn't exist
kubectl create namespace devpod --dry-run=client -o yaml | kubectl apply -f -

# Configure DevPod provider
devpod provider use kubernetes
devpod provider set-options kubernetes -o POD_MANIFEST_TEMPLATE="$(pwd)/pod_manifest_template.yaml"

# Set resource requirements - adjust these based on your needs
# For CPU-only environments, remove the GPU limits line
devpod provider set-options kubernetes -o RESOURCES="requests.cpu=16,requests.memory=32Gi,limits.nvidia.com/gpu=8"
devpod provider set-options kubernetes -o DISK_SIZE="100Gi"
devpod provider set-options kubernetes -o KUBERNETES_PULL_SECRETS_ENABLED="false"
devpod provider set-options kubernetes -o STRICT_SECURITY="true"

# Start the DevPod environment
# The debug flag is optional, but recommended until the script is
# verified to be working properly
devpod up . --debug --ide openvscode
```
In this tutorial, we explicitly specify the `openvscode` IDE to ensure consistent behavior. This provides a browser-based VS Code interface, which is why you'll see `openvscode` in the IDE column when you run `devpod ls`.
Create the `.devcontainer` configuration
Create a `.devcontainer` directory, then add a `devcontainer.json` file with the following content:
```json
{
  "name": "DevPod"
}
```
2. Start the DevPod environment
Run the setup script to start your DevPod environment:
$chmod +x run_demo_gpu_cwsa.sh
$./run_demo_gpu_cwsa.sh
Leave this terminal window open while the script runs. Once the environment is up and running, you can stop the script with Ctrl+C. After you've verified that the setup works properly, remove the `--debug` flag from the script.
Monitor the Pod status to ensure the container is ready:
$kubectl get pods -n devpod -o wide
The output should look similar to:
```
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE      NOMINATED NODE   READINESS GATES
devpod-default-de-c24f8   2/2     Running   0          127m   10.0.87.91   g778056   <none>           <none>
```
If the Pod fails to start, verify the following:
- Your CKS cluster has sufficient resources
- Your `kubectl` context is set to the correct cluster
- The tolerations in the Pod manifest match your cluster configuration
- The targeted namespace exists, and you have permissions to create Pods in it
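When a Pod fails to start, its events are usually the fastest signal for scheduling problems (unmatched taints, insufficient CPU or GPU). The guarded sketch below degrades to a message when `kubectl` is unavailable; the Pod name is a placeholder — substitute yours from `kubectl get pods -n devpod`:

```shell
# Diagnose a Pod that fails to start by inspecting its events.
POD_NAME="devpod-default-de-c24f8"   # placeholder Pod name; replace with yours
if command -v kubectl >/dev/null 2>&1; then
  # Scheduling failures show up under the Events section
  kubectl -n devpod describe pod "$POD_NAME" | sed -n '/Events:/,$p'
  kubectl -n devpod get events --sort-by=.lastTimestamp | tail -n 10
else
  echo "kubectl not found in PATH; run this from a machine with cluster access"
fi
DIAG_DONE="yes"
```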
3. Access your DevPod environment
Open another terminal on your Virtual Server, then list your DevPod workspaces using `devpod ls`:
$devpod ls
The output should look similar to the following:
```
NAME               | SOURCE                                 | MACHINE | PROVIDER   | IDE        | LAST USED | AGE     | PRO
-------------------+----------------------------------------+---------+------------+------------+-----------+---------+------
devpod-gpu-sidecar | local:/home/gabrams/devpod-gpu-sidecar |         | kubernetes | openvscode | 2h6m13s   | 2h6m47s | false
```
Next, connect to your DevPod environment using `devpod ssh`:
$devpod ssh devpod-gpu-sidecar
When you run `devpod ssh`, DevPod automatically opens a browser-based VS Code IDE in your Virtual Server's browser. This happens because the workspace is configured to use `openvscode` as the IDE (as shown in the `devpod ls` output).
If the browser-based IDE doesn't open automatically:
- Ensure your Virtual Server has a browser installed and accessible
- Check that the DevPod workspace is running properly
- You can also access the IDE manually by running `devpod open` in a separate terminal
Alternatively, use the full desktop VS Code IDE, as described in "Use the full VS Code IDE" below.
4. Verify GPU access
Since PyTorch is the base image, `nvidia-smi` works on GPU systems. Use it to verify GPU access:
$nvidia-smi
If `nvidia-smi` does not work:
- Verify your cluster has GPU Nodes available
- Check that the GPU limits are set correctly in the resources configuration
- Ensure the PyTorch image includes CUDA support (this is the default)
- Verify the Pod is scheduled on a GPU Node using `kubectl get pods -n devpod -o wide`
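Because the base image ships PyTorch, you can also confirm GPU visibility from Python. The guarded sketch below assumes `python3` and `torch` are present (as they are in the PyTorch image) and prints a message otherwise:

```shell
# Check CUDA visibility from Python, complementing nvidia-smi.
if python3 -c "import torch" >/dev/null 2>&1; then
  python3 -c "import torch; print('cuda available:', torch.cuda.is_available()); print('visible devices:', torch.cuda.device_count())"
else
  echo "torch is not importable in this environment"
fi
GPU_CHECK_DONE="yes"
```

If `cuda available` is False while `nvidia-smi` works, the image's CUDA build may not match the driver on the Node.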
Advanced setup with Docker
If your development environment requires Docker, use this advanced configuration, which includes a Docker-in-Docker (DinD) sidecar container.
1. Create the Docker-enabled configuration files
Create the Pod manifest template with Docker sidecar
Create a file named `pod_manifest_template.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: devpod
spec:
  tolerations:
    - key: sunk.coreweave.com/nodes
      operator: Exists
    - key: is_cpu_compute
      operator: Exists
  containers:
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true
      args:
        - "--host=tcp://0.0.0.0:2375"
        - "--tls=false"
      volumeMounts:
        - name: docker-graph-storage
          mountPath: /var/lib/docker
        - name: devpod
          mountPath: /workspace
          subPath: dind-workspace
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
    - name: devpod
      image: pytorch/pytorch:2.4.1-cuda12.4-cudnn9-runtime
      volumeMounts:
        - name: shared-tools
          mountPath: /shared
      env:
        - name: DOCKER_HOST
          value: "tcp://localhost:2375"
      securityContext:
        privileged: true
  volumes:
    - name: docker-graph-storage
      emptyDir: {}
    - name: shared-tools
      emptyDir: {}
```
This example manifest does the following:
- The `pod_manifest_template.yaml` file defines a Kubernetes Pod manifest that DevPod uses as a template to provision your development environment.
- The Pod is created in the `devpod` namespace to keep DevPod resources logically separated from other workloads.
- The manifest specifies tolerations for both the `sunk.coreweave.com/nodes` and `is_cpu_compute` taints, allowing the Pod to be scheduled on Nodes with these taints.
- The primary container uses the `pytorch/pytorch:2.4.1-cuda12.4-cudnn9-runtime` image, mounts a shared `emptyDir` volume at `/shared`, and runs in a privileged security context to enable advanced development workflows that require elevated permissions.
- The manifest configures a Docker-in-Docker (DinD) sidecar container (the `dind` entry in `containers`), which allows the primary container to run Docker commands inside the Pod.
- The DinD container exposes the Docker daemon on `tcp://0.0.0.0:2375` without TLS, and disables Docker's default TLS certificate directory for easier local development.
- The `devpod` container sets the `DOCKER_HOST` environment variable to connect to the DinD sidecar, enabling Docker CLI usage.
- Both containers mount a shared workspace volume (`devpod`), allowing files to be shared between the DinD and primary containers.
- The manifest uses `emptyDir` volumes for both Docker storage and shared tools, providing ephemeral, fast local storage within the Pod.
- Both containers run in privileged mode, which is required for Docker-in-Docker and some advanced development workflows, but should be used with caution in production environments.
Create the setup script with Docker
Create a file named `run_demo_gpu_cwsa.sh` with the following content:
```bash
#!/bin/bash

# Create namespace if it doesn't exist
kubectl create namespace devpod --dry-run=client -o yaml | kubectl apply -f -

# Clean up any existing workspace
devpod delete devpod-gpu-sidecar

# Configure DevPod provider
devpod provider use kubernetes
devpod provider set-options kubernetes -o POD_MANIFEST_TEMPLATE="$(pwd)/pod_manifest_template.yaml"

# Set resource requirements - adjust these based on your needs
# For CPU-only environments, remove the GPU limits line
devpod provider set-options kubernetes -o RESOURCES="requests.cpu=16,requests.memory=32Gi,limits.nvidia.com/gpu=8"
devpod provider set-options kubernetes -o DISK_SIZE="100Gi"
devpod provider set-options kubernetes -o KUBERNETES_PULL_SECRETS_ENABLED="false"
devpod provider set-options kubernetes -o STRICT_SECURITY="true"

# Start the DevPod environment
# The debug flag is optional, but recommended until the script is
# verified to be working properly
devpod up . --debug --ide openvscode
```
In this tutorial, we explicitly specify the `openvscode` IDE to ensure consistent behavior. This provides a browser-based VS Code interface, which is why you'll see `openvscode` in the IDE column when you run `devpod ls`.
Create the devcontainer configuration
Create a `.devcontainer` directory, then add a `devcontainer.json` file with the following content:
```json
{
  "name": "DinD"
}
```
2. Start the DevPod environment with Docker
Run the setup script to start your DevPod environment:
$chmod +x run_demo_gpu_cwsa.sh
$./run_demo_gpu_cwsa.sh
Leave this terminal window open while the script runs. Once the environment is up and running, you can stop the script with Ctrl+C. After you've verified that the setup works properly, remove the `--debug` flag from the script.
Monitor the Pod status to ensure both containers are ready:
$kubectl get pods -n devpod -o wide
The output should look similar to the following:
```
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE      NOMINATED NODE   READINESS GATES
devpod-default-de-c24f8   2/2     Running   0          127m   10.0.87.91   g778056   <none>           <none>
```
3. Access your DevPod environment
Open another terminal on your Virtual Server, then list your DevPod workspaces using `devpod ls`:
$devpod ls
The output should look similar to:
```
NAME               | SOURCE                                 | MACHINE | PROVIDER   | IDE        | LAST USED | AGE     | PRO
-------------------+----------------------------------------+---------+------------+------------+-----------+---------+------
devpod-gpu-sidecar | local:/home/gabrams/devpod-gpu-sidecar |         | kubernetes | openvscode | 2h6m13s   | 2h6m47s | false
```
Connect to your DevPod environment using `devpod ssh`:
$devpod ssh devpod-gpu-sidecar
When you run `devpod ssh`, DevPod automatically opens a browser-based VS Code IDE in your Virtual Server's browser. This happens because the workspace is configured to use `openvscode` as the IDE (as shown in the `devpod ls` output).
If the browser-based IDE doesn't open automatically:
- Ensure your Virtual Server has a browser installed and accessible
- Check that the DevPod workspace is running properly
- You can also access the IDE manually by running `devpod open` in a separate terminal
Alternatively, use the full desktop VS Code IDE, as described in "Use the full VS Code IDE" below.
4. Install Docker CLI
Use `apt` to install the Docker CLI, which connects to the Docker sidecar. This is required to run Docker commands from the primary container.
$sudo -i
$apt update
$apt upgrade
$apt install docker.io
$exit
5. Verify Docker connection
Test that Docker is working and connected to the sidecar container by running `docker ps`:
$docker ps
You should see an empty list of containers, which indicates that Docker is connected to the sidecar, but no containers are running yet.
Test Docker by running a simple container using `docker run hello-world`:
$docker run --rm hello-world
This command downloads and runs a test container to verify that Docker can pull images and run containers through the sidecar connection.
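Once `hello-world` works, a small build exercises the full pull/build/run path through the sidecar. The sketch below is guarded so it is safe to paste; the `dind-demo` image name is arbitrary, and it assumes `DOCKER_HOST=tcp://localhost:2375` is set as in the Pod manifest above:

```shell
# Build and run a trivial image through the DinD sidecar.
workdir=$(mktemp -d)
cat > "$workdir/Dockerfile" <<'EOF'
FROM alpine:3.19
CMD ["echo", "hello from DinD"]
EOF
if command -v docker >/dev/null 2>&1; then
  docker build -t dind-demo "$workdir" || echo "build failed: is the daemon reachable?"
  docker run --rm dind-demo || echo "run failed: is the daemon reachable?"
else
  echo "docker CLI not installed; see the Docker CLI installation step above"
fi
```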
If you cannot connect to Docker from the main container:
- Verify the sidecar container is running with `kubectl get pods -n devpod -o wide`
- Check that the `DOCKER_HOST` environment variable is set correctly
- Ensure the Docker daemon is running in the sidecar container
- Check the sidecar container logs: `kubectl logs -n devpod <pod-name> -c dind`
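The `DOCKER_HOST` check can be done directly in the shell; in this setup the expected value is `tcp://localhost:2375`:

```shell
# Inspect the Docker connection settings (read-only; safe to paste).
echo "DOCKER_HOST=${DOCKER_HOST:-<unset>}"
endpoint="${DOCKER_HOST:-}"
endpoint="${endpoint#tcp://}"
echo "daemon endpoint: ${endpoint:-<unknown>}"
```

If the variable is unset, export it manually (`export DOCKER_HOST=tcp://localhost:2375`) or confirm the `env` block in the Pod manifest.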
6. Verify GPU access
Since PyTorch is the base image, `nvidia-smi` should work on GPU systems. Run `nvidia-smi` to verify GPU access:
$nvidia-smi
Install full Docker with buildx support
For a complete Docker installation including buildx support, follow the official Docker installation steps:
Update the package index:
$sudo apt-get update
Install packages to allow apt to use a repository over HTTPS:
$sudo apt-get install ca-certificates curl gnupg -y
Add Docker's official GPG key to the Apt keyring:
$sudo install -m 0755 -d /etc/apt/keyrings
$curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$sudo chmod a+r /etc/apt/keyrings/docker.gpg
Add the repository to Apt sources:
$echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Then, refresh the package index:
$sudo apt-get update
Now, install Docker Engine and its associated plugins:
$sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
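To confirm that the Engine and plugins installed correctly, each component reports its own version. A guarded sketch:

```shell
# Verify the Docker Engine, buildx, and compose plugins are installed.
if command -v docker >/dev/null 2>&1; then
  docker --version
  docker buildx version || echo "buildx plugin not found"
  docker compose version || echo "compose plugin not found"
else
  echo "docker CLI not found in PATH"
fi
VERIFY_DONE="yes"
```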
Use the full VS Code IDE
To use the full VS Code IDE for either setup path, first ensure VS Code is installed on the same local client that starts the DevPod. Then, modify your `run_demo_gpu_cwsa.sh` file to include the following changes:
```bash
#!/bin/bash

# Create namespace if it doesn't exist
kubectl create namespace devpod --dry-run=client -o yaml | kubectl apply -f -

# Clean up any existing workspace
devpod delete devpod-gpu-sidecar

# Configure DevPod provider
devpod provider use kubernetes
devpod provider set-options kubernetes -o POD_MANIFEST_TEMPLATE="$(pwd)/pod_manifest_template.yaml"
devpod provider set-options kubernetes -o RESOURCES="requests.cpu=16,requests.memory=32Gi,limits.nvidia.com/gpu=8"
devpod provider set-options kubernetes -o DISK_SIZE="100Gi"
devpod provider set-options kubernetes -o KUBERNETES_PULL_SECRETS_ENABLED="false"
devpod provider set-options kubernetes -o STRICT_SECURITY="true"

# Configure context to prevent timeout
devpod context set-options -o EXIT_AFTER_TIMEOUT=false

# Start the DevPod environment with VS Code IDE
devpod up . --debug --ide vscode
```
Run the modified script:
$./run_demo_gpu_cwsa.sh
Open VS Code, then connect to the DevPod workspace using `devpod open`:
$devpod open
Clean up
When you are finished with your DevPod environment, delete it using `devpod delete`:
$devpod delete devpod-gpu-sidecar
To clean up the namespace and all of its resources, run `kubectl delete namespace`:
$kubectl delete namespace devpod