Deploy Dragonfly Vector Database on CoreWeave Kubernetes Service (CKS)
These instructions explain how to deploy Dragonfly, an open-source vector database built for GenAI applications, on CoreWeave Kubernetes Service (CKS).
Prerequisites
Before you start, you need:
- A working CKS cluster, ideally with CPU Nodes. You can also use a GPU Node cluster, but Dragonfly has no capability that would benefit from GPUs.
You'll need the following tools on your local machine:
- kubectl, configured to access your CKS cluster
- Helm
- Git, to clone the chart repository
- A Redis-compatible client such as redis-cli, to connect to the database
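To quickly confirm the tools are installed and on your PATH, you can check their versions (the exact versions shown will differ):

$ kubectl version --client
$ helm version --short
$ git --version
$ redis-cli --version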
Step 1. Verify your system configuration
- Verify that you can access your cluster with kubectl. For example:

$ kubectl cluster-info

You should see something similar to:

Kubernetes control plane is running at...
CoreDNS is running at...
node-local-dns is running at...
- Verify your cluster has at least one CPU Node. GPU Nodes are also supported, but CPU Nodes are preferred since Dragonfly cannot leverage GPUs for any of its functionality. For example:

$ kubectl get nodes -o=custom-columns="NAME:metadata.name,CLASS:metadata.labels['node\.coreweave\.cloud\/class']"

You should see something similar to the following:

NAME      CLASS
g137a10   gpu
g5424e0   cpu
g77575e   cpu
gd926d4   gpu
Step 2. Deploy Dragonfly
- Install the Dragonfly Operator. See the Operator installation guide for more details.

$ kubectl apply -f https://raw.githubusercontent.com/dragonflydb/dragonfly-operator/main/manifests/dragonfly-operator.yaml

This installs the Dragonfly Custom Resource Definition (CRD), which is used to define Dragonfly clusters, along with the Operator that manages them. It creates a new namespace called dragonfly-operator-system for the Operator itself.

It's possible to add dragonfly-operator as a dependency to the CoreWeave chart (which you'll download in the next step), but it's preferred to install it in a separate namespace from the database. The Operator can manage multiple Dragonfly clusters in different namespaces.
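Before continuing, you can optionally confirm that the Operator came up:

$ kubectl -n dragonfly-operator-system get pods

You should see the Operator's controller Pod in the Running state (the exact Pod name will vary).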
- Clone the CoreWeave Dragonfly chart repository. It's located at https://github.com/coreweave/reference-architecture/tree/main/tooling/vector_dbs/cw-dragonfly.
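For example, you can clone the full reference-architecture repository; the chart lives in the tooling/vector_dbs/cw-dragonfly subdirectory:

$ git clone https://github.com/coreweave/reference-architecture.git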
- Edit the chart's values.yaml with your details. None of the values in values.yaml need to be changed, but you may want to adjust them for your specific use case. Keep the following principles in mind (see the sizing example after this list):

- Dragonfly allocates 80% of the memory limit.
- If the CPU limit is set, the number of I/O threads equals it.
- If the CPU limit is not set, all visible cores are used.
- If the CPU limit is not set and the proactor threads parameter is set, that parameter is used.
- Ensure you have at least 256 MiB of memory per thread.
- See CoreWeave CPU Instances for details about the number of cores and memory per Node.
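As a rough illustration of these principles, the hypothetical values.yaml snippet below sets a CPU limit of 4, which gives Dragonfly 4 I/O threads and therefore calls for at least 1 GiB of memory (4 × 256 MiB). The key names here are an assumption; check the chart's values.yaml for the structure it actually uses.

resources:
  limits:
    cpu: "4"
    memory: 4Gi   # Dragonfly will use 80% of this limit, well above the 1 GiB minimum for 4 threads
  requests:
    cpu: "4"
    memory: 4Gi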
The CoreWeave chart handles the following items:
- Provisions a secret for the database password. You can also specify one of your own via the dbPassword attribute in values.yaml, or provide an existing secret containing the password via existingDbPasswordSecretName (an example of creating such a secret follows this list).
- Sets Node affinities for the Dragonfly Pods to CPU Nodes. Pods will be scheduled onto GPU Nodes if no CPU Nodes are available.
For example:
affinity:
  # prefer running on CPU nodes, if available
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: node.coreweave.cloud/class
              operator: In
              values:
                - cpu

- Configures snapshotting to a PVC backed by VAST.
You can control the cron expression for scheduling the job, as well as the volume size, through the block shown below in values.yaml.

snapshot:
  cron: "30 7 * * *"
  enableOnMasterOnly: false
  persistentVolumeClaimSpec:
    storageClassName: shared-vast
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 2Gi

Out of the box, the chart configures Dragonfly to maintain a single snapshot file. This is because the timestamp string Dragonfly would otherwise use contains a colon (:), which is not supported out of the box on CoreWeave VAST-backed volumes. You can contact your CoreWeave account team to have special character support enabled if you prefer Dragonfly to take timestamped snapshots. Otherwise, the chart provisions a sidecar container that copies the single snapshot to a persistent volume. You can control the scheduling of the snapshot copy job via snapshotMoveCron.

Snapshots are not pruned, regardless of which mechanism you use (timestamped snapshots or scheduled snapshot copies). This means they will accumulate on your volume unless you periodically clean out unneeded ones. Alternatively, you can specify a database file name as an argument. In that case, a single snapshot is kept with that name, and the timing of its creation is governed by the cron expression.
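As an example of providing your own password, you could create a secret up front and point existingDbPasswordSecretName at it. The secret name and data key below are hypothetical; check the chart's templates for the key it expects:

$ kubectl create namespace dragonfly
$ kubectl -n dragonfly create secret generic my-dragonfly-password --from-literal=password='YOUR_PASSWORD'

Then set existingDbPasswordSecretName: my-dragonfly-password in values.yaml before installing the chart.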
Step 3. Install the chart
- Change to the chart directory:

$ cd reference-architecture/tooling/vector_dbs/cw-dragonfly
- Install the chart in a new namespace, e.g. dragonfly:

$ helm install -n dragonfly --create-namespace cw-dragonfly .
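You can confirm the release was installed:

$ helm -n dragonfly list

The cw-dragonfly release should be listed with a deployed status.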
- Check the status of the custom resource. This may take several minutes to complete, as the Operator sets up the database.

$ kubectl -n dragonfly describe dragonfly cw-dragonfly

The Status block will show the status of the database. Once everything is set up, that block should look like this:

Status:
  Phase: Ready
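You can also confirm that the Dragonfly Pods and the cw-dragonfly Service exist before moving on:

$ kubectl -n dragonfly get pods,services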
Step 4. Access the database
After the database is ready, the database service is available on port 6379. To access it, forward local ports from your machine to the service.
- Forward a local port to the database service.

$ kubectl -n dragonfly port-forward --address 0.0.0.0 service/cw-dragonfly 27017:6379
Forwarding from 0.0.0.0:27017 -> 6379
- Connect to the database at localhost:27017. You can use any Redis client, such as the Redis CLI, to connect:

$ redis-cli -h localhost -p 27017
localhost:27017> GET 1
(error) NOAUTH Authentication required.
localhost:27017> AUTH YOUR_PASSWORD
OK
localhost:27017> GET 1
(nil)

If you did not specify your own password, you can get the password by looking at the cw-dragonfly-db-password secret in the dragonfly namespace. Note the password will be base64 encoded.
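For example, assuming the secret stores the value under a data key named password (check the secret with kubectl describe if it differs), you can decode it with:

$ kubectl -n dragonfly get secret cw-dragonfly-db-password -o jsonpath='{.data.password}' | base64 -d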
Additional resources
See the Dragonfly documentation to learn more.