Exporting images to QCOW2
Objective: Create a QCOW2 image from a PVC hosted in our namespace.
Overview: We will spin up a shared filesystem to store a QCOW2 image, generated by a worker pod from a PVC mounted from our namespace.
Create a shared filesystem
Creating a shared filesystem gives us a destination for our worker pod to write to, as well as a volume that can be attached to a Virtual Server or Samba Pod to egress the exported QCOW2 file.
We'll deploy the following YAML with `kubectl create -f shared_data.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: shared-hdd-ord1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1000Gi
```
Note that we've created our shared filesystem in the ORD1 region. If our source disk exists in a different region, we'll want to change the shared data filesystem's storage class to match.
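Once the claim is created, it's worth confirming that it was provisioned and bound before moving on (a quick sanity check; the exact output columns vary by kubectl version):

```shell
# Confirm the shared filesystem PVC exists and is Bound
kubectl get pvc shared-data
```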
Identify source disk
Using `kubectl get pvc`, we'll identify a PVC in our namespace that we wish to export:
Note our source image exists in the ORD1 region - matching our shared data filesystem.
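The region can also be checked mechanically: storage class names carry the region as a suffix (for example, `shared-hdd-ord1`), so reading the source PVC's storage class confirms where it lives. A sketch, assuming the source PVC named `winserver2019std-clone-20210701-ord1` used in the Job below:

```shell
# Print the storage class of the source PVC; the suffix encodes the region
kubectl get pvc winserver2019std-clone-20210701-ord1 \
  -o jsonpath='{.spec.storageClassName}'
```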
Deploy worker pod
Next, we'll create a worker pod that has both our source disk and our shared data filesystem mounted.
Using `kubectl create -f clone-to-file.yaml`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: clone-to-file
  name: clone-to-file
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: clone-to-file
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: failure-domain.beta.kubernetes.io/region
                    operator: In
                    values:
                      - ORD1
                  - key: node.coreweave.cloud/class
                    operator: In
                    values:
                      - cpu
                  - key: ethernet.coreweave.cloud/speed
                    operator: In
                    values:
                      - 10G
                      - 40G
      containers:
        - name: clone-to-file
          resources:
            requests:
              cpu: 1
              memory: 2Gi
          image: registry.gitlab.com/coreweave/utility-images/admin-shell:36f48c0d
          command:
            - sh
            - '-c'
            - 'qemu-img convert -f raw -O qcow2 /dev/xvda /shared-data/disk.qcow2 -c -p'
          volumeMounts:
            - name: shared-data
              mountPath: /shared-data
          volumeDevices:
            - devicePath: /dev/xvda
              name: source
      restartPolicy: OnFailure
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data
            readOnly: false
        - name: source
          persistentVolumeClaim:
            claimName: winserver2019std-clone-20210701-ord1
            readOnly: true
      tolerations:
        - key: node.coreweave.cloud/hypervisor
          operator: Exists
        - key: is_cpu_compute
          operator: Exists
```
Note that while our shared filesystem can be mounted to multiple pods and Virtual Servers simultaneously, our source block volume `winserver2019std-clone-20210701-ord1` must not be in use by any running pods when this Job is deployed.
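The Job's entire workload is a single qemu-img invocation. Its flags, annotated (this is the same command as in the manifest, shown here for reference):

```shell
# -f raw   : source format; the PVC is attached as a raw block device
# -O qcow2 : output format
# -c       : compress the output image
# -p       : print progress while converting
qemu-img convert -f raw -O qcow2 /dev/xvda /shared-data/disk.qcow2 -c -p
```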
Progress can be monitored with `kubectl get pods --watch`:
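Because qemu-img's `-p` flag writes a running percentage to stdout, the conversion itself can also be followed through the Job's logs (a sketch; recent kubectl versions accept the `job/` prefix):

```shell
# Stream conversion progress from the worker pod
kubectl logs -f job/clone-to-file
```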
Once the job status shows `Completed`, the job can be deleted with `kubectl delete job clone-to-file`.
The shared data filesystem, with its exported QCOW2 image, can be attached to a Virtual Server, or to an apps.coreweave.com-based File Browser or SAMBA instance, for further inspection and egress.
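Before egressing the file, it can be sanity-checked from any pod that mounts `shared-data`; for instance, the same admin-shell image used by the Job includes qemu-img. A sketch, assuming a shell inside such a pod:

```shell
# Verify the exported image: reports format, virtual size, and actual disk size
qemu-img info /shared-data/disk.qcow2
```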