
Exporting images to QCOW2

Objective: Create a QCOW2 image from a PVC hosted in our namespace.
Overview: We will spin up a shared filesystem to store a QCOW2 image, generated by a worker pod from a mounted PVC in our namespace.

Create a shared filesystem

Creating a shared filesystem gives us a destination for our worker pod to write to, as well as a volume that can be attached to a Virtual Server or Samba Pod to egress the exported QCOW2 file.

We'll deploy the following YAML with `kubectl create -f shared_data.yaml`.

```yaml title="shared_data.yaml"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: shared-hdd-ord1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1000Gi
```
Note

We've created our shared filesystem in the ORD1 region. If the source disk exists in a different region, change the shared data filesystem's storage class to match.
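Once `shared_data.yaml` has been applied, a quick check confirms the claim was created and bound before moving on:

```shell
# Confirm the shared filesystem PVC exists and reached the Bound state.
# STATUS should read "Bound", ACCESS MODES "RWX", and the storage class
# should be shared-hdd-ord1 with the requested 1000Gi capacity.
kubectl get pvc shared-data
```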

Identify source disk

Using `kubectl get pvc`, we'll identify a PVC in our namespace that we wish to export.

Note

Our source image exists in the ORD1 region, matching our shared data filesystem.
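A sketch of identifying a candidate source disk (the `grep` pattern is illustrative; CoreWeave storage class names carry the region as a suffix, so filtering on `ord1` narrows the list to disks in the same region as our shared filesystem):

```shell
# List all PVCs in the current namespace
kubectl get pvc

# Narrow to PVCs whose storage class places them in ORD1,
# so the source disk and the shared-data destination are co-located
kubectl get pvc | grep ord1
```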

Deploy worker pod

Next, we'll create a worker pod that has both our source disk and our shared data filesystem mounted.

Deploy it using `kubectl create -f clone-to-file.yaml`:

```yaml title="clone-to-file.yaml"
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: clone-to-file
  name: clone-to-file
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: clone-to-file
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: failure-domain.beta.kubernetes.io/region
                    operator: In
                    values:
                      - ORD1
                  - key: node.coreweave.cloud/class
                    operator: In
                    values:
                      - cpu
                  - key: ethernet.coreweave.cloud/speed
                    operator: In
                    values:
                      - 10G
                      - 40G
      containers:
        - name: clone-to-file
          resources:
            requests:
              cpu: 1
              memory: 2Gi
          image: registry.gitlab.com/coreweave/utility-images/admin-shell:36f48c0d
          command:
            - sh
            - '-c'
            - 'qemu-img convert -f raw -O qcow2 /dev/xvda /shared-data/disk.qcow2 -c -p'
          volumeMounts:
            - name: shared-data
              mountPath: /shared-data
          volumeDevices:
            - devicePath: /dev/xvda
              name: source
      restartPolicy: OnFailure
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data
            readOnly: false
        - name: source
          persistentVolumeClaim:
            claimName: winserver2019std-clone-20210701-ord1
            readOnly: true
      tolerations:
        - key: node.coreweave.cloud/hypervisor
          operator: Exists
        - key: is_cpu_compute
          operator: Exists
```
Important

Note that while our shared filesystem can be mounted to multiple pods/Virtual Servers simultaneously, our source block filesystem winserver2019std-clone-20210701-ord1 must not be in use by any running pods when this job is deployed.

Progress can be monitored with `kubectl get pods --watch`.
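Because the Job invokes `qemu-img convert` with `-p`, the conversion reports percentage progress on its output, which can be streamed from the pod's logs. A sketch, using the Job's component label from the manifest above:

```shell
# Watch the job's pod move from ContainerCreating through Running to Completed
kubectl get pods --watch

# Stream the conversion's progress output (-p) from the running pod
kubectl logs -l app.kubernetes.io/component=clone-to-file --follow
```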

Once the job status shows `Completed`, the job can be deleted with `kubectl delete job clone-to-file`.

The shared data filesystem, with its exported QCOW2 file, can be attached to a Virtual Server, or to an apps.coreweave.com-based File Browser or Samba application, for further inspection or egress.
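As a final sanity check, the export can be verified from any pod that mounts the `shared-data` PVC and has `qemu-img` available (the `admin-shell` image used by the Job above includes it). A minimal sketch:

```shell
# Inspect the exported image: reports the file format (should be qcow2),
# the virtual disk size, and the compressed on-disk size
qemu-img info /shared-data/disk.qcow2
```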