Exporting images to QCOW2
Objective: Create a QCOW2 image from a PVC hosted in our namespace.
Overview: We will spin up a shared filesystem to store a QCOW2 image, generated by a worker pod from a mounted PVC in our namespace.

References:

clone-to-file.yaml
shared_data.yaml

Create a shared filesystem

Creating a shared filesystem gives us a destination for our worker pod to write to, as well as a volume that can be attached to a Virtual Server or Samba Pod to egress the exported QCOW2 file.
We'll deploy the following YAML with k create -f shared_data.yaml (here, k is shorthand for kubectl).
YAML
shared_data.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: shared-hdd-ord1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1000Gi
Note we've created our shared filesystem in the ORD region. If our source disk exists in a different region, we'll want to change the shared data filesystem's storage class to match.
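To see which shared filesystem storage classes are available in each region, and to confirm the new PVC bound, we can run the following (this assumes CoreWeave's shared-* storage class naming; output will vary by cluster):
Shell
k get storageclass | grep shared
k get pvc shared-data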

Identify source disk

Using k get pvc, we'll identify a PVC in our namespace that we wish to export.
Note our source image exists in the ORD region, matching our shared data filesystem.
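If in doubt, the region can be confirmed from the PVC's storage class suffix. For example, using the source disk referenced in the Job below (the -ord1 suffix indicates ORD1):
Shell
k get pvc winserver2019std-clone-20210701-ord1 -o jsonpath='{.spec.storageClassName}'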

Deploy worker pod

Next, we'll create a worker pod that has both our source disk and our shared data filesystem attached.
Using k create -f clone-to-file.yaml:
YAML
clone-to-file.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: clone-to-file
spec:
  # Run a single pod to completion
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: clone
    spec:
      # Schedule onto a CPU node in the same region as both PVCs
      nodeSelector:
        node.coreweave.cloud/class: cpu
        ethernet.coreweave.cloud/speed: 10G
        topology.kubernetes.io/region: ORD1
        cpu.coreweave.cloud/family: epyc
      containers:
        - name: rsync
          image: ubuntu:latest
          # Install qemu-utils, sparsely copy the raw block device to /tmp,
          # convert it to QCOW2 on the shared filesystem, then clean up
          command: ["bash", "-c", "apt update;DEBIAN_FRONTEND=noninteractive apt install -y qemu-utils; dd conv=sparse bs=4M if=/dev/xvda of=/tmp/disk.img;qemu-img convert -f raw -O qcow2 /tmp/disk.img /shared-data/disk.qcow2;rm /tmp/disk.*; echo 'Done'"]
          # The source PVC is attached read-only as a raw block device
          volumeDevices:
            - name: source
              devicePath: /dev/xvda
          # The shared filesystem is mounted as the destination for the QCOW2
          volumeMounts:
            - mountPath: /shared-data
              name: shared-data
      restartPolicy: OnFailure
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: winserver2019std-clone-20210701-ord1
            readOnly: true
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data
      tolerations:
        - key: node.coreweave.cloud/hypervisor
          operator: Exists
        - key: is_cpu_compute
          operator: Exists
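For readability, the single container command above breaks down into the following steps (the same commands, one per line):
Shell
apt update
DEBIAN_FRONTEND=noninteractive apt install -y qemu-utils
# Sparse raw copy of the attached block device; zeroed blocks are skipped
dd conv=sparse bs=4M if=/dev/xvda of=/tmp/disk.img
# Convert the raw image to QCOW2 on the shared filesystem
qemu-img convert -f raw -O qcow2 /tmp/disk.img /shared-data/disk.qcow2
rm /tmp/disk.*
echo 'Done'
Note the intermediate raw image is written to the container's ephemeral /tmp, so the node needs enough free ephemeral storage to hold the source disk.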
Note that while our shared filesystem can be mounted to multiple Pods or Virtual Servers simultaneously, our source block volume winserver2019std-clone-20210701-ord1 must not be in use by any running Pods when this Job is deployed.
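One way to verify the source volume is idle before creating the Job is to check the PVC's Used By field, which should read <none>:
Shell
k describe pvc winserver2019std-clone-20210701-ord1 | grep "Used By"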
Progress can be monitored with k get pods --watch.
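The worker's log output can also be followed directly; the container prints Done once the conversion finishes:
Shell
k logs -f job/clone-to-file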
Once the job status shows Completed, the job can be deleted with k delete job clone-to-file.
The shared data filesystem, with its exported QCOW2, can be attached to a Virtual Server or Samba pod for further inspection.
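For a quick sanity check before egress, any pod with qemu-utils installed and the shared-data PVC mounted can inspect the exported image (the pod name here is hypothetical):
Shell
k exec -it inspect-pod -- qemu-img info /shared-data/disk.qcow2
# Reports the format, virtual size, and on-disk size of the export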