Importing Disk Images
Import a disk image from an external source
Disk images can be imported from external URLs to be used as source images for root or additional disks for Virtual Servers. qcow2, raw, and iso formatted images are supported, and images may be compressed with either gz or xz.
Most Operating Systems and virtual appliances provide a Cloud image in qcow2 or raw format. These are all compatible, and may be used while following this guide.
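If a locally stored image is in a different format, it can usually be converted to qcow2 and optionally compressed before it is hosted for import. A minimal sketch, assuming qemu-img and xz are available locally and a hypothetical source file named disk.raw:

$ qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2   # convert a raw image to qcow2
$ xz -T0 -k disk.qcow2                                   # optional: produces disk.qcow2.xz, -k keeps the original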
Using .iso installation media (i.e., a virtual DVD) requires additional parameters not covered in this document. For assistance, please contact your CoreWeave Support Specialist.
There are three ways to import disk images from external sources:
Using HTTP/HTTPS
A DataVolume is used both to perform the import and to store the imported image.
Use the following manifest to import a disk image already hosted on a publicly accessible HTTP/HTTPS Web server:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: debian-import
spec:
  source:
    http:
      url: "http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img"
  pvc:
    accessModes:
      - ReadWriteOnce
    volumeMode: Block
    storageClassName: block-nvme-ord1 # Update to region where your VS will run
    resources:
      requests:
        storage: 64Mi # Update to size of imported image
Using an external object store
A DataVolume can also import a disk image from an S3-compatible object store. To import an image from an existing object store, create a Secret with your accessKeyId and secretKey:
kind: Secret
metadata:
  name: object-import-secret
type: Opaque
apiVersion: v1
data:
  accessKeyId: EBMAq2KEBQyxLBi2ZipHQE1b
  secretKey: YeXoB0QmpdS4zYFDGn7UYaPu6EglSHI5MkIMfMcv2Z6n7GBLCNAA4gH13NMU
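Note that values under data: in a Kubernetes Secret must be base64-encoded. A quick sketch of generating the encoded values from plaintext credentials (placeholder values shown):

$ echo -n 'ACCESS_KEY_ID' | base64   # paste the result into accessKeyId
$ echo -n 'SECRET_KEY' | base64      # paste the result into secretKey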
Apply the manifest:
$ kubectl apply -f object-import-secret.yaml
secret/object-import-secret created
The Secret, along with your object store URL, will be referenced in your manifest:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: debian-import
spec:
  source:
    s3:
      url: https://object-store-tld/bucket-name/cirros-0.5.2-x86_64-disk.img
      secretRef: object-import-secret
  pvc:
    accessModes:
      - ReadWriteOnce
    volumeMode: Block
    storageClassName: block-nvme-ord1 # Update to region where your VS will run
    resources:
      requests:
        storage: 64Mi # Update to size of imported image
Using CoreWeave Object Storage
An image stored locally can easily be uploaded to CoreWeave Object Storage, then imported into a DataVolume.
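If s3cmd has not been configured yet, a minimal ~/.s3cfg pointing at a CoreWeave Object Storage endpoint might look like the following sketch. The LGA1 endpoint and placeholder keys are assumptions; substitute your own region and generated credentials:

[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = object.lga1.coreweave.com
host_bucket = object.lga1.coreweave.com
use_https = True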
Once your object store credentials are generated and imported to s3cmd, make a bucket in which to store your images:
$ s3cmd mb s3://images
Bucket 's3://images/' created
Next, upload a locally stored image:
$ s3cmd put cirros-0.5.2-x86_64-disk.img s3://images
upload: 'cirros-0.5.2-x86_64-disk.img' -> 's3://images/cirros-0.5.2-x86_64-disk.img'  [part 1 of 2, 15MB] [1 of 1]
 15728640 of 15728640   100% in    1s    11.44 MB/s  done
upload: 'cirros-0.5.2-x86_64-disk.img' -> 's3://images/cirros-0.5.2-x86_64-disk.img'  [part 2 of 2, 558kB] [1 of 1]
 571904 of 571904   100% in    0s     5.59 MB/s  done
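The upload can be verified by listing the bucket; the uploaded object should appear in the output:

$ s3cmd ls s3://images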
Create an object store user manifest with read permissions:
apiVersion: objectstorage.coreweave.com/v1alpha1
kind: User
metadata:
  name: object-store-import
  namespace: tenant-test-test
spec:
  owner: tenant-test-test
  access: read
Apply the user manifest:
$ kubectl apply -f object-store-import.yaml
user.objectstorage.coreweave.com/object-store-import created
The generated credentials Secret needs some additional information before a DataVolume can parse it. First, grab your accessKey:
$ kubectl get secret tenant-test-test-object-store-import-obj-store-creds -o jsonpath='{.data.accessKey}'
EBMAq2KEBQyxLBi2ZipHQE1b
Then, use the accessKey to patch in an accessKeyId of the same value:
$ kubectl patch secret tenant-test-test-object-store-import-obj-store-creds -p '{"data":{"accessKeyId":"EBMAq2KEBQyxLBi2ZipHQE1b"}}'
secret/tenant-test-test-object-store-import-obj-store-creds patched
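The two steps can also be combined so the value never has to be copied by hand. A sketch, assuming the same Secret name as above; values under .data are already base64-encoded, so they can be copied verbatim between keys:

$ ACCESS_KEY=$(kubectl get secret tenant-test-test-object-store-import-obj-store-creds -o jsonpath='{.data.accessKey}')
$ kubectl patch secret tenant-test-test-object-store-import-obj-store-creds \
    -p "{\"data\":{\"accessKeyId\":\"${ACCESS_KEY}\"}}"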
The updated Secret, along with the CoreWeave Object Storage URL, will be referenced in your DataVolume manifest. The Object Storage URL uses sub-path (path-style) addressing: the bucket name follows the endpoint in the URL path.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: debian-import
spec:
  source:
    s3:
      url: https://object.lga1.coreweave.com/images/cirros-0.5.2-x86_64-disk.img
      secretRef: tenant-test-test-object-store-import-obj-store-creds # The patched credentials Secret
  pvc:
    accessModes:
      - ReadWriteOnce
    volumeMode: Block
    storageClassName: block-nvme-ord1 # Update to region where your VS will run
    resources:
      requests:
        storage: 64Mi # Update to size of imported image
Monitor the disk image import
After deploying the manifest above:
$ kubectl apply -f dv.yaml
datavolume.cdi.kubevirt.io/debian-import created
The status of the import can be followed with kubectl get --watch while it is importing:
$ kubectl get --watch dv debian-import
NAME            PHASE              PROGRESS   RESTARTS   AGE
debian-import   Pending            N/A                   4s
debian-import   ImportScheduled    N/A                   7s
debian-import   ImportInProgress   N/A                   19s
debian-import   ImportInProgress   0.00%                 22s
debian-import   ImportInProgress   1.00%                 29s
debian-import   ImportInProgress   7.12%                 58s
...
debian-import   Succeeded          100.0%                11m
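For scripted workflows, it is also possible to block until the import completes. A sketch using kubectl wait, assuming the DataVolume reports a Ready condition (recent CDI versions do):

$ kubectl wait dv/debian-import --for=condition=Ready --timeout=30m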
If the counter in the RESTARTS column increases, there has been an error while trying to import the image. Use kubectl describe dv to see the error:
$ kubectl describe dv debian-import
...
Events:
  Type     Reason             Age                From                   Message
  ----     ------             ----               ----                   -------
  Warning  ErrResourceExists  41s (x8 over 49s)  datavolume-controller  Resource "debian-import" already exists and is not managed by DataVolume
  Normal   Pending            40s                datavolume-controller  PVC debian-import Pending
  Normal   ImportScheduled    38s                datavolume-controller  Import into debian-import scheduled
  Normal   Bound              38s                datavolume-controller  PVC debian-import Bound
  Normal   ImportInProgress   14s                datavolume-controller  Import into debian-import in progress
  Warning  Error              6s (x2 over 11s)   datavolume-controller  Unable to process data: Virtual image size 2147483648 is larger than available size 576716800 (PVC size 2147483648, reserved overhead 0.000000%). A larger PVC is required.
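The error above indicates the requested PVC is smaller than the image's virtual size. The virtual size can be inspected locally before import, for example with qemu-img (assuming the image is available on your workstation), and the storage request in the DataVolume sized accordingly:

$ qemu-img info cirros-0.5.2-x86_64-disk.img
image: cirros-0.5.2-x86_64-disk.img
file format: qcow2
virtual size: ...
disk size: ...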
Images are fully validated after import, which makes the import process slow. Import times will be decreased in the future.
Launch a Virtual Server
After the image has finished importing, a Virtual Server can be launched with the imported image as the template for its root disk, in the same way Virtual Servers are launched from CoreWeave-provided OS images.
Use the Kubectl method of deployment to create a Virtual Server manifest that specifies the source of the root disk:
apiVersion: virtualservers.coreweave.com/v1alpha1
kind: VirtualServer
metadata:
  name: example-vs
spec:
  region: ORD1
  os:
    type: linux
    enableUEFIBoot: false
  resources:
    cpu:
      # Reference CPU instance label selectors here:
      # https://docs.coreweave.com/coreweave-kubernetes/node-types
      type: amd-epyc-rome
      count: 4
    memory: 16Gi
  storage:
    root:
      size: 40Gi # Root disk will automatically be expanded
      storageClassName: block-nvme-ord1 # Needs to match the class of the imported volume
      source:
        pvc:
          namespace: tenant-test-test # Replace with your namespace
          name: debian-import
  # If the image supports cloudInit, the regular users configuration can be used:
  # users:
  #   - username: SET YOUR USERNAME HERE
  #     password: SET YOUR PASSWORD HERE
  #     # To use key-based authentication, replace and uncomment ssh-rsa below with your public ssh key
  #     sshpublickey: |
  #       ssh-rsa AAAAB3NzaC1yc2EAAAA ... user@hostname
  network:
    public: true
    tcp:
      ports:
        - 22
When importing an image configured for EFI boot, set spec.os.enableUEFIBoot to true.
Apply the manifest:
$ kubectl apply -f example-vs.yaml
virtualserver.virtualservers.coreweave.com/example-vs created
The Virtual Server will now initialize. Once fully launched and in VirtualServerReady status, it will be available over SSH (assuming the root disk image supports SSH) as well as via the regular remote console.
$ kubectl get vs example-vs
NAME         STATUS               REASON               STARTED   INTERNAL IP      EXTERNAL IP
example-vs   VirtualServerReady   VirtualServerReady   True      10.135.208.235   207.53.234.142
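From there, the server can be reached at its external IP. A sketch, assuming a user was configured via cloudInit in the users block above (the username is a placeholder):

$ ssh myuser@207.53.234.142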
To export an Amazon AMI, choose the raw format when following the AWS Export guide.