
Configuring an externally sourced cloud Linux image

Objective: Several distributions, such as Ubuntu and CentOS, offer OS images designed to run in the cloud. These are sparse images with an OS already installed and set up with cloud-init. In this example, we'll use our Packer Virtual Server to configure an Ubuntu Cloud image from Canonical.

Overview: Packer by HashiCorp uses KVM to spin up a virtual machine and perform configuration actions that would normally be done by hand. You feed it an image, which it then connects to via SSH, and it executes the scripts and commands you describe in the configuration JSON. This process uses the generated Packer Virtual Server to configure Canonical's cloud image. Refer to Packer's QEMU builder documentation for more information.


Create a destination block volume PVC

First, we'll create a new block volume PVC – this will serve as the destination for our image once Packer completes processing.

Using kubectl create -f new_block_pvc.yaml, we'll create the block volume:

new_block_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
  labels:
  name: ubuntu-cloudimg
  namespace: tenant-<tenant>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
  storageClassName: block-nvme-ord1
  volumeMode: Block
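
Optionally, confirm the claim was created before moving on. Depending on the storage class's binding mode, the PVC may show Pending until it is first attached to a workload:

kubectl get pvc ubuntu-cloudimg -n tenant-<tenant>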

Referencing Creating a Packer Worker Virtual Server, we will edit our YAML to point to our newly created blank PVC:

packer_vs.yaml
    additionalDisks:
      - name: ubuntu-cloudimg
        spec:
          persistentVolumeClaim:
            claimName: ubuntu-cloudimg
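
With the manifest updated, the Virtual Server can be applied and watched until it reports Ready. The commands below are a minimal sketch, assuming the Virtual Server is named packer-worker (as referenced later in this guide) and the manifest is saved as packer_vs.yaml:

kubectl apply -f packer_vs.yaml
kubectl get vs packer-worker -n tenant-<tenant> --watch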

Configuring the Packer manifest

Once our VS is spun up, we'll have a look at our JSON for Ubuntu:

ubuntu.json
{
  "builders": [
    {
      "type": "qemu",
      "accelerator": "kvm",
      "communicator": "ssh",
      "headless": true,
      "disk_image": true,
      "cpus": "6",
      "memory": "16384",
      "iso_checksum": "file:https://cloud-images.ubuntu.com/hirsute/current/SHA256SUMS",
      "iso_url": "https://cloud-images.ubuntu.com/hirsute/current/hirsute-server-cloudimg-amd64.img",
      "qemuargs": [
        ["-machine", "pc-q35-4.2,accel=kvm,usb=off,vmport=off,dump-guest-core=off"],
        ["-cpu", "host"],
        ["-cdrom", "cidata.iso"]
      ],
      "ssh_username": "user",
      "ssh_password": "packer",
      "net_device": "virtio-net",
      "shutdown_command": "sudo shutdown --poweroff --no-wall now"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "{{.Vars}} sudo -S -E bash '{{.Path}}'",
      "inline": [
        "apt update",
        "apt dist-upgrade -y",
        "apt autoremove -y",
        "apt clean"
      ]
    }
  ]
}
note

The image and checksum referenced in iso_url and iso_checksum are pulled directly from Canonical.

note

In this example, we are using the shell provisioner to install package updates. To learn more and view other provisioners, see HashiCorp's documentation.

note

The credentials in this configuration are created when the VM reads the ISO image generated by create-ci-data.sh.
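
Optionally, the template can be checked for errors before launching a build. This assumes the packer CLI is available wherever you run it, for example inside the Docker image used in the next sections:

packer validate ubuntu.json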

Generate credentials for the Packer VM

Like the cloud images provided by Canonical and Red Hat, images from CoreWeave do not have users by default; users are generated by cloud-init on initial instance launch.

To create a user for Packer to communicate with, we need to run create-ci-data.sh, which generates an ISO containing credential information that is mounted by the Packer VM:

create-ci-data.sh
cat <<EOF >user-data
#cloud-config
ssh_pwauth: True
users:
  - name: user
    plain_text_passwd: packer
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    lock_passwd: false
EOF

cat <<EOF >meta-data
{"instance-id":"packer-worker.tenant-local","local-hostname":"packer-worker"}
EOF

genisoimage -output cidata.iso -input-charset utf-8 -volid cidata -joliet -r \
  user-data meta-data
note

This generates the ISO (cidata.iso) referenced by our JSON, which will be presented to the VM Packer configures.

important

Note that the username and password referenced in our JSON are created here.
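
If you'd like to verify the ISO before starting the build, its contents can be listed with isoinfo, which ships alongside genisoimage. This is an optional check:

isoinfo -i cidata.iso -l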

Execute the Packer Docker image

Once our JSON is configured, we'll launch the Packer process with ./launch_docker.sh ubuntu.json.

launch_docker.sh
CONFIG="$1"
exec docker run --rm --dns 1.1.1.1 --device /dev/kvm --privileged --cap-add=NET_ADMIN --net=host \
  -v /var/lib/libvirt:/var/lib/libvirt \
  -v /var/run/libvirt:/var/run/libvirt \
  --volume "$PWD":/work -it packer:latest \
  packer build -force -on-error=abort \
  "$CONFIG"

Packer pulls down the image, verifies its checksum, then boots it.

When the Packer operation completes, the output image will be located at output-qemu/packer-qemu.
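
As a quick sanity check, the artifact can be inspected from the working directory. The qemu-img step is optional and assumes qemu-img is installed on the worker:

ls -lh output-qemu/
qemu-img info output-qemu/packer-qemu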

Write generated image to block volume PVC

We need to write Packer's output image to the PVC we created earlier. In this example, the PVC is attached to the Virtual Server as /dev/vdc:
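
To confirm which device corresponds to the additional disk before writing, list the block devices attached to the Virtual Server; the device name may differ in your environment:

lsblk -o NAME,SIZE,TYPE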

Using dd, we'll write the image to the PVC with dd if=output-qemu/packer-qemu of=/dev/vdc bs=1M status=progress
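
Optionally, the write can be verified by hashing the source image and comparing it against the same number of bytes read back from the device. This is a sketch only and may take several minutes on a large volume:

# the two checksums below should match
IMG_SIZE=$(stat -c%s output-qemu/packer-qemu)
sha256sum output-qemu/packer-qemu
sudo head -c "$IMG_SIZE" /dev/vdc | sha256sum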

With the dd operation complete, the Virtual Server can be safely deleted (kubectl delete vs packer-worker). The written PVC will remain in your namespace to serve as a source image for subsequent Virtual Servers.
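
For example, cleanup and confirmation that the image PVC remains might look like the following, assuming your kubectl context points at your tenant namespace:

kubectl delete vs packer-worker
kubectl get pvc ubuntu-cloudimg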