# Using Storage - Kubectl

Manage Storage Volumes using Kubectl
Storage can be managed natively via the Kubernetes API using `kubectl`. Below are some example manifests, along with descriptions of the fields used.

| Field name | Field type | Description |
|---|---|---|
| storageClassName | string | Sets the storage class name for the volume's PVC; determines which kind of storage class the volume will be |
| accessModes | list | Defines the access modes for the volume, such as `ReadWriteOnce` or `ReadWriteMany` |
| resources | object | Defines the resources with which to provision the volume |
| requests | object | Defines the resource requests used to create the volume |
| storage | string | Determines the size of the volume, in Gi |
| storage.root.serial | string | The root disk serial number. When not specified, a new serial number is generated, which is preserved between restarts. |
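Before writing a manifest, the storage classes actually available in the cluster can be listed with a standard `kubectl` query (class names vary by cluster and region):

```shell
# List the storage classes exposed by the cluster; CoreWeave regions
# follow conventions like block-nvme-<region> and shared-hdd-<region>
kubectl get storageclass
```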
All-NVMe Cloud Storage Volumes can be provisioned using the following storage class convention:

- Block Volumes: `block-nvme-<region>`
- Shared File System: `shared-nvme-<region>`
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: block-nvme-ord1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
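Assuming the manifest above is saved as `data-pvc.yaml` (a hypothetical filename), the claim can be created and its binding checked with standard `kubectl` commands:

```shell
# Create the PVC from the manifest
kubectl apply -f data-pvc.yaml

# Confirm the claim reaches the Bound phase before attaching it to a workload
kubectl get pvc data
```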
All HDD Cloud Storage Volumes can be provisioned using the following storage class convention:

- Block Volumes: `block-hdd-<region>`
- Shared File System: `shared-hdd-<region>`
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: shared-hdd-ord1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
**Important**

Volumes must first be created and provisioned before they can be attached to a Pod or Virtual Server.

Attaching Storage Volumes using the `kubectl` command line varies depending on whether you are attaching to Pods or to Virtual Servers.
## Filesystem attachments

To attach filesystem storage to a Pod, specify the `mountPath` and `name` under the `volumeMounts` stanza. Then, specify the `volumes.name` and the `persistentVolumeClaim.claimName` as shown in the following example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filesystem-storage-example
spec:
  containers:
    - image: nginx:1.14.2
      name: nginx
      volumeMounts:
        - mountPath: /storage
          name: filesystem-storage
  volumes:
    - name: filesystem-storage
      persistentVolumeClaim:
        claimName: filesystem-storage-pvc
```
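Because a Shared File System claim created with a `ReadWriteMany` access mode can be mounted by multiple Pods at once, a second Pod can simply reference the same claim. A minimal sketch, assuming a hypothetical claim named `shared-data` provisioned with `ReadWriteMany`:

```yaml
# Hypothetical second Pod mounting the same ReadWriteMany claim;
# both Pods see the same files under their respective mount paths
apiVersion: v1
kind: Pod
metadata:
  name: filesystem-storage-reader
spec:
  containers:
    - image: nginx:1.14.2
      name: nginx
      volumeMounts:
        - mountPath: /storage
          name: shared-storage
  volumes:
    - name: shared-storage
      persistentVolumeClaim:
        claimName: shared-data
```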
## Block storage attachments

As a kind of device, block storage is attached to a Pod by providing the `devicePath` under `volumeDevices`, in addition to the `volumes.name` and `persistentVolumeClaim.claimName` values, as demonstrated in the example below.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: block-storage-example
spec:
  containers:
    - image: nginx:1.14.2
      name: nginx
      volumeDevices:
        - devicePath: /dev/vda1
          name: block-storage
  volumes:
    - name: block-storage
      persistentVolumeClaim:
        claimName: block-storage-pvc
```
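A raw block device attached this way carries no filesystem; if the workload needs one, it must be created and mounted inside the container, which typically requires elevated privileges. A minimal sketch of commands run inside the container, assuming the device path from the example above and an empty volume:

```shell
# DESTRUCTIVE: creates a new filesystem, erasing any existing data on the device
mkfs.ext4 /dev/vda1

# Mount the freshly formatted device inside the container
mkdir -p /mnt/block
mount /dev/vda1 /mnt/block
```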
The filesystem attachment information for Virtual Servers is provided in the `storage.filesystems` stanza of the spec. The following example demonstrates specifying values for `filesystems.name`, `filesystems.mountPoint`, and `persistentVolumeClaim.name`:

```yaml
apiVersion: virtualservers.coreweave.com/v1alpha1
kind: VirtualServer
metadata:
  name: filesystem-storage-example
spec:
  [...]
  storage:
    filesystems:
      - name: filesystem-storage
        mountPoint: /mnt/storage
        spec:
          persistentVolumeClaim:
            name: filesystem-storage-pvc
```
To attach a block storage device to a Virtual Server, specify the block device's values in the `storage.additionalDisks` stanza, as demonstrated in the following example.

```yaml
apiVersion: virtualservers.coreweave.com/v1alpha1
kind: VirtualServer
metadata:
  name: block-storage-example
spec:
  ...
  storage:
    additionalDisks:
      - name: block-storage
        spec:
          persistentVolumeClaim:
            name: block-storage-pvc
```
Attach disks as read-only by including `readOnly: true` in the `additionalDisks` specification.

```yaml
apiVersion: virtualservers.coreweave.com/v1alpha1
kind: VirtualServer
metadata:
  name: block-storage-example
spec:
  ...
  storage:
    additionalDisks:
      - name: block-storage
        readOnly: true
        spec:
          persistentVolumeClaim:
            name: block-storage-pvc
```
**Note**

`readOnly` is required to be `true` when attaching a PersistentVolumeClaim whose `accessMode` is set to `ReadOnlyMany`.

Shared File System Volumes are resized online without disrupting the workload. Resizing Block Volumes, however, requires stopping or restarting all workloads attached to the Volume for the resize to take effect.
**Important**

Volumes cannot be downsized again once they are expanded.

Expanding storage volumes via `kubectl` is as simple as a single-line command:

```shell
kubectl patch pvc <myvolume> -p \
  '{"spec":{"resources":{"requests":{"storage": "500Gi"}}}}'
```
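After patching, the claim's recorded capacity can be queried to confirm the expansion took effect; this is standard `kubectl`, with `<myvolume>` remaining a placeholder for your PVC name:

```shell
# Show the capacity currently bound to the claim; for Block Volumes the new
# size appears only after attached workloads are stopped or restarted
kubectl get pvc <myvolume> -o jsonpath='{.status.capacity.storage}'
```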
All physical nodes are equipped with SSD or NVMe ephemeral (local) storage. Available ephemeral storage ranges from `512GB` to `2TB`, depending upon node type.

No volume claims are needed to allocate ephemeral storage; simply write anywhere in the container file system.

If a larger amount (above `20Gi`) of ephemeral storage is used, it is recommended to include ephemeral storage in the workload's resource request. For example:

```yaml
spec:
  containers:
    - name: example
      resources:
        limits:
          cpu: 3
          memory: 16Gi
          nvidia.com/gpu: 1
          ephemeral-storage: 20Gi
```
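For scratch space that should be capped explicitly, an `emptyDir` volume with a `sizeLimit` (a standard Kubernetes feature, not specific to CoreWeave) is one way to carve out ephemeral storage; a minimal sketch with hypothetical names:

```yaml
# Hypothetical Pod using an emptyDir volume for node-local scratch space;
# sizeLimit caps usage and counts against ephemeral-storage accounting
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example
spec:
  containers:
    - image: nginx:1.14.2
      name: nginx
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 20Gi
```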