Storage

Persistent Storage

Fast SSD and cost-effective HDD storage are available as both block storage and shared filesystem types. All data is replicated for high availability. Storage is allocated using Kubernetes Persistent Volume Claims; volumes are provisioned automatically when a claim is created.

Block Storage

Block storage provides the best performance and is the recommended access method whenever possible. It is exposed via the Kubernetes ReadWriteOnce access mode; a block volume can be attached to only a single physical node at any one time.

HDD
data-hdd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: ceph-hdd-2-replica
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
SSD
data-ssd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: ceph-ssd-2-replica
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
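Once a claim is bound, the volume is used by referencing the claim name from a workload. The manifest below is a minimal sketch; the Pod name, container image, and mount path are illustrative, not prescribed by the platform.

```yaml
# Example Pod mounting the "data" claim defined above.
# Image and mountPath are placeholder values.
apiVersion: v1
kind: Pod
metadata:
  name: data-example
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data   # volume appears here inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data      # must match the PVC's metadata.name
```

Because the claim uses ReadWriteOnce, any Pods mounting it must be scheduled onto the same node.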

Shared Filesystem

Unlike block volumes, a shared filesystem can be accessed by multiple nodes at the same time. This storage type is useful for parallel tasks, e.g. reading assets for CGI rendering or loading ML models for parallel inference. A shared filesystem is requested similarly to block storage: the access mode changes to ReadWriteMany, and different storage class names are used.

HDD
shared-data-hdd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: sharedfs-hdd-replicated
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
SSD
shared-data-ssd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: sharedfs-ssd-replicated
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
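Because ReadWriteMany allows concurrent mounts from many nodes, a shared filesystem claim can back multiple replicas of the same workload. The Deployment below is a minimal sketch of this pattern; the names, image, and mount path are illustrative.

```yaml
# Example: three replicas, potentially on different nodes,
# all mounting the same "shared-data" claim read-write.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-example
  template:
    metadata:
      labels:
        app: shared-example
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: shared-data
              mountPath: /shared   # same files visible in every replica
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data # ReadWriteMany PVC from above
```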

Billing

Storage is billed per gigabyte of allocated (requested) space as an average over a billing cycle.

Ephemeral Storage

All physical nodes are equipped with SSD or NVMe ephemeral (local) storage. The amount of ephemeral storage available ranges from 100GB to 2TB, depending on node type. No volume claims are needed to allocate ephemeral storage; simply write anywhere in the container filesystem. If a larger amount of ephemeral storage (above 20GB) is used, it is recommended to include it in the workload's resource request.

spec:
  containers:
    - name: example
      resources:
        limits:
          cpu: 3
          memory: 16Gi
          nvidia.com/gpu: 1
          ephemeral-storage: 20Gi