Storage

Persistent Storage

Fast SSD and cost-effective HDD storage are available as both block storage and shared filesystem types. All data is replicated for High Availability. Storage is allocated using Kubernetes Persistent Volume Claims; volumes are automatically provisioned when a Persistent Volume Claim is created.
| Storage Type | Disk Class | Region | Storage Class Name |
| --- | --- | --- | --- |
| Block Storage | NVMe | EWR1 | block-nvme-ewr1 |
| Block Storage | HDD | EWR1 | block-hdd-ewr1 |
| Shared Filesystem | NVMe | EWR1 | shared-nvme-ewr1 |
| Shared Filesystem | HDD | EWR1 | shared-hdd-ewr1 |
| Block Storage | NVMe | ORD1 | block-nvme-ord1 |
| Block Storage | HDD | ORD1 | block-hdd-ord1 |
| Shared Filesystem | NVMe | ORD1 | shared-nvme-ord1 |
| Shared Filesystem | HDD | ORD1 | shared-hdd-ord1 |
| Block Storage | NVMe | LAS1 | block-nvme-las1 |
| Block Storage | HDD | LAS1 | block-hdd-las1 |
| Shared Filesystem | NVMe | LAS1 | shared-nvme-las1 |
| Shared Filesystem | HDD | LAS1 | shared-hdd-las1 |

Block Storage

Block Storage provides the best performance and is the recommended storage access method whenever possible. Block Storage is exposed via the Kubernetes ReadWriteOnce access mode: a block volume can be attached to only a single physical node at any one time.
data-ssd-nvme.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: block-nvme-ord1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

data-hdd-pvc.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: block-hdd-ord1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```
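Once the claim is bound, a workload attaches the volume by referencing the claim name in its Pod spec. A minimal sketch of this — the Pod name, image, command, and mount path below are illustrative assumptions, not CoreWeave defaults:

```yaml
# Illustrative example: mount the "data" PVC created above in a single Pod.
apiVersion: v1
kind: Pod
metadata:
  name: data-consumer        # assumed name for this example
spec:
  containers:
    - name: app
      image: busybox         # placeholder image
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: data
          mountPath: /data   # where the block volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data      # matches metadata.name of the PVC above
```

Because the access mode is ReadWriteOnce, all pods mounting this claim must land on the same node.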

Shared Filesystem

Unlike block volumes, a shared filesystem can be accessed by multiple nodes at the same time. This storage type is useful for parallel tasks, e.g. reading assets for CGI rendering or loading ML models for parallel inference. A shared filesystem is accessed much like block storage; only the access mode changes to ReadWriteMany, and the storage class names differ.
shared-data-nvme-pvc.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: shared-nvme-ord1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

shared-data-hdd-pvc.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: shared-hdd-ord1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```
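Because the access mode is ReadWriteMany, the same claim can be mounted by many pods across multiple nodes at once, for example from a Deployment. A minimal sketch — the Deployment name, image, replica count, and mount path are illustrative assumptions:

```yaml
# Illustrative example: three replicas sharing the "shared-data" PVC above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: renderer               # assumed name for this example
spec:
  replicas: 3                  # all replicas see the same filesystem
  selector:
    matchLabels:
      app: renderer
  template:
    metadata:
      labels:
        app: renderer
    spec:
      containers:
        - name: app
          image: busybox       # placeholder image
          command: ["sh", "-c", "sleep infinity"]
          volumeMounts:
            - name: shared-data
              mountPath: /assets  # e.g. render assets or ML model weights
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data  # the ReadWriteMany PVC created above
```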

Billing

Storage is billed per gigabyte of allocated (requested) space, averaged over the billing cycle.

Resizing

Volumes can be expanded by simply increasing the storage request and reapplying the manifest. ReadWriteMany volumes are resized online without disrupting the workload. For ReadWriteOnce volumes, all workloads attaching the volume must be stopped or restarted for the resize to take effect. Please note that volumes cannot be shrunk after they are expanded. A one-line example to grow a storage volume: `kubectl patch pvc myvolume -p '{"spec":{"resources":{"requests":{"storage": "500Gi"}}}}'`

Ephemeral Storage

All physical nodes are equipped with SSD or NVMe ephemeral (local) storage. Available ephemeral storage ranges from 100GB to 2TB depending on node type. No volume claims are needed to allocate ephemeral storage; simply write anywhere in the container filesystem. If a larger amount of ephemeral storage (above 20GB) is used, it is recommended to include it in the workload's resource request.
```yaml
spec:
  containers:
    - name: example
      resources:
        limits:
          cpu: 3
          memory: 16Gi
          nvidia.com/gpu: 1
          ephemeral-storage: 20Gi
```