Get Started with Storage
Learn about CoreWeave Storage options
High-performance, network-attached storage volumes for both containerized workloads and Virtual Servers are easy to provision and manage on CoreWeave Cloud.
Available in both all-NVMe and HDD tiers, Storage Volumes can be created as Block Volumes or Shared File System Volumes, and can be resized at any time. Storage is managed separately from compute, and can be moved between instances and hardware types.
CoreWeave Cloud Storage Volumes are built on top of Ceph, a software-defined, scale-out, enterprise-grade storage platform. With triple replication, the CoreWeave Cloud Storage platform provides highly available, performant storage for your most demanding Cloud-native workloads.
- 🤝 Accessible by both containerized and hypervisor Workloads
- 📈 Easily resized to add more capacity at any time (see the resize example after this list)
- 🤯 Single Storage Volumes can be scaled to the multiple-Petabyte level
- ✅ Easily managed from the CoreWeave Cloud UI or via kubectl
- 🏁 May be cloned to instantiate new Virtual Servers from a template
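Because volume expansion is simply an edit to the claim, a minimal sketch of growing a Volume might look like the manifest below. The claim name, storage class, and size are placeholders rather than values from this page; re-applying the manifest (for example with kubectl apply -f) requests the expansion.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                         # hypothetical existing claim
spec:
  storageClassName: block-nvme-ord1  # assumed storage class name; check what your cluster offers
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi                 # raised from the previous request; volumes can only grow
```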
Important
For the vast majority of use cases, Workloads should run in the same region as the storage block they are accessing. Use a region label affinity (see the sketch after this note) to limit scheduling of Workloads to a single region.
There are certain exceptions to this rule of thumb, which are mainly relevant for shared volumes, such as:
- Batch Workloads where IOPS are not a concern but accessing compute capacity from multiple regions might give scaling benefits
- When data is strictly read during startup of a process, such as when reading an ML model from storage into system and GPU memory for inference
In general, block volumes should always be used in the same region in which they are allocated.
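As a minimal sketch of that affinity, the Pod spec below pins scheduling to a single region using the standard topology.kubernetes.io/region node label. The Pod name, image, and region value ORD1 are placeholders, not values taken from this page.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-consumer             # hypothetical Pod name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                  - ORD1             # placeholder region; use the region where the Volume lives
  containers:
    - name: app
      image: busybox:stable          # placeholder image
      command: ["sleep", "86400"]
```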
Storage Volumes can be configured, deployed, and managed using either the CoreWeave Cloud UI or the Kubernetes command line (kubectl). Select your preferred method to learn more:
Block Storage Volumes are attached to containerized workloads and Virtual Server instances as high-performance virtual disks.
When served from our all-NVMe storage tier, these virtual disks readily outperform local workstation SSDs and scale to the multiple-Petabyte level. These volumes are presented as generic block devices, so the operating system treats them like typical, physically connected storage devices.
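For illustration, a minimal PersistentVolumeClaim for a raw block device might look like the following. The storage class name block-nvme-ord1 is an assumption about the all-NVMe tier naming, so verify the classes available in your namespace.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-block                   # hypothetical claim name
spec:
  storageClassName: block-nvme-ord1  # assumed all-NVMe block storage class
  accessModes:
    - ReadWriteOnce                  # block volumes attach to one instance at a time
  volumeMode: Block                  # present the volume as a raw block device
  resources:
    requests:
      storage: 100Gi
```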
POSIX-compliant Shared File System Volumes can be attached to both containerized and Virtual Server workloads to provide native shared file systems.
It is possible to attach these Volumes to many instances at the same time. They are great for centralized asset storage, whether for sharing with co-workers in the cloud or as a data source for massively parallel computation.
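A comparable sketch for a shared file system claim, again with an assumed storage class name (shared-nvme-ord1) and a placeholder size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets                  # hypothetical claim name
spec:
  storageClassName: shared-nvme-ord1   # assumed shared file system storage class
  accessModes:
    - ReadWriteMany                    # may be mounted by many Pods and Virtual Servers at once
  resources:
    requests:
      storage: 1Ti
```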
⏩ All-NVMe
The All-NVMe CoreWeave Cloud storage tier offers the highest performance in both throughput and IOPS.
Great for hosting the root disk of a Virtual Server, or as the backing store for a transactional database, the All-NVMe tier can provide up to 10 million IOPS per Volume and peak storage throughput into the Tbps range.
▶ HDD
The HDD CoreWeave Cloud storage tier offers excellent throughput-optimized performance at a lower cost.
Great for large file storage with sequential IOPS access patterns, the HDD tier is backed by an NVMe caching layer to improve performance and scale throughput capacity.
All physical nodes are equipped with SSD or NVMe ephemeral (local) storage. Available ephemeral storage ranges from 512GB to 2TB, depending on the node type. No Volume Claims are needed to allocate ephemeral storage; simply write anywhere in the container file system.

Tip

If a larger amount (above 20Gi) of ephemeral storage is needed, include it in the Workload's resource request, as shown in the sketch below. See Using Storage - Kubectl for details.

In addition to traditional Storage Volumes, CoreWeave also offers S3-compliant Object Storage. See the Object Storage page to learn more.
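Returning to the ephemeral storage Tip above, a minimal sketch of such a resource request; the Pod name, image, and the 100Gi figure are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-heavy                # hypothetical Pod name
spec:
  containers:
    - name: app
      image: busybox:stable          # placeholder image
      command: ["sleep", "86400"]
      resources:
        requests:
          ephemeral-storage: 100Gi   # request above the 20Gi threshold noted in the Tip
        limits:
          ephemeral-storage: 100Gi
```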
Storage is billed per gigabyte of allocated (requested) space as an average over a billing month.
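As an illustrative example of that averaging (not a statement of specific pricing), a 1000GB Volume that exists for only half of the billing month is billed as an average of roughly 500GB for that month.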