
Get Started with Storage

Learn about CoreWeave Storage options

High-performance, network-attached storage volumes for both containerized workloads and Virtual Servers are easy to provision and manage on CoreWeave Cloud.

Available in both All-NVMe and HDD tiers, Storage Volumes can be created as Block Volumes or Shared File System Volumes, and can be resized at any time. Storage is managed separately from compute, and can be moved between instances and hardware types.

CoreWeave Cloud Storage Volumes are built on top of Ceph, a software-defined, scale-out, enterprise-grade storage platform. With triple replication, the CoreWeave Cloud Storage platform provides highly available, performant storage for your most demanding Cloud-native workloads.

Storage features

  • Storage Volumes are accessible to both containerized and hypervisor Workloads
  • Volumes may be easily resized to add more capacity at any time (see the resize sketch after this list)
  • Single Storage Volumes can be scaled to the multi-Petabyte level
  • Volumes are managed easily either from the CoreWeave Cloud UI or via kubectl
  • Volumes may be cloned to instantiate new Virtual Servers from a template
  • Automatic backups are supported using Backblaze (via CoreWeave Apps)
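
The resize mentioned above can be sketched with kubectl (a minimal example, assuming an existing PersistentVolumeClaim named data-volume; the claim name and new size are illustrative):

    # Increase the requested capacity of an existing PersistentVolumeClaim.
    # Kubernetes only supports expanding a claim, so choose a size larger than the current one.
    kubectl patch pvc data-volume --type merge \
      -p '{"spec":{"resources":{"requests":{"storage":"2000Gi"}}}}'

    # Confirm the new capacity once the resize completes.
    kubectl get pvc data-volume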

Object Storage

In addition to traditional Storage Volumes, CoreWeave also offers S3-compliant Object Storage.

Create and use Storage Volumes

Storage Volumes can be configured, deployed, and managed in one of two ways: from the CoreWeave Cloud UI, or with kubectl. Select your preferred deployment method to learn more.

important

Workloads must run in the same data center region as the Storage Volumes they access. The topology.kubernetes.io/region affinity can be used to explicitly schedule workloads to a selected region.
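
As a hedged sketch of that affinity, the following Pod spec fragment restricts scheduling to a single region (the region value is illustrative; use the region in which the Volume was created):

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/region
                  operator: In
                  values:
                    - ORD1   # example region; match the region of the attached Volume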

Storage Volume types

There are two primary types of Storage Volumes available, offered at different tiers. Each type and tier suits different use cases.

Block Storage Volumes

Block Storage Volumes are attached to containerized workloads and Virtual Server instances as high-performance virtual disks. When served from the All-NVMe storage tier, these virtual disks readily outperform local workstation SSDs and scale to the petabyte range. Because these Volumes are presented as generic block devices, the Operating System treats them the same way as a traditional, physically connected storage device.
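
A minimal sketch of requesting a Block Storage Volume with kubectl follows; the claim name and storage class name below are assumptions, so list the classes actually available with kubectl get storageclass before applying:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: block-data                      # hypothetical name
    spec:
      storageClassName: block-nvme-example  # assumed class; substitute one from kubectl get storageclass
      volumeMode: Block                     # present the Volume as a raw block device
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Gi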

Shared File System Volumes

POSIX-compliant Shared File System Volumes can be attached to both containerized and Virtual Server workloads to provide native shared file systems. It is also possible to attach these Volumes to many instances at the same time. Shared File System Volumes are ideal for centralized asset storage, whether for sharing with co-workers in the Cloud or for serving as a data source for massively parallel computation.
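
A corresponding sketch for a Shared File System Volume differs mainly in the volume mode and access mode; again, the names are assumptions:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-assets                    # hypothetical name
    spec:
      storageClassName: shared-nvme-example  # assumed class; substitute one from kubectl get storageclass
      volumeMode: Filesystem                 # mount the Volume as a shared file system
      accessModes:
        - ReadWriteMany                      # allows many instances to attach the Volume simultaneously
      resources:
        requests:
          storage: 1000Gi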

Storage tiers

All-NVMe

The All-NVMe CoreWeave Cloud storage tier offers the highest performance in both throughput and IOPS. This tier is ideal for hosting the root disk of a Virtual Server, or for serving as the backing store for a transactional database. The All-NVMe tier offers exceptionally high IOPS per Volume and peak throughput into the Tbps range.

NVMe tiers

We offer two classes of NVMe storage: Standard and Premium.

Premium NVMe delivers higher write speeds overall, as well as higher read speeds in cases where the block size and queue depth are low. Premium NVMe is ideal for operations that require a lot of synchronization between Pods. Performance bottlenecks caused by contention are also minimized at this tier. In addition, Premium NVMe provides higher IOPS for datapath-related operations compared to the Standard tier.

Standard NVMe is the best choice for workloads with large block sizes and high queue depths.

All-NVMe Block and Shared Storage Volumes can be provisioned using kubectl or Cloud UI.
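
For example, assuming the Block Storage sketch above was saved as block-data-pvc.yaml, provisioning with kubectl is the usual apply-and-verify flow:

    kubectl apply -f block-data-pvc.yaml
    kubectl get pvc block-data   # STATUS shows Bound once the Volume is provisioned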

HDD

The HDD CoreWeave Cloud storage tier offers excellent throughput-optimized performance at a lower cost. Well suited to large file storage with sequential IOPS access patterns, the HDD tier is backed by an NVMe caching layer to improve performance and scale throughput capacity.

HDD Block and Shared Storage Volumes can be provisioned using kubectl or Cloud UI.

Ephemeral storage

All physical nodes are equipped with SSD or NVMe ephemeral (local) storage. Ephemeral storage sizes range from 512GB to 2TB, depending on the node type. No Volume Claims are needed to allocate ephemeral storage: simply write anywhere in the container file system.

tip

If more than 20Gi of ephemeral storage is needed, it is recommended to include the amount in the Workload's resource requests, as in the sketch below. See Using Storage - Kubectl for details.
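
A minimal sketch of such a request follows; the Pod name, image, and sizes are illustrative only:

    apiVersion: v1
    kind: Pod
    metadata:
      name: scratch-heavy            # hypothetical name
    spec:
      containers:
        - name: worker
          image: ubuntu:22.04        # illustrative image
          command: ["sleep", "infinity"]
          resources:
            requests:
              ephemeral-storage: 100Gi   # reserve local scratch space above the 20Gi guideline
            limits:
              ephemeral-storage: 100Gi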

Billing

Storage is billed per gigabyte of allocated (requested) space as an average over a billing month.
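
For example, assuming simple time-weighted averaging: a Volume allocated at 1,000GB for 15 days of a 30-day billing month averages to 500GB, and is billed as 500GB for that month.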