Get Started with Storage

CoreWeave's high-performance, network-attached storage volumes and AI Object Storage with Node-local caching are purpose-built for containerized ML workloads, designed from the ground up for speed, performance, and reliability.

AI Object Storage

CoreWeave AI Object Storage is a hyper-performant, S3-compatible Object Storage gateway that leverages CoreWeave's Local Object Transfer Accelerator (LOTA): a first-of-its-kind, Node-local connection to Object Storage that creates a highly efficient path for object data to reach the GPU and caches data on GPU Nodes to reduce load times.
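
Because the gateway is S3-compatible, standard S3 tooling works against it. The following minimal sketch uses Python with boto3; the endpoint URL, bucket name, object key, and credentials are placeholders for illustration, not CoreWeave-specific values, so substitute the endpoint and access keys issued for your account.

```python
import boto3

# Placeholder endpoint and credentials for illustration only;
# use the Object Storage endpoint and access keys for your account.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-object-storage-endpoint>",
    aws_access_key_id="<access-key-id>",
    aws_secret_access_key="<secret-access-key>",
)

# Download a training shard. Once an object has been fetched on a GPU Node,
# repeated reads can be served from LOTA's Node-local cache, reducing load times.
s3.download_file("my-dataset-bucket", "shards/shard-0000.tar", "/tmp/shard-0000.tar")
```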

Distributed File Storage

POSIX-compliant Distributed File Storage can be attached to containerized workloads to provide native shared filesystem storage. These volumes can also be attached to many instances at the same time, increasing consistency and resource efficiency and allowing multiple containers to share data simultaneously. Distributed File Storage volumes are ideal for centralized asset storage, whether for sharing with co-workers in the Cloud or for serving as a data source for massively parallel computation.

Distributed File Storage volumes are also well suited to operations that require heavy synchronization between Pods, because performance bottlenecks caused by contention are minimized. They additionally provide high IOPS for datapath-related operations.
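
In Kubernetes, a shared filesystem volume like this is typically requested through a PersistentVolumeClaim with the ReadWriteMany access mode so that many Pods can mount it at once. The sketch below uses the Kubernetes Python client; the storage class name, claim name, namespace, and size are assumptions for illustration rather than CoreWeave-specific values.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a Pod

# Request a shared filesystem volume that many Pods can mount simultaneously.
# The storage class name below is a placeholder; use the Distributed File
# Storage class available in your environment.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-assets"},
    "spec": {
        "accessModes": ["ReadWriteMany"],  # shared across many Pods/instances
        "storageClassName": "<distributed-file-storage-class>",
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```

Once bound, the claim can be referenced as a volume in any number of Pod specs, giving each container the same POSIX filesystem view of the shared data.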