CoreWeave provides a cloud infrastructure platform optimized for AI training, inference, and high-performance computing (HPC) workloads. This page gives an overview of the products available to build, connect, and run your workloads.
Documentation Index
Fetch the complete documentation index at: https://docs.coreweave.com/llms.txt
Use this file to discover all available pages before exploring further.
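For example, a minimal Python sketch that fetches and prints the index; it uses only the standard library and assumes nothing beyond the URL above:

```python
import urllib.request

# Download the documentation index and print the page listing.
with urllib.request.urlopen("https://docs.coreweave.com/llms.txt") as resp:
    print(resp.read().decode("utf-8"))
```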
Compute
CoreWeave offers compute options for running containerized AI and HPC workloads:
CoreWeave Kubernetes Service (CKS)
Managed Kubernetes on bare metal for training, inference, and HPC (see the sketch after this list).
SUNK
Runs Slurm on Kubernetes for batch and burst workloads.
Inference
Serverless and dedicated inference options for serving AI models.
CoreWeave Sandbox
Ephemeral compute sandboxes for agents and code interpretation.
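As a concrete illustration of a CKS workload, the sketch below submits a one-GPU smoke-test Job with the official Kubernetes Python client. The namespace, Job name, and container image tag are illustrative assumptions, not CoreWeave-specific values; credentials are assumed to come from a local kubeconfig for the cluster.

```python
from kubernetes import client, config

# Authenticate with the cluster via the local kubeconfig (assumes you
# have already downloaded a kubeconfig for your CKS cluster).
config.load_kube_config()

# One-GPU smoke test: run nvidia-smi once and exit. Image tag and the
# "default" namespace are illustrative placeholders.
job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="cuda-check",
                        image="nvidia/cuda:12.4.1-base-ubuntu22.04",
                        command=["nvidia-smi"],
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1"}
                        ),
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

Once the Job completes, `kubectl logs job/gpu-smoke-test` shows the GPU the container detected.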
Storage
CoreWeave storage solutions support the data requirements of AI and ML workloads:
CoreWeave AI Object Storage
S3-compatible storage for datasets, model weights, and checkpoints (see the sketch after this list).
Dedicated VAST Storage
Single-tenant VAST clusters with multi-protocol access at petabyte scale.
Distributed File Storage
POSIX shared filesystem for multi-node access and distributed training.
Local Storage
Ephemeral storage on GPU nodes for scratch space and caching.
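Because AI Object Storage is S3-compatible, standard S3 tooling works against it. A minimal sketch with boto3, where the endpoint URL, credentials, and bucket name are placeholders to replace with values from your own account:

```python
import boto3

# Endpoint, credentials, and bucket are placeholders, not real values;
# substitute the ones issued for your object storage account.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a checkpoint, then list everything under the same prefix.
s3.upload_file("model.safetensors", "my-bucket", "checkpoints/model.safetensors")
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="checkpoints/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```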
Networking
CoreWeave networking products create secure, high-performance connections between your resources and services:
Virtual Private Clouds (VPCs)
Isolated networks for CKS clusters.
HPC Interconnect
GPUDirect RDMA with InfiniBand for GPU-to-GPU communication.
Direct Connect
Private links via Equinix and Megaport.
IP addresses
Public IPv4 and Bring Your Own IP.
Ingress Service
Public DNS names for services (see the sketch below).
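As a generic Kubernetes illustration, not CoreWeave-specific configuration (the Ingress Service's own page documents the exact annotations and DNS behavior), exposing pods behind a publicly reachable endpoint typically starts with a Service like the one below; the name, selector labels, and ports are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()

# Generic Kubernetes pattern: expose pods labeled app=demo behind a
# publicly reachable endpoint. Name, selector, and ports are illustrative.
svc = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="demo-public"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "demo"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```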