CoreWeave Cloud is built to provide significant flexibility in hardware selection, allowing you to customize CPU, RAM, storage, and GPU requests when scheduling your workloads. Resources are scheduled using these provided configurations, offering savings and simplicity compared to the alphabet-soup instance-type selection of legacy clouds.
While we also show pricing and allow scheduling based upon "Standard Instances", all CoreWeave Cloud instances are configurable, and all billing is à la carte, priced by the hour, billed by the minute. All billing is based upon the greater of resources requested in an instance, or, if burstable, the actual resources consumed during any minute billing window.
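As a rough sketch of how the "greater of requested or consumed" rule interacts with per-minute billing, the following Python helper illustrates the arithmetic (this is an illustrative model, not CoreWeave's actual metering system):

```python
# Illustrative sketch of per-minute billing: each minute is charged at
# 1/60 of the hourly rate, based on the greater of the resources
# requested and (for burstable workloads) the resources actually
# consumed during that minute billing window.
def minute_charge(hourly_rate: float, requested: float, consumed: float) -> float:
    """Charge for one minute of a single resource, per the 'greater of' rule."""
    billable_units = max(requested, consumed)
    return hourly_rate * billable_units / 60.0

# Example: 32 Xeon vCPUs requested at $0.005 per vCPU-hour, bursting
# to 40 vCPUs during one minute -> that minute is billed on 40 vCPUs.
charge = minute_charge(0.005, requested=32, consumed=40)
```

A non-burstable instance would simply pay `hourly_rate * requested / 60` every minute it runs.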
The following components are configurable in GPU-based instances:
| Type | Description | Resource Label | Cost per Hour | VRAM |
| ---- | ----------- | -------------- | ------------- | ---- |
| GPU | NVIDIA A100 for NVLINK | A100_NVLINK | $2.06 | 40GB HBM2e |
| GPU | NVIDIA V100 for NVLINK | Tesla_V100_NVLINK | $0.80 | 16GB HBM2 |
| GPU | NVIDIA V100 for PCIe | Tesla_V100 | $0.47 | 16GB HBM2 |
| GPU | NVIDIA Quadro RTX 6000 | Quadro_RTX_6000 | $0.97 | 24GB GDDR6 |
| GPU | NVIDIA Quadro RTX 5000 | Quadro_RTX_5000 | $0.57 | 16GB GDDR6 |
| GPU | NVIDIA Quadro RTX 4000 | Quadro_RTX_4000 | $0.24 | 8GB GDDR6 |
| GPU | NVIDIA P100 for NVLINK | Tesla_P100_NVLINK | $0.55 | 16GB HBM2 |
| CPU | AMD Epyc vCPU | epyc | $0.010 | N/A |
| CPU | Intel Xeon vCPU | xeon | $0.005 | N/A |
| RAM | System RAM per GB | memory | $0.005 | N/A |
As an example, a guaranteed-request hardware configuration of 4 Tesla V100 NVLINK GPUs with 32 Intel Xeon vCPU and 128GB of RAM would look something like:
```yaml
containers:
  - name: v100-example
    resources:
      requests:
        cpu: 32
        memory: 128Gi
        nvidia.com/gpu: 4
      limits:
        cpu: 32
        memory: 128Gi
        nvidia.com/gpu: 4
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - Tesla_V100_NVLINK
            - key: cpu.coreweave.cloud/family
              operator: In
              values:
                - xeon
```
In the above example, the cost per hour of the instance would be:
```
Instance Configuration:
  4x NVIDIA Tesla V100 for NVLINK
  32 Intel Xeon vCPU
  128Gi System RAM

Instance Cost:
  Tesla_V100_NVLINK -> $0.80  * 4   = $3.20
  Xeon vCPU         -> $0.005 * 32  = $0.16
  System RAM        -> $0.005 * 128 = $0.64
                                    = $4.00 per hour
```
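The arithmetic above can be sketched in Python. The rates are hard-coded from the pricing table in this document, and the helper function is a hypothetical illustration, not a CoreWeave API:

```python
# Hourly rates ($) for each billable component, copied from the
# GPU-instance pricing table above (illustrative only).
HOURLY_RATES = {
    "Tesla_V100_NVLINK": 0.80,  # per GPU
    "xeon": 0.005,              # per vCPU
    "memory": 0.005,            # per GB of system RAM
}

def hourly_cost(gpu_type: str, gpus: int, cpu_family: str, vcpus: int, ram_gb: int) -> float:
    """Sum the a la carte hourly prices for a GPU instance's components."""
    return (HOURLY_RATES[gpu_type] * gpus
            + HOURLY_RATES[cpu_family] * vcpus
            + HOURLY_RATES["memory"] * ram_gb)

# The example instance: 4x V100 NVLINK, 32 Xeon vCPU, 128GB RAM.
print(round(hourly_cost("Tesla_V100_NVLINK", 4, "xeon", 32, 128), 2))  # 4.0
```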
Instances without a GPU attached are configurable in combinations of vCPU and system RAM. For these instances, system RAM is included in the vCPU price. Combinations can be configured in multiples of:
| CPU Type | Resource Label | RAM per vCPU | Cost per vCPU per Hour |
| -------- | -------------- | ------------ | ---------------------- |
| AMD Epyc | epyc | 4GB | $0.03 |
| Intel Xeon v1/v2 | xeon | 3GB | $0.009 |
An example configuration requesting 6 AMD Epyc vCPU with 24GB of RAM would look like:
```yaml
containers:
  - name: epyc-example
    resources:
      requests:
        cpu: 6
        memory: 24Gi
      limits:
        cpu: 6
        memory: 24Gi
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cpu.coreweave.cloud/family
              operator: In
              values:
                - epyc
```
In the above example, the cost per hour of the instance would be:
```
Instance Configuration:
  6 AMD Epyc vCPU
  24Gi System RAM (included in the vCPU price)

Instance Cost:
  AMD Epyc vCPU -> $0.03 * 6 = $0.18 per hour
```
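The same arithmetic for CPU-only instances, as a quick Python check (rates from the table above; an illustrative sketch, not a CoreWeave API):

```python
# CPU-only instances bundle system RAM into the per-vCPU price, so the
# hourly cost is just rate * vCPU count. Rates ($ per vCPU per hour)
# are copied from the CPU-only pricing table above.
CPU_ONLY_RATES = {"epyc": 0.03, "xeon": 0.009}

def cpu_only_hourly_cost(family: str, vcpus: int) -> float:
    """Hourly cost of a CPU-only instance; RAM is included in the rate."""
    return CPU_ONLY_RATES[family] * vcpus

# The example instance: 6 AMD Epyc vCPU with the bundled 24GB of RAM.
print(round(cpu_only_hourly_cost("epyc", 6), 2))  # 0.18
```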
## Billing Periods
All CoreWeave Cloud billing periods cover the calendar month (e.g., 1 January 12:00 AM UTC through 1 February 12:00 AM UTC).