Resource Based Pricing
CoreWeave Cloud is built to provide significant flexibility in hardware selection, allowing you to customize CPU, RAM, storage, and GPU requests when scheduling your workloads. Resources are scheduled using the configurations you provide, delivering savings and simplicity compared to the alphabet soup of legacy cloud instance types.
While we also show pricing for and allow scheduling of "Standard Instances", all CoreWeave Cloud instances are configurable, and all billing is à la carte: priced by the hour, billed by the minute. Billing is based on the greater of the resources requested for an instance or, if the instance is burstable, the actual resources consumed during any one-minute billing window.
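As a sketch of how the burstable case arises (the container name and values below are illustrative, not a recommended configuration), a Kubernetes pod whose resource limits exceed its requests is burstable. Under the policy above, each one-minute window would be billed for at least the requested 4 vCPU and 16Gi of RAM, and for up to whatever is actually consumed within the 8 vCPU / 32Gi limits:

```yaml
# Illustrative burstable request (name and values are hypothetical).
# Because limits exceed requests, each minute is billed on the greater of
# the requested resources or the resources actually consumed, up to the limits.
containers:
  - name: burstable-example
    resources:
      requests:
        cpu: 4
        memory: 16Gi
      limits:
        cpu: 8
        memory: 32Gi
```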

GPU Instance Resource Pricing
The following components are configurable in GPU-based instances:

| Type | Description | Label | Cost per Hour | VRAM |
| ---- | ----------- | ----- | ------------- | ---- |
| GPU | NVIDIA A100 for NVLINK (80GB) | A100_NVLINK_80GB | $2.21 | 80GB HBM2e |
| GPU | NVIDIA A100 for NVLINK (40GB) | A100_NVLINK | $2.06 | 40GB HBM2e |
| GPU | NVIDIA A100 for PCIE (40GB) | A100_PCIE_40GB | $2.06 | 40GB HBM2e |
| GPU | NVIDIA A100 for PCIE (80GB) | A100_PCIE_80GB | $2.21 | 80GB HBM2e |
| GPU | NVIDIA V100 for NVLINK | Tesla_V100_NVLINK | $0.80 | 16GB HBM2 |
| GPU | NVIDIA A40 | A40 | $1.28 | 48GB GDDR6 |
| GPU | NVIDIA RTX A6000 | RTX_A6000 | $1.28 | 48GB GDDR6 |
| GPU | NVIDIA RTX A5000 | RTX_A5000 | $0.77 | 24GB GDDR6 |
| GPU | NVIDIA RTX A4000 | RTX_A4000 | $0.61 | 16GB GDDR6 |
| GPU | NVIDIA Quadro RTX 5000 | Quadro_RTX_5000 | $0.57 | 16GB GDDR6 |
| GPU | NVIDIA Quadro RTX 4000 | Quadro_RTX_4000 | $0.24 | 8GB GDDR6 |
| CPU | AMD Epyc Milan vCPU | amd-epyc-milan | $0.010 | N/A |
| CPU | AMD Epyc Rome vCPU | amd-epyc-rome | $0.010 | N/A |
| CPU | Intel Xeon Scalable | intel-xeon-scalable | $0.010 | N/A |
| CPU | Intel Xeon v4 | intel-xeon-v4 | $0.010 | N/A |
| RAM | System RAM per GB | memory | $0.005 | N/A |
For example, a guaranteed-request hardware configuration of 4 Tesla V100 NVLINK GPUs with 32 Intel Xeon Scalable vCPU and 128GB of RAM would look something like this:
```yaml
containers:
  - name: v100-example
    resources:
      requests:
        cpu: 32
        memory: 128Gi
        nvidia.com/gpu: 4
      limits:
        cpu: 32
        memory: 128Gi
        nvidia.com/gpu: 4
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values:
                - Tesla_V100_NVLINK
```
In the above example, the cost per hour of the instance would be:

Instance Configuration:
- 4 NVIDIA Tesla V100 for NVLINK
- 32 vCPU
- 128Gi System RAM

Instance Cost:
- Tesla_V100_NVLINK -> $0.80 * 4 = $3.20
- vCPU -> $0.01 * 32 = $0.32
- System RAM -> $0.005 * 128 = $0.64

Total: $4.16 per hour

CPU Only Instance Resource Pricing
Instances without a GPU attached are configurable in combinations of vCPU and system RAM. For these instances, system RAM is included in the vCPU price. Combinations can be configured in multiples of the following:

| CPU Type | Label | RAM per vCPU | Cost per vCPU per Hour |
| -------- | ----- | ------------ | ---------------------- |
| AMD Epyc Milan | amd-epyc-milan | 4GB | $0.035 |
| AMD Epyc Rome | amd-epyc-rome | 4GB | $0.030 |
| Intel Xeon Scalable | intel-xeon-scalable | 4GB | $0.030 |
| Intel Xeon v4 | intel-xeon-v4 | 4GB | $0.020 |
| Intel Xeon v3 | intel-xeon-v3 | 4GB | $0.0125 |
An example configuration requesting 6 AMD Epyc Rome vCPU with 24GB of RAM would look like:
```yaml
containers:
  - name: epyc-example
    resources:
      requests:
        cpu: 6
        memory: 24Gi
      limits:
        cpu: 6
        memory: 24Gi
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.coreweave.cloud/cpu
              operator: In
              values:
                - amd-epyc-rome
```
In the above example, the cost per hour of the instance would be:

Instance Configuration:
- 6 AMD Epyc Rome vCPU
- 24Gi System RAM

Instance Cost:
- AMD Epyc Rome vCPU -> $0.03 * 6 = $0.18

Total: $0.18 per hour
Public IP Addresses
IP Addresses are billed at $4.00 per IP per month. For periods of use shorter than one month, this charge is pro-rated by the minute, like all other billing. If a Public IP Address is assigned to an instance and the instance is not running, billing continues to accrue for the reserved Public IP Address.
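For context, a Public IP Address is typically reserved by exposing a workload through a standard Kubernetes Service of type LoadBalancer. The sketch below is illustrative only; the service name, selector, and ports are assumptions, and any CoreWeave-specific annotations are omitted:

```yaml
# Illustrative sketch: a standard Kubernetes LoadBalancer Service that
# reserves one Public IP, billed at $4.00 per month pro-rated by the minute.
# The name, selector, and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: example-public-service
spec:
  type: LoadBalancer
  selector:
    app: v100-example
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
```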
Billing Periods
All CoreWeave Cloud billing periods cover the calendar month (i.e., 1 January 12:00 AM UTC through 1 February 12:00 AM UTC).