## Overview
GB200 NVL72-powered cloud instances are now available in selected CoreWeave Regions. These instances combine NVIDIA's GB200 Superchips in a 72-GPU NVLink-connected fabric with CoreWeave's managed services, observability, and high-performance networking. They are designed to set a new standard for GenAI development, delivering exceptional performance for training and deploying cutting-edge foundation models at scale.

## Key features
| Feature | Description |
|---|---|
| Exceptional Performance | Each rack delivers up to 1.4 exaFLOPS of AI compute, enabling up to 4x faster training of large language models compared to previous-generation GPU instances (see the training sketch after this table) |
| Advanced Memory Subsystem | Features 13.5 TB of high-bandwidth GPU memory per rack, connected over fifth-generation NVLink, allowing efficient loading and processing of massive datasets |
| Liquid Cooling | Results in lower cost and energy consumption compared to traditional air-cooled systems |
| High Throughput Storage | Natively supports CoreWeave's AI Object Storage and Local Object Transport Accelerator (LOTA), providing high-throughput storage for clusters scaling to 100,000+ GPUs (see the storage sketch after this table) |
| Compatibility | Fully compatible with SUNK (Slurm on Kubernetes) and CoreWeave Observability services for rack-level insights into large training jobs |
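
The 72-GPU NVLink domain matters most for bandwidth-bound collectives such as the all-reduce at the heart of data-parallel training. Below is a minimal timing sketch, assuming a PyTorch environment with the NCCL backend; the process layout, tensor size, and launch command are illustrative, not CoreWeave-specific.

```python
# Minimal all-reduce timing sketch for an NVLink-connected GPU domain.
# Assumes PyTorch with the NCCL backend, launched via e.g.:
#   torchrun --nproc_per_node=8 allreduce_check.py
# Rank layout and tensor size are illustrative only.
import os

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")  # NCCL uses NVLink where available
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A 1 GiB float32 tensor keeps the collective bandwidth-bound.
    x = torch.ones(256 * 1024 * 1024, device="cuda")

    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    dist.all_reduce(x)  # sum across all ranks in the job
    end.record()
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"all_reduce of 1 GiB took {start.elapsed_time(end):.1f} ms")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Each rank reduces the same 1 GiB tensor, so the measured time reflects the interconnect rather than compute; scaling `--nproc_per_node` toward the full NVLink domain exercises more of the fabric.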
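
Because CoreWeave's AI Object Storage exposes an S3-compatible API, standard S3 clients can read training data directly, with LOTA acceleration applied transparently on the cluster side rather than through a separate client API. A hedged sketch using `boto3`; the endpoint URL, credentials, bucket, and object key below are hypothetical placeholders.

```python
# Sketch: reading a training shard from CoreWeave AI Object Storage via the
# S3-compatible API. Endpoint, credentials, bucket, and key are hypothetical
# placeholders; LOTA caching is transparent to the client.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                     # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

resp = s3.get_object(Bucket="training-data", Key="shards/shard-00000.tar")
payload = resp["Body"].read()
print(f"fetched {len(payload)} bytes")
```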