February 03, 2025

CoreWeave Launches GB200 NVL72-Powered Cloud Instances

GB200 NVL72-powered cloud instances are now available in select CoreWeave Regions. These instances combine NVIDIA GB200 Grace Blackwell Superchips in a 72-GPU, NVLink-connected fabric with CoreWeave's managed services, observability, and high-performance networking. They are designed to set a new standard for GenAI development, delivering exceptional performance for training and deploying cutting-edge foundation models at scale.

Key Features

  • Exceptional Performance: Each rack delivers up to 1.4 exaFLOPS of AI compute power, enabling up to 4x faster training of large language models compared to previous-generation GPU instances.
  • Advanced Memory Subsystem: Features 13.5 TB of fifth-generation, NVLink-connected, high-bandwidth GPU memory per rack, allowing for efficient loading and processing of massive datasets.
  • Liquid Cooling: Lowers cost and energy consumption compared to traditional air-cooled systems.
  • High Throughput Storage: Supports CoreWeave's AI Object Storage and Local Object Transfer Accelerator (LOTA) natively, providing high throughput storage for clusters scaling to 100,000+ GPUs.
  • Compatibility: Fully compatible with SUNK (Slurm on Kubernetes) and CoreWeave Observability services for rack-level insights into large training jobs.

Availability

GB200 NVL72-powered instances are available through CoreWeave Kubernetes Service in our US-WEST-01 Region. Additional regions will be added in the coming months. Please contact your CoreWeave account manager or reach out to our sales team to learn more.
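As a rough sketch of how a workload might target these instances through CoreWeave Kubernetes Service: the `nvidia.com/gpu` resource name follows the standard NVIDIA Kubernetes device plugin convention, but the node-selector label and its value below are hypothetical placeholders, not confirmed CoreWeave identifiers — consult the CoreWeave documentation for the actual labels.

```yaml
# Hypothetical Pod spec targeting a GB200 NVL72 node.
# The nodeSelector label/value are illustrative assumptions only.
apiVersion: v1
kind: Pod
metadata:
  name: gb200-training-job
spec:
  nodeSelector:
    gpu.example.com/class: GB200_NVL72   # placeholder label; check CoreWeave docs
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.12-py3
      resources:
        limits:
          nvidia.com/gpu: 4              # request 4 of the rack's 72 GPUs
      command: ["python", "train.py"]
```

Scheduling and topology-aware placement details (e.g., keeping a job within a single NVLink domain) vary by deployment, so treat this as a starting point rather than a production manifest.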

Learn more

For more information, see our GB200 NVL72 product information page.