December 2022

❄️ New on CoreWeave Cloud this month:

Welcome NVIDIA HGX H100s to the CoreWeave fleet! 💪

CoreWeave's infrastructure has always been purpose-built for large-scale GPU-accelerated workloads. Since the beginning, CoreWeave Cloud has been specialized to serve the most demanding AI and machine learning applications. So it only makes sense that CoreWeave will soon be one of the only Cloud platforms in the world offering NVIDIA's most powerful end-to-end AI supercomputing platform.

NVIDIA HGX H100s enable...

  • up to seven times higher performance for high-performance computing (HPC) applications,
  • up to nine times faster AI training on large models,
  • and up to thirty times faster AI inference than the NVIDIA HGX A100!

This speed, combined with the lowest NVIDIA GPUDirect network latency in the market with the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to "days or hours, instead of months."

HGX H100s will be available in Q1 of 2023!

Launch GPT DeepSpeed Models using Determined AI 🧠

DeepSpeed is an open source deep learning optimization library for PyTorch, designed for low latency and high throughput training while reducing compute power and memory use for the purpose of training large distributed models.

In our new walkthrough, you'll launch a minimal GPT-NeoX DeepSpeed distributed training job without the additional tracking, metrics, and visualization features that Determined AI offers.
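As a rough sketch of what a Determined experiment configuration for such a job might look like, consider the following. The name, entrypoint, GPU count, and config path below are illustrative assumptions, not the walkthrough's exact values:

```yaml
# Hypothetical Determined experiment configuration for a minimal
# GPT-NeoX DeepSpeed job -- names and values are illustrative only.
name: gpt-neox-deepspeed-minimal
resources:
  slots_per_trial: 8          # number of GPUs for the distributed job
searcher:
  name: single                # run a single trial; no hyperparameter search
  metric: loss
  max_length:
    batches: 100
hyperparameters:
  deepspeed_config: ds_config.json   # assumed path to the DeepSpeed JSON config
entrypoint: python train.py          # assumed training script
```

An experiment like this would be submitted with the Determined CLI (`det experiment create`), with Determined handling distributed launch across the requested slots.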

Multi-namespace support 🎡

CoreWeave Cloud now supports multiple namespaces for organizations!

Kubernetes namespaces provide logical separation of resources within a Kubernetes cluster. While it is typical for CoreWeave client resources to run inside a single namespace, there are cases in which more than one namespace within the same organization is required.

Multi-namespace support is enabled by default for all organizations!
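Resources in different namespaces are addressed independently; a manifest simply declares which namespace it belongs to. As a minimal sketch (all names below are placeholders):

```yaml
# Hypothetical Pod deployed into a second namespace within the same organization.
apiVersion: v1
kind: Pod
metadata:
  name: render-worker          # placeholder name
  namespace: tenant-staging    # placeholder second namespace
spec:
  containers:
    - name: worker
      image: nginx:stable      # placeholder image
```

The same workload could run in another namespace of the organization with no changes beyond `metadata.namespace`.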

Accelerated Object Storage

Accelerated Object Storage provides local caching for frequently accessed objects across all CoreWeave data centers. It is especially useful for large-scale, multi-region rendering, or for inference auto-scaling where the same data must be loaded by hundreds or even thousands of compute nodes.
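CoreWeave Object Storage is S3-compatible, so standard S3 tooling can read cached objects by pointing at the appropriate endpoint. As a sketch, an s3cmd configuration might look like the following; the endpoint hostname here is a placeholder, so check the Object Storage docs for your region's actual accelerated endpoint:

```ini
# ~/.s3cfg -- hypothetical example; the endpoint hostname is a placeholder
[default]
access_key = <your-access-key>
secret_key = <your-secret-key>
host_base = accel-object.example.coreweave.com
host_bucket = accel-object.example.coreweave.com
use_https = True
```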

Tip

Import Disk Images from CoreWeave Object Storage

Did you know you can import your own Virtual Disk Images for Virtual Servers right from CoreWeave Object Storage? With the help of our new guide, you can learn how to do just that!
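CoreWeave Virtual Servers are built on KubeVirt, whose Containerized Data Importer (CDI) can pull a disk image directly from S3-compatible storage into a PVC. As an illustrative sketch only (the bucket, object path, secret name, and size below are placeholders; the guide has the exact steps):

```yaml
# Hypothetical CDI DataVolume importing a disk image from Object Storage.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: imported-disk              # placeholder name
spec:
  source:
    s3:
      # Placeholder URL to a QCOW2/raw image in an Object Storage bucket.
      url: https://object.example.coreweave.com/my-bucket/my-image.qcow2
      secretRef: object-storage-credentials   # Secret holding access/secret keys
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 40Gi              # placeholder size for the imported disk
```

Once the import completes, the resulting volume can back a Virtual Server's root disk.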

Introducing CoreWeave CoSchedulers

In machine learning, it is often necessary for every part of a distributed workload to start at the same time. In the context of Kubernetes, this means that all of a workload's Pods must be scheduled and deployed together.

With CoreWeave CoSchedulers, you can ensure that your Pods are deployed all at once, and only when the required resources are available, thereby eliminating the possibility of partial deployments!
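All-or-nothing ("gang") scheduling of this kind is typically expressed by pointing Pods at a co-scheduler and grouping them so that none is placed until all of them fit. The sketch below is modeled on the open-source Kubernetes scheduler-plugins coscheduling plugin; the scheduler name, group label, and images are assumptions, not CoreWeave's exact values:

```yaml
# Hypothetical gang-scheduled group: no Pod starts until all 4 can run.
apiVersion: scheduling.x-k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: trainer
spec:
  minMember: 4                              # schedule only when all 4 Pods fit
---
apiVersion: v1
kind: Pod
metadata:
  name: trainer-0                           # placeholder; one of 4 identical workers
  labels:
    scheduling.x-k8s.io/pod-group: trainer  # assumed pod-group label
spec:
  schedulerName: coscheduler                # assumed co-scheduler name
  containers:
    - name: trainer
      image: my-training-image:latest       # placeholder image
      resources:
        requests:
          nvidia.com/gpu: 8                 # all-or-nothing GPU request
```

If fewer than `minMember` Pods can be placed, none are scheduled, so a distributed job never starts with a partial set of workers.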