Use GPUDirect RDMA with InfiniBand
Learn how to use GPUDirect with InfiniBand
In this guide, learn how to use GPUDirect RDMA with InfiniBand at CoreWeave, and how to test it with NCCL.
Prerequisites
CoreWeave supports GPUDirect RDMA over InfiniBand for some instance types.
To use this feature, you must:
- select a Node Pool with InfiniBand support,
- install NCCL and the OFED driver in the Pod image, and
- configure the Pods to use GPUDirect RDMA.
Select a Node Pool with InfiniBand support
To use GPUDirect RDMA, make sure the Node Pool has Nodes with InfiniBand, as shown in our list of instance types. All Nodes with InfiniBand have the required Mellanox OFED kernel drivers pre-installed.
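To confirm that a Node in the Node Pool exposes InfiniBand to Kubernetes, you can inspect its advertised resources. The check below is a minimal sketch, assuming `kubectl` access to the cluster; the Node name is a placeholder.

```bash
# Look for the rdma/ib resource in the Node's Capacity and Allocatable sections.
# <node-name> is a placeholder; substitute a Node from the InfiniBand-enabled Node Pool.
kubectl describe node <node-name> | grep -i rdma
```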
Install NCCL and the OFED driver
Install the NVIDIA Collective Communications Library (NCCL) and NVIDIA OFED driver in the Pod image. These are required for GPUDirect RDMA over InfiniBand. CoreWeave publishes a repository of Dockerfiles with NCCL and the required OFED drivers pre-installed, which you can use for testing or as templates for your own distributed training workloads.
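If you prefer to build your own image rather than start from CoreWeave's Dockerfiles, the sketch below shows the general shape of the install step on a Debian or Ubuntu based CUDA image. The package names are illustrative assumptions, and the NVIDIA and Mellanox apt repositories are assumed to be configured already; CoreWeave's published Dockerfiles remain the authoritative reference.

```bash
# Illustrative install step for a custom image (assumes an Ubuntu-based CUDA image
# with the NVIDIA CUDA and Mellanox apt repositories already configured).
apt-get update
# NCCL runtime library and development headers
apt-get install -y libnccl2 libnccl-dev
# InfiniBand verbs / RDMA user-space libraries, plus basic diagnostics
apt-get install -y libibverbs1 librdmacm1 ibverbs-utils perftest
```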
Configure the Pods
The Pods must be configured to use GPUDirect RDMA over InfiniBand. Follow these steps:
- Set the value of `spec.containers.resources.requests.rdma/ib` to `1`. This value does not indicate the number of InfiniBand devices requested; it's used as a boolean to schedule Pods onto servers with InfiniBand support.

  Kubernetes schedules resources through `requests` and `limits`. When only `limits` are specified, the `requests` are set to the same amount as the limit. To learn more about container resource management on Kubernetes, see the official Kubernetes documentation.

  See the full YAML example below for a reference showing how to set the `rdma/ib` value in the Pod spec for both `requests` and `limits`.

- Configure the Pods to use GPUDirect RDMA by setting these environment variables:

  - `NCCL_SOCKET_IFNAME`: The network interface NCCL uses for bootstrap and socket communication. In the examples below, this is set to the Ethernet interface, `eth0`.
  - `NCCL_IB_HCA`: The InfiniBand host channel adapter (HCA) to use for NCCL communication.
  - `UCX_NET_DEVICES`: The network devices to use for UCX communication. This should be set to the InfiniBand device names.

  Examples for Kubernetes and Slurm are in the sections below.

- (Optional) Enable extended logging with the `NCCL_DEBUG` environment variable.

  To increase the verbosity of NCCL's logging, set the `NCCL_DEBUG` environment variable to `INFO` for extra debug information. This can help diagnose issues with RDMA support, but it increases the log file size, so it should be disabled when testing is complete. See `NCCL_DEBUG` in the NCCL documentation for more logging options.
Kubernetes example
When deploying a Kubernetes Pod in the cluster, set the `rdma/ib` value in the Pod spec for both `requests` and `limits`, and set the required environment variables, as shown in the example below. Remove `NCCL_DEBUG` unless extended logging is needed.
```yaml
# [...]
spec:
  containers:
    - name: example
      resources:
        requests:
          cpu: 10
          memory: 10Gi
          rdma/ib: 1
          nvidia.com/gpu: 8
        limits:
          cpu: 10
          memory: 10Gi
          rdma/ib: 1
          nvidia.com/gpu: 8
      env:
        - name: NCCL_SOCKET_IFNAME
          value: eth0
        - name: NCCL_IB_HCA
          value: ibp
        - name: UCX_NET_DEVICES
          value: ibp0:1,ibp1:1,ibp2:1,ibp3:1,ibp4:1,ibp5:1,ibp6:1,ibp7:1
        # Remove NCCL_DEBUG unless debug logging is needed
        - name: NCCL_DEBUG
          value: INFO
# [...]
```
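After the Pod starts with `NCCL_DEBUG=INFO`, one way to confirm that NCCL initialized its InfiniBand transport is to search the Pod's logs for NCCL's informational messages. A minimal sketch, assuming a placeholder Pod name:

```bash
# NCCL_DEBUG=INFO prefixes its log lines with "NCCL INFO".
# <pod-name> is a placeholder; substitute the name of your Pod.
kubectl logs <pod-name> | grep 'NCCL INFO'
```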
Slurm example
When deploying a Slurm job, set the required environment variables as shown in the example below. Remove `NCCL_DEBUG` unless extended logging is needed.
```bash
#!/bin/bash
#SBATCH --partition h100
#SBATCH --nodes 16
#SBATCH --ntasks-per-node 8
#SBATCH --gpus-per-node 8
# [...] other SBATCH options, as needed

export NCCL_SOCKET_IFNAME=eth0
export NCCL_IB_HCA=ibp
export UCX_NET_DEVICES=ibp0:1,ibp1:1,ibp2:1,ibp3:1,ibp4:1,ibp5:1,ibp6:1,ibp7:1
# Remove NCCL_DEBUG unless debug logging is needed
export NCCL_DEBUG=INFO
```
Testing with NCCL
CoreWeave provides several sample NCCL test jobs designed for use with MPI Operator or Slurm. These are in the `nccl-tests` repository, which can be used to test GPUDirect RDMA support with InfiniBand. For more information, refer to the testing instructions in the repository.
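As an example of what a test run can look like under Slurm, the sketch below launches the `all_reduce_perf` benchmark from `nccl-tests` across the allocated Nodes. The binary path, message-size flags, and `--mpi=pmix` setting are assumptions that depend on how the tests were built and how the cluster's MPI integration is configured; follow the repository's instructions for the exact commands.

```bash
# Run the NCCL all-reduce benchmark on every allocated task/GPU.
# Assumes nccl-tests has been built locally and Slurm is configured with PMIx;
# adjust the binary path and MPI settings for your environment.
srun --mpi=pmix ./build/all_reduce_perf -b 8 -e 8G -f 2 -g 1
```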