Outline
This long-form tutorial comprises the pages underneath this section. They are designed to be followed in the order they are numbered. In this tutorial, you will:
- Set up your first Slurm cluster.
- Submit your first training job, designed to introduce you to SUNK, Slurm, and its supporting utilities.
- Submit a more complex training job, which more closely reflects a real-world scenario.
- Monitor training jobs using CoreWeave Grafana.
🚀 What you'll need
Before you start, you must have:
- A working SUNK cluster deployed on CKS with a GPU Node Pool.
- The following tools installed on your local machine:
- Git
- SSH and `scp`
- Basic familiarity with Slurm.
Know before you go
Key concepts
Slurm/SUNK
- Job: A compute workload submitted by a user. It can be a small task, requiring a single CPU for a few seconds, or a large job that requires thousands of CPUs and GPUs for days or even weeks.
- Login nodes: The entry points for users to access the Slurm cluster. In SUNK they are Kubernetes Pods where users prepare data, submit jobs, and check job status. They are not intended for heavy computation and do not typically have a GPU.
- Compute nodes: The machines that actually execute the jobs submitted by users. In SUNK, they are Kubernetes Pods that run `slurmd` and are mapped to physical Nodes in the cluster.
- Syncer: A Kubernetes Pod that synchronizes the Kubernetes state with the Slurm state.
- Controller: A Kubernetes Pod that runs the Slurm controller, `slurmctld`, which also schedules jobs.
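To make the job concept concrete: a job is typically described in a batch script and submitted from a login node with `sbatch`. The script below is a minimal illustrative sketch, not a script from this tutorial; the job name, time limit, and output pattern are made-up values.

```shell
#!/bin/bash
# Minimal Slurm batch script (illustrative; the values below are hypothetical).
#SBATCH --job-name=hello          # name shown in squeue output
#SBATCH --nodes=1                 # run on a single compute node
#SBATCH --time=00:05:00           # wall-clock limit for the job
#SBATCH --output=hello-%j.out     # %j expands to the job ID

srun hostname                     # runs on the allocated compute node
```

Saved as, say, `hello.sbatch`, it would be submitted from a login node with `sbatch hello.sbatch`, and `squeue -u $USER` would then show its status.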
HPC cluster components
- Network: Interconnects nodes for communication and data transfer. A CoreWeave HPC cluster usually has two networks: Ethernet, for normal communications including user sessions and storage, and InfiniBand, used where performance is critical, such as between nodes in a large training job.
- Observability: Metrics that describe the performance of Slurm jobs and nodes, collected and consolidated for monitoring job and cluster performance using tools such as CoreWeave Grafana.
- Storage: Provides space for data and applications. This can be either traditional file storage, or object storage, such as CoreWeave AI Object Storage.
Preinstalled software
The following tools are preinstalled on SUNK login nodes:
- Miniconda: Initialize it in your shell using `/opt/conda/bin/conda init bash`.
- Micromamba: A fast, lightweight alternative to conda. Initialize it in your shell with `micromamba shell init; source ~/.bashrc`.
- Java OpenJDK
- `s3cmd` and `aws cli`, for interacting with object storage. For large object storage transfers, installing and using `rclone` or `s5cmd` in a container or conda environment is recommended. When using `s5cmd` with AI Object Storage, use the CoreWeave fork of `s5cmd`; see Migrate data to AI Object Storage.
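A typical first step on a login node is to initialize Miniconda and create an isolated environment for your project. The sketch below assumes the `/opt/conda` install path stated above; the environment name and Python version are illustrative, not values from this tutorial.

```shell
# Initialize Miniconda (preinstalled at /opt/conda on SUNK login nodes)
# and reload the shell configuration it writes.
/opt/conda/bin/conda init bash
source ~/.bashrc

# Create and activate a project environment.
# "training" and the Python version are illustrative choices.
conda create -n training python=3.11 -y
conda activate training
```

The same pattern works with Micromamba, substituting `micromamba` for `conda` after running its own shell initialization.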
Good to know
- With the exception of SSH or `scp` commands, all commands in this tutorial are run on the Slurm login node.
- Variables in all caps in code examples throughout this tutorial (for example, `USERNAME`) are placeholders. Replace them with your own values when running commands in your own environment.
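For instance, a command shown with `USERNAME` and `LOGIN_NODE` placeholders would be run with both filled in. A quick shell sketch, where both values are made up for illustration:

```shell
# Placeholders (all caps) must be replaced with real values before running.
# Both values below are made up for illustration.
USERNAME=alice
LOGIN_NODE=login-01.example.com
echo "ssh ${USERNAME}@${LOGIN_NODE}"   # prints: ssh alice@login-01.example.com
```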
Do not SSH into compute nodes

SSH is the preferred method to access Slurm login nodes. However, SSHing into Slurm compute nodes directly to run tasks is strongly discouraged: doing so bypasses Slurm and can interfere with running jobs, cause nodes to drain unintentionally, or lead to a temporary loss of resources. SSH into compute nodes only for debugging purposes.
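When you do need an interactive shell on a compute node, request it through Slurm itself so the scheduler tracks the allocation. A minimal sketch using `srun` (the node count and time limit here are illustrative):

```shell
# Request an interactive shell on a compute node through Slurm.
# The scheduler allocates the node and tracks the session,
# unlike a direct SSH connection, which bypasses Slurm entirely.
srun --pty --nodes=1 --time=00:30:00 bash
```

Exiting the shell releases the allocation back to the scheduler.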
Third-party frameworks
Any framework that runs on Slurm or in Linux containers works on SUNK, including PyTorch, TensorFlow, JAX, DeepSpeed, Megatron-LM, and many others.
See popular AI and ML frameworks with links to CKS and SUNK guides.