Monitor Slurm node states
Learn about the different states a Slurm node can be in, and how to check the state of a node.
Monitoring node and job states is a crucial component of workload management. CoreWeave performs regular health checks on all nodes in a cluster, but proactively monitoring the health of your own cluster can help with early detection of issues that could result in failed jobs and underutilized hardware.
There are several options for monitoring Slurm node states. CoreWeave's Slurm Cluster Grafana dashboard is a graphical option that includes time-series graphs tracking nodes by state. Slurm also has built-in monitoring tools, sinfo and scontrol, which can be run directly in the Slurm login pod shell and customized to provide your desired output.
Slurm node states versus job states
Slurm node states describe the availability and health of the compute hardware in the cluster, while Slurm job states track the lifecycle of a specific workload initiated by a user in the cluster.
The state of a Slurm node indicates whether the node is healthy and available to run workloads. For example, an IDLE Slurm node is healthy and available to run a Slurm job. An ALLOCATED Slurm node is healthy, but is already in use and is not currently available for additional workloads. In contrast, a Slurm node in the DOWN or DRAINED state is not available for new jobs, possibly due to an issue discovered during a routine node health check.
Slurm node states can provide context about Slurm job states. When a workload is working as expected, the Slurm node may be in the ALLOCATED state while the Slurm job on that node is RUNNING. If a Slurm job is stuck in the PENDING state, checking the node state may reveal that the job is on a DOWN node.
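As a sketch of how the two relate in practice, node states from sinfo can be joined onto job states from squeue by node name. The node names, job IDs, and file paths below are hypothetical sample data standing in for live command output:

```shell
# On a live cluster, the inputs would come from:
#   sinfo  -h -N -o "%N %T"   -> node name, node state
#   squeue -h -o "%i %T %N"   -> job id, job state, node name
# Hypothetical sample data stands in for those commands here:
printf 'node-01 allocated\nnode-02 down\n' > /tmp/nodes.txt
printf '101 RUNNING node-01\n102 PENDING node-02\n' > /tmp/jobs.txt

# Annotate each job with the state of the node it is scheduled on:
awk 'NR==FNR { state[$1] = $2; next }
     { print "job " $1 " is " $2 " on " $3 " (node state: " state[$3] ")" }' \
  /tmp/nodes.txt /tmp/jobs.txt
```

In this sample, the PENDING job 102 is immediately explained by its node being in the down state.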
Slurm node states
The following table contains a complete list of possible Slurm node states:
| State | Meaning |
|---|---|
| IDLE | The node is available for use and does not currently have any jobs allocated to it. |
| MIXED or mix | Some of the node's CPUs are ALLOCATED while others are IDLE, or a suspended job is allocated to some of the node's TRES (such as memory). |
| ALLOCATED or alloc | The node is allocated to one or more Slurm jobs. |
| COMPLETING | All jobs on the node have finished, but the node is still in use. In this state, the node is likely running an epilog script. |
| DRAINING or drng | The node is in the process of being drained. Existing jobs will run until completion, but no new jobs will start on this node. |
| DRAINED or drain | The node has been removed from the available compute pool. No new jobs will start on this node. |
| DOWN | The node is offline and unavailable for job scheduling. Jobs on the node may be requeued or canceled. The scheduler will not send any new jobs to this node. |
| BLOCKED | Topology-aware scheduling attempts to place a job within a single block to optimize performance. When a job occupies a block, any unallocated nodes in that block are placed in the BLOCKED state, which prevents new jobs from being scheduled on them. This ensures that the entire block remains available for the current job, which may eventually require the unallocated resources, preventing potential resource fragmentation. This state only applies when using topology-aware scheduling, such as with the --exclusive=topo flag. |
| RESERVED or resv | A user with the appropriate permissions has reserved this node in advance, for a defined period of time. Jobs cannot be scheduled on this node, even if it's currently idle. Jobs that require the reservation must be submitted with the --reservation flag and the name of the reservation. |
| UNKNOWN | The Slurm controller has just started and the node's state has not yet been determined. |
| FAIL | The node is unavailable. No new jobs will be scheduled, and any job that was running on the node when it failed will be terminated. |
| FAILING | The node is in the process of being made unavailable. Jobs currently running on the node will run until completion, but no new jobs will start on the node. |
| FUTURE | The node will be made available at a later time, but is not yet ready to accept jobs. |
| INVAL | The node's configuration is invalid or inconsistent with the Slurm controller. There may be inconsistencies between the slurm.conf file and the slurmd daemon running on the compute node. For troubleshooting information, see nodes in the INVAL state. |
| MAINT | The node has been designated for planned maintenance and is not currently available for job scheduling. |
| REBOOT_REQUESTED | An administrator has scheduled the node for a planned reboot. Any jobs currently running on this node will continue to run until completion. |
| REBOOT_ISSUED | The node is actively in the process of shutting down and restarting. Once the node comes back online and its slurmd daemon successfully registers with slurmctld, this flag is automatically cleared and the node returns to its normal state. |
| PLANNED | The backfill scheduler has planned the node for a higher-priority job. |
| PERFCTRS (NPC) | Network Performance Counters associated with this node are in use, so the node is not usable for any other jobs. |
| POWER_DOWN | An administrator has flagged the node to power down. |
| POWERING_DOWN | The node is in the process of powering down and is not available for job scheduling. |
| POWERED_DOWN | The node is currently powered down and not capable of running any jobs. |
| POWERING_UP | The node is in the process of being powered up. |
Node state names may differ across Slurm versions. Refer to SchedMD's Slurm documentation for more information.
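For a quick tally of how many nodes are in each state, the per-node state column can be piped through sort and uniq. This is a minimal sketch: the printf supplies assumed sample states in place of the live sinfo call shown in the comment:

```shell
# On a live cluster: sinfo -h -N -o "%T" | sort | uniq -c
# Assumed sample per-node states stand in for the sinfo call here:
printf 'idle\nallocated\nallocated\ndrained\n' | sort | uniq -c
```

Each output line shows a count followed by a state name, giving an at-a-glance view of cluster availability.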
Node state flags
Node states may also include flags, indicated by a special character appended to the state. For example, an asterisk * appended to any of the above states indicates that the node is not responding.
| Flag | Meaning |
|---|---|
| * | The node is not responding. No new work will be allocated to this node. If the node remains unresponsive, it will transition to a DOWN state. |
| $ | The node is currently in a reservation with a flag value of "maintenance". |
| - | The node is planned for a higher priority job by the backfill scheduler. |
| ! | The node is pending power down. |
| % | The node is in the process of powering down. |
| ~ | The node is powered off. |
| # | The node is in the process of powering up or is being configured. |
| @ | The node is pending reboot. |
| ^ | The node reboot was issued. |
Nodes in a drain* or down* state have been removed from the cluster and can be ignored. You may see this suffix if you check the state while the Pod is not yet fully connected.
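When scripting against these states, you may want to group on the base state regardless of any flag suffix. A minimal sketch, using assumed sample states and a sed expression that strips a trailing flag character:

```shell
# Strip a trailing flag character (* $ - ! % ~ # @ ^) from each state.
# The printf supplies assumed sample states in place of:
#   sinfo -h -N -o "%t"
printf 'idle~\ndown*\nmixed\ndrained$\n' | sed 's/[*$!%~#@^-]$//'
```

States without a flag, such as mixed above, pass through unchanged.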
Check the state of a Slurm node
There are multiple ways to check the state of a Slurm node. The scontrol command provides a detailed view of node states, while sinfo provides a simple overview. Whichever method you use, you must connect to the Slurm login pod and run all Slurm commands from the Slurm login pod shell.
Connect to the Slurm login pod
To check node states with scontrol or sinfo, first connect to the Slurm login pod. After connecting, you can use the commands detailed below to examine node states.
Run all Slurm commands, including scontrol and sinfo, from within the Slurm login pod shell.
View detailed node states with scontrol
The scontrol command provides a detailed view of the node states.
```shell
scontrol show node <node-name>
```
Replace <node-name> with the name of the node you want to check, or omit it entirely to list all nodes.
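The output of scontrol show node is a series of space-separated Key=Value pairs, so single fields such as State can be pulled out with grep. The echoed line below only mimics the shape of that output (the node name is hypothetical):

```shell
# Extract the State field from a line shaped like scontrol's output:
echo 'NodeName=gpu-node-01 Arch=x86_64 State=IDLE ThreadsPerCore=1' \
  | grep -o 'State=[A-Z_+]*'
```

On a live cluster you would pipe the real command through the same filter: scontrol show node <node-name> | grep -o 'State=[A-Z_+]*'.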
Command aliases for node monitoring
Alternatively, CoreWeave provides some useful aliases as part of the SUNK image.
The sn alias runs the scontrol show node command shown above.
```shell
sn
```
The dl alias lists all nodes that are both idle and in a drain state, along with the reason for the state.
```shell
dl
```
The dl alias runs the following command:

```shell
sinfo -t "drain&idle" -NO "NodeList:45,Comment:10,Timestamp:25,Reason:130" | uniq
```
See an overview of node states with sinfo
sinfo reports the state of partitions and nodes managed by Slurm and provides a wide variety of filtering, sorting, and formatting options.
```shell
sinfo
```
This command will provide the following information:
| Field | Description |
|---|---|
| PARTITION | The name of the partition the node belongs to. |
| AVAIL | State of the partition. |
| TIMELIMIT | The maximum time limit for user jobs in days-hours:minutes:seconds format. |
| NODES | The number of nodes in the partition. |
| STATE | The state of the node. |
| NODELIST | The names of the nodes in this partition. |
Use the --help flag to view the help menu for the sinfo command.
sinfo sends a remote procedure call to slurmctld. Too many calls to the slurmctld daemon can lead to performance loss, and possibly result in a denial of service. Avoid calling sinfo in loops within shell scripts or other programs.
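One way to follow this advice in scripts is to call sinfo once, save the output, and iterate over the saved text. In this sketch, the snapshot variable holds assumed sample output in place of the live call shown in the comment:

```shell
# On a live cluster: snapshot=$(sinfo -h -N -o "%N %t")
# A single call is made up front; the loop below never touches slurmctld.
snapshot='node-01 idle
node-02 alloc'
echo "$snapshot" | while read -r name state; do
  echo "node $name is $state"
done
```

The loop can run as many times as needed without generating additional RPCs to the controller.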
Customize the output format of sinfo
By default, sinfo groups nodes with a common configuration into a single line. The configuration includes partition, state, CPU count, and amount of memory. You can change this behavior using format specifiers.
Slurm supports two options for customizing the output format of sinfo: --format and --Format. The syntax and capabilities of these options are different. This guide focuses on the --format option, as it provides greater flexibility and more granular control over the output format.
Output format options
The syntax for --format is a printf-style format string. Enclose all format and type specifiers in double quotes ", and attach them to the --format flag with an = sign, as shown below:
```shell
sinfo --format="%<.><size><type><suffix>"
```
See the following table for descriptions of each element of the above format string:
| Format specifier | Description |
|---|---|
| % | Indicates the start of a format specifier. |
| . | Place a period before the <size> to right-justify the field (%.10P). Omit the period for the default, left-justified output (%10P). |
| <size> | The minimum field width. If the actual text is shorter than the specified width, the field will be padded with spaces to meet this width. If the text is longer, the field will expand to fit it. |
| <type> | The type specifier character that selects which field to display. See the complete list of type specifiers below. |
| <suffix> | Specifies a string to append to the field. |
Additional format options are detailed in the following table:
| Flag | Description |
|---|---|
| --format or -o | Full customization of the output format. Allows you to specify a custom output format string to display specified fields in a specified order. |
| --exact or -e | Prevents node grouping unless the nodes are exactly identical in every configuration aspect. Lists each node on a separate line, even if they're in the same partition or state. |
| --Node or -N | Node-oriented output; displays one line per node instead of one line per partition. This effectively prevents all grouping and provides a granular view of every node in the cluster, each on its own line. |
| --states or -t | Lists only nodes with the specified states, using the state names shown above. To view multiple states, use a comma-separated list. |
| --partition or -p | Specifies the partition to display information for. |
| --noheader or -h | Omits the header from the output. |
| --help | Display the help menu for the sinfo command. |
Format string syntax example
Here's an example of how to use the --format flag to customize the output format of sinfo:
```shell
sinfo --format="%10P %20N %10T"
```

- %10P: The partition name, with a width of 10 characters.
- %20N: The node list, with a width of 20 characters.
- %10T: The state of the node, in extended format, with a width of 10 characters.
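Because each field has a minimum width, this output stays column-aligned and is straightforward to post-process. In the sketch below, printf mimics one data line of that fixed-width output (the partition and node names are hypothetical):

```shell
# Mimic one data line of sinfo --format="%10P %20N %10T" output,
# then pick out the partition and state columns with awk:
printf '%-10s %-20s %-10s\n' batch 'node-[001-004]' idle \
  | awk '{print $1 ": " $3}'
```

On a live cluster, the same awk filter would be applied to the real sinfo output, typically with --noheader added so only data lines are processed.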
Complete list of type specifiers
Use field type specifiers to indicate the information you want to display.
Combine field type characters with format specifier flags to customize the output format.
There are numerous type specifiers available to use in the format string. Note that type specifiers are case-sensitive.
| Type specifier | Description |
|---|---|
| %all | Displays all fields. |
| %a | State/availability of a partition. |
| %A | Number of nodes by state in the format "allocated/idle". Do not use this with a node state option (%t or %T), or the different node states will be placed on separate lines. |
| %b | Features currently active on the nodes. See also %f. |
| %B | The maximum number of CPUs per node available to jobs in the partition. |
| %c | The number of CPUs per node. |
| %C | Number of CPUs by state in the format "allocated/idle/other/total". Do not use this with a node state option (%t or %T), or the different node states will be placed on separate lines. |
| %d | Size of temporary disk space per node, in megabytes. |
| %D | Number of nodes. |
| %e | Total memory, in MB, currently free on the node as reported by the operating system. This value is for informational use only and is not used for scheduling. |
| %E | The reason a node is unavailable (down, drained, or draining states). |
| %f | Features available on the nodes. See also %b. |
| %g | Groups which may use the nodes. |
| %G | Generic resources associated with the nodes. |
| %h | Print the OverSubscribe setting for the partition. |
| %H | Print the timestamp of the reason a node is unavailable. |
| %i | Print the name of the reservation for a node in an advanced reservation. |
| %I | Partition job priority weighting factor. |
| %l | Maximum time for any job in the format days-hours:minutes:seconds. |
| %L | Default time for any job in the format days-hours:minutes:seconds. |
| %m | Size of memory per node, in megabytes. |
| %M | PreemptionMode. |
| %n | List of node hostnames. |
| %N | List of node names. |
| %o | List of node communication addresses. |
| %O | CPU load of a node, as reported by the operating system. |
| %p | Partition scheduling tier priority. |
| %P | Partition name, followed by * for the default partition. See also %R. |
| %r | Only user root may initiate jobs, "yes" or "no". |
| %R | Partition name. See also %P. |
| %s | Maximum job size, in nodes. |
| %S | Allowed allocating nodes. |
| %t | State of nodes, in compact format. |
| %T | State of nodes, in extended format. |
| %u | Print the name of the user who set the reason a node is unavailable. |
| %U | Print the name and UID of the user who set the reason a node is unavailable. |
| %v | Print the version of the current slurmd daemon. |
| %V | Print the cluster name if running in a federation. |
| %w | Scheduling weight of the nodes. |
| %X | Number of sockets per node. |
| %Y | Number of cores per socket. |
| %z | Extended processor information: number of sockets, cores, threads per node. |
| %Z | Number of threads per core. |