Use scrontab to Schedule Recurring Batch Jobs
`scrontab` is a utility designed for scheduling recurring Slurm batch jobs in a High-Performance Computing (HPC) Slurm cluster. It uses syntax similar to `cron`, but has additional functionality specific to Slurm, making it a suitable option for scheduling complex and resource-intensive jobs.
`scrontab` uses the `sbatch` mechanism to submit Slurm jobs at specified intervals, where they are then managed and executed by the Slurm workload manager. The Slurm workload manager dynamically allocates resources, queues jobs, and distributes jobs across the cluster.
Preliminary steps
- Enable `scrontab` in the cluster by adding `ScronParameters=enable` to the `slurmConfig.extraConfig` section. See the Slurm parameter reference for a list of available parameters.
- Enable the additional syscalls required by `scrontab` by adding the "SYS_ADMIN" security context capability. See the Slurm parameter reference.
- Connect to the login node. For detailed instructions, see Connect to the Slurm Login Node.
- Ensure that scripts intended to be used with `scrontab` are executable by running `chmod +x <script>`.
- Ensure that scripts intended to be used with `scrontab` are in a shared location, as shown in the example after this list.
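For example, a minimal preparation sketch (the shared path `/shared/scripts` is an assumption; substitute the shared filesystem location used on your cluster):

```
# Copy the job script to a shared location and make it executable.
# /shared/scripts is a hypothetical path; adjust for your cluster.
$ cp my_job.sh /shared/scripts/my_job.sh
$ chmod +x /shared/scripts/my_job.sh
```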
Edit the scrontab file
To edit the `scrontab` file, execute the following command:
```
$ scrontab -e
```
If the `scrontab` file does not already exist, `scrontab -e` will provide a default, unconfigured example for you to edit.
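For instance, a minimal entry might look like the following sketch (the schedule and script path are placeholders):

```
# Run the shared script every day at 02:00.
0 2 * * * /shared/scripts/my_job.sh
```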
scrontab behavior and considerations
`scrontab` behaves differently than `cron` and `#SBATCH` directives in several ways. To ensure a smooth scheduling experience, remember the following concepts:
- Options set using `#SCRON` will be reset after each `scrontab` entry.
- The entered times specify when a job becomes eligible to run. The actual execution time depends on Slurm's scheduler and resource availability.
- `#SCRON` directives configure the Slurm job when it is submitted. Thus, Slurm is likely to ignore `#SBATCH` directives within the script itself.
- When `scrontab` jobs are canceled, their future executions are also canceled. When an entire `scrontab` file is deleted using `scrontab -r`, running jobs which were defined in that file will complete but will not be rescheduled afterwards.
- `scrontab` jobs retain their `JobID` across executions. If a job is still running when its next scheduled time arrives, a new instance will not be created (see the example after this list).
- Jobs can be rescheduled based on the times specified in the `scrontab` entries using `scontrol requeue <JobID>`.
- The time specified in `scrontab` is relative to the Slurm controller's timezone, not your local time.
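To review the installed entries and confirm the retained `JobID`, you can list the file and query the queue (a minimal sketch using standard Slurm tools):

```
# Print the current scrontab file.
$ scrontab -l

# Recurring scrontab jobs appear here with a fixed JobID.
$ squeue -u $USER
```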
Apply #SCRON directives
To apply `#SCRON` options to an entry, add a line beginning with `#SCRON` before the given entry. Options set with `#SCRON` apply only to the single `scrontab` entry that follows.
```
#SCRON --your-options
```
Most `#SBATCH` directives are also available to use as `#SCRON` directives. For a complete list of options, see SchedMD's `sbatch` documentation.
Options set using `#SCRON` are reset after each `scrontab` entry.
The following example options are frequently useful:
- `#SCRON --chdir=<dir>` sets the directory the job will be executed in.
- `#SCRON --time=<time>` sets the time limit for the job.
- `#SCRON --output=<file>` sets the file location for capturing `stdout` and `stderr` outputs. The default location for these is `/root/slurm-<JobID>.out`. See the sketch after this list for these options in context.
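A hypothetical entry combining these options might look like the following sketch (the paths, time limit, and schedule are placeholders):

```
# Illustrative entry; all paths and times are placeholders.
#SCRON --chdir=/shared/project
#SCRON --time=01:00:00
#SCRON --output=/shared/logs/report.out
30 1 * * * /shared/project/generate_report.sh
```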
`#SCRON` directives configure the Slurm job when it is submitted. Because of this, `#SBATCH` directives defined within the running script are not parsed at submission time, and Slurm ignores them.
Scheduling
The entered time specifies when a job becomes eligible to run. The actual execution time depends on Slurm's scheduler and resource availability.
The format for defining the time and date is the same as in `cron`. Times are defined on one line, using five fields to define the minute, hour, day of month, month, and day of week, respectively.
The format is as follows:
```
#SCRON --optional-sbatch-directives
<minute> <hour> <day of month> <month> <day of week> /path/to/script.sh
```
The time specified in `scrontab` is relative to the Slurm controller's timezone.
Set the values for each field according to the tables below:
| Field | Values |
|---|---|
| `<minute>` | 0-59 |
| `<hour>` | 0-23 |
| `<day of month>` | 1-31 |
| `<month>` | 1-12 or Jan-Dec |
| `<day of week>` | 0-7, where 0 and 7 are Sunday |
In addition to these values, you may also use special characters to control scheduling behavior:
| Special Character | Name | Behavior |
|---|---|---|
| `*` | Wildcard | Matches all values for that field |
| `,` | Comma | Specifies a list of values |
| `-` | Hyphen | Specifies a range of values |
| `/` | Slash | Specifies step values |
The Examples section below demonstrates the syntax for some common scheduling patterns.
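As a quick illustration, a few common patterns are shown below (the script paths are placeholders):

```
# Every 15 minutes:
*/15 * * * * /shared/scripts/poll.sh

# Mondays at midnight:
0 0 * * 1 /shared/scripts/weekly.sh

# 08:30 on the first day of each month:
30 8 1 * * /shared/scripts/monthly.sh
```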
`scrontab` also includes aliases for common settings. For example, the `@hourly` alias will schedule a job to become eligible at the first minute of each hour. For more examples, see SchedMD's `scrontab` documentation.
Rescheduling
To skip the next run of a `scrontab` job and reschedule it for the next available time, use `scontrol requeue`, as shown below:
```
$ scontrol requeue <job_id>
```
`scrontab` assigns a single job ID to all occurrences of a given job. Thus, cancelling one run of a job will also cancel all future runs of the job, and the definition of the cancelled job will be commented out in the user's `scrontab` file.
Examples
This section includes examples of `scrontab` scheduling options and demonstrates how they fit with `#SCRON` directives.
Recording timestamped job statistics
The following code block records data for selected jobs every other hour, and saves the log files to a specified location:
```
# Save data on specific jobs every other hour.
#SCRON --job-name "log_jobs"
0 */2 * * * sacct --jobs=<list> --format=JobID,AllocCPUS,ConsumedEnergy,NTasks > /logs/jobs/$(date +\%Y\%m\%d-\%H\%M).log
```
In this section, we will examine each element of this code block in detail.
#SCRON directives
This command begins with a `#SCRON` directive:
```
#SCRON --job-name "log_jobs"
```
`--job-name` specifies the name for the Slurm job. In this example, we name the job `"log_jobs"`. This name will appear in Slurm's accounting and queueing tools, including `sacct` and `squeue`, making the job easier to track and identify.
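For example, you can later filter on this name when querying the queue or the accounting records (a sketch; both commands accept a `--name` filter):

```
$ squeue --name=log_jobs
$ sacct --name=log_jobs
```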
Scheduling
Next, the command specifies the scheduling behavior of this job:
```
0 */2 * * *
```
A value of `0` in the `<minute>` field specifies that the job should run exactly on the hour.
A value of `*/2` in the `<hour>` field specifies that the job will execute every other hour: `*` indicates "every hour", and adding `/2` limits this to every second occurrence, that is, every other hour.
The `*` in the `<day of month>`, `<month>`, and `<day of week>` fields specifies that this job will run every day of every month.
Slurm command
After the scheduling specifications, we have the actual command to be executed as a Slurm job. In this example, we use `sacct`. The `sacct` command is a Slurm CLI utility that provides detailed information about past jobs, including resource usage, status, and more.
```
$ sacct --jobs=<list> --format=JobID,AllocCPUS,ConsumedEnergy,NTasks
```
The `--jobs=<list>` option for `sacct` specifies the jobs to retrieve information about. Replace `<list>` with the job IDs you would like to query, either by listing specific job IDs or by providing a range of job IDs.
To list specific jobs, use a comma-separated list: `--jobs=123,456,789`
To target a range of jobs, specify the beginning and end of the range and place a `-` between them: `--jobs=123-789`
The `--format=` option specifies which fields of information you want `sacct` to provide for each job. The names of these fields correspond to the column names in `sacct` output.
In this example, we use `--format=` to direct `sacct` to return the following information:
| Field | Content |
|---|---|
| `JobID` | The unique identifier for the job. |
| `AllocCPUS` | The number of CPUs allocated to the specified job. |
| `ConsumedEnergy` | The amount of energy consumed by the job, in joules. To see this metric, you must have energy accounting enabled in your Slurm configuration. |
| `NTasks` | The number of tasks, or processes, within the job. |
Save the logs to a specified location
To save the requested logs to a specified location, use `>` to redirect the output of `sacct` to a specified directory, and use the `date` command to name the log file with the current date and time.
```
$ sacct --jobs=<list> --format=JobID,AllocCPUS,ConsumedEnergy,NTasks > /logs/jobs/$(date +\%Y\%m\%d-\%H\%M).log
```
The following table defines the options used to generate the filename:
| Option | Description |
|---|---|
| `%Y` | The full year. |
| `%m` | The month (01-12). |
| `%d` | The day of the month (01-31). |
| `%H` | The hour (00-23). |
| `%M` | The minute (00-59). |
| `\` | An escape character, used before each `%` to prevent `scrontab` from interpreting `%` as a newline character. |
| `.log` | The file extension. |
If this job ran at 16:00 (4:00 PM) on June 26, 2025, the filename would be `/logs/jobs/20250626-1600.log`.
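You can preview the timestamp portion of the filename by running the `date` command interactively; outside of `scrontab`, the backslash escapes are not required:

```
# Prints, for example, 20250626-1600 when run at 16:00 on June 26, 2025.
$ date +%Y%m%d-%H%M
```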
Using time shortcuts and additional sbatch options
```
# Run a 30-minute job daily at 4:00 PM with a minimum of two nodes, and save the stdout and stderr
# outputs to a specific location.
#SCRON --chdir /project-location
#SCRON --time 0:30:00
#SCRON --nodes 2
#SCRON --output /output-location/output.log
@teatime ./script.sh
```
`@teatime` is an alias provided by `scrontab`. It specifies that the job will become eligible at 16:00 (4:00 PM) each day.
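The same daily 16:00 schedule can also be written explicitly using the five time fields:

```
0 16 * * * ./script.sh
```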
Automating node management
```
# Every Saturday at 6:00 AM, put nodes in drain state for maintenance.
0 6 * * 6 scontrol update nodename=slurm-node-[000-009] state=drain reason="maintenance"
```
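A complementary entry could return the same nodes to service once the maintenance window has ended (the Sunday 6:00 AM schedule is an assumption; `state=resume` is the standard `scontrol` state for undraining nodes):

```
# Every Sunday at 6:00 AM, return the nodes to service after maintenance.
# The schedule is an assumption; adjust it to match your maintenance window.
0 6 * * 0 scontrol update nodename=slurm-node-[000-009] state=resume
```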