A Batch Job Scheduler for Clusters

SLURM is a batch job scheduler for clusters. It allows a user to set up a batch file describing how a program is to be executed in parallel. Once the batch file is submitted, it goes into a queue, waiting until the desired number of processors becomes available, at which point it begins execution. On job completion, the program output is gathered and returned to the user. Until the job is complete, the scheduler allows the user to query its current status.

A user typically logs into a special login node of the cluster, which is intended only for editing, file management, job submission, and other small interactive tasks.

The user wishes to run a parallel program on several processors of the cluster. To do so, the user must create an executable version of the program, write a suitable batch job script describing the job limits and listing the commands to be executed, and then submit the script for processing.

The job script can be thought of as consisting of two parts: commands addressed to the job scheduler, which describe the limits and resources the job needs, and ordinary UNIX commands, which carry out the job itself.

A user preparing a first job script faces several separate issues: what to request from the scheduler, what commands the job should run, and how to submit, monitor, and collect the results of the job.

Local Installation:

At FSU RCC, users of the cluster must apply for an account by going to the RCC web page, choosing the "My Account" item from the menu on the side, and, on the new page, selecting "Sign Up". Accounts are available as general access accounts (open to anyone) and owner-based accounts (requiring authorization from the "owner").

Any FSU faculty member can get a general access account; any researcher can also get a general access account if they have an FSU faculty sponsor.

Some researchers support the system, and in return have priority access to components of the system - they are, in essence, "owners" of part of the system. You can get an account on an owner-based component at the discretion of the owner.

Once you have applied for an account and it has been approved, you can access the system using an ID (which may or may not be the same as your FSU ID) and a password associated with the RCC system. To log into the system, use ssh and the address of the component to which you have been assigned. For instance, I log in using the command:

        ssh -Y
with the -Y switch enabling X window graphics, which are needed, for instance, if you want to work interactively with MATLAB.

To transfer files between a local system and the RCC, you need the sftp command. I do this in a second window, so that I have interactive access in my ssh window, while file transfers occur in the sftp window. The command that makes the connection for me is:

and I put files from my local system to the RCC by a command like
        put fred.txt
and get files from the RCC back to the local system by
        get jeff.txt
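Besides the interactive put and get commands above, sftp can also be driven non-interactively from a batch file of commands. The following is a sketch; the batch file name is a hypothetical choice, and the final step needs a real remote host and username:

```shell
# Prepare a batch file of sftp commands.
# (transfer.sftp is a hypothetical name; the file names match the examples above.)
cat > transfer.sftp << 'EOF'
put fred.txt
get jeff.txt
bye
EOF

# Run the transfers non-interactively with sftp's batch mode:
#   sftp -b transfer.sftp username@hostname
```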

The main reason for using the cluster is to be able to compute in batch mode - one or many jobs, submitted to a queue, to run "eventually". You can log out after you submit jobs, and log in later at your convenience to collect the output from completed jobs. Parallel programs can be run on multiple processors this way. MATLAB programs, whether parallel or sequential, can also be submitted to the batch queue.

The commands that make a job run in the batch queue form a job script. The first part of the script contains commands to the job scheduler. These commands specify the maximum time limit, the number of processes, the particular queue you will use, and so on. While general users might use the "classroom" queue, I access the queue "gunzburg_q" associated with my research group.

After the scheduler commands come a sequence of commands that you might imagine typing in interactively; that is, these might be the normal sequence of UNIX commands you would issue to run a particular job.
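As a sketch of these two parts together, a minimal job script might look like the following; the job name, partition, time limit, and commands are illustrative assumptions, not values from any particular installation:

```shell
#!/bin/bash
# Part 1: commands to the job scheduler.
# These are read by sbatch, but are ordinary comments to the shell.
#SBATCH -J hello_job            # a name for the job (hypothetical)
#SBATCH -p classroom            # the queue (partition) to use
#SBATCH -t 00:10:00             # maximum time limit: 10 minutes
#SBATCH -n 1                    # number of processes requested

# Part 2: ordinary UNIX commands, as you might type them interactively.
echo "Job started on $(hostname) at $(date)"
echo "Working directory: $(pwd)"
```

Because the #SBATCH lines are shell comments, such a script can also be run directly with bash as a quick check before submitting it.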

Briefly, once you have a job script file, you can first determine what partitions are available on the system by

        sinfo

Then you can submit your job with the sbatch command, followed by the name of your script,
or submit to a specific partition by:
        sbatch -p myqueue
You can check the status of all jobs by the command
        squeue
or the status of your jobs by the command
        squeue -u jburkardt
and, seeing that your job has the identifying number 1750, you could cancel your job by
        scancel 1750
or, if your job runs to completion, you should find an output file in your RCC directory containing the output, or the error messages that explain why you didn't actually get any output.


The computer code and data files made available on this web page are distributed under the GNU LGPL license.

Related Data and Programs:

MATLAB_COMMANDLINE, MATLAB programs which illustrate how MATLAB can be run from the UNIX command line, that is, not with the usual MATLAB command window.

MATLAB_COMPILER, MATLAB programs which illustrate the use of the MATLAB Compiler, which allows you to run a MATLAB application outside the MATLAB environment.

MATLAB_PARALLEL, examples which illustrate local parallel programming on a single computer with MATLAB's Parallel Computing Toolbox.

MOAB, examples which illustrate the use of the MOAB job scheduler for batch execution of jobs on a computer cluster.

MPI, C programs which illustrate the use of the MPI application program interface for carrying out parallel computations in a distributed memory environment.

OPENMP, FORTRAN90 programs which illustrate the use of the OpenMP application program interface for carrying out parallel computations in a shared memory environment.

Source Code:

ENVIRON is a batch job script that simply queries the values of certain environment variables.

HELLO is a batch job script that compiles and runs a program. The script also "cleans up" after itself; that is, it discards the executable program once the job is complete.

HELLO_OPENMP illustrates the compilation and execution of a program that includes OpenMP directives.

HELLO_MPI illustrates the compilation and execution of a program that includes MPI directives.

JOB_ARRAY illustrates how a single SLURM sbatch command can submit an array of jobs, which use the same batch file, but which are each given a different task id. In this example, the batch file was invoked using the command

        sbatch -a 0-3
meaning that the file was essentially submitted 4 times, with the environment variable $SLURM_ARRAY_TASK_ID set to 0, 1, 2 and 3.
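A script used this way can read its own task id from the environment. The following sketch defaults the id to 0 (an assumption made only so the script can be tried outside the scheduler, where SLURM_ARRAY_TASK_ID is unset); the job name, time limit, and input file naming are likewise hypothetical:

```shell
#!/bin/bash
#SBATCH -J array_job            # a name shared by every task in the array
#SBATCH -t 00:05:00             # time limit for each task

# Each task of the array gets its own value of SLURM_ARRAY_TASK_ID.
# (Defaulting to 0 is an assumption, so the script also runs outside SLURM.)
TASK=${SLURM_ARRAY_TASK_ID:-0}

echo "This is task $TASK of the job array."

# A task might use its id to select its own input file, for instance:
INPUT="input_$TASK.txt"
echo "This task would read from $INPUT"
```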

POWER_TABLE shows how a MATLAB program can be run through the scheduler. We prepare a file of input commands, and invoke MATLAB on the commandline.
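That pattern might be sketched as follows; the input file name and the MATLAB command-line flags are plausible assumptions for illustration, not necessarily the ones used in POWER_TABLE, and the MATLAB step only works where MATLAB is installed:

```shell
#!/bin/bash
#SBATCH -J power_table          # job name (hypothetical)
#SBATCH -t 00:15:00             # time limit

# Prepare a file of MATLAB input commands.
# (power_table_input.m is a hypothetical name for this sketch.)
cat > power_table_input.m << 'EOF'
for i = 1 : 5
  fprintf ( '%d  %d\n', i, i^2 );
end
exit
EOF

# Invoke MATLAB on the command line, without the usual command window.
if command -v matlab > /dev/null; then
  matlab -nodisplay -nosplash < power_table_input.m
else
  echo "MATLAB not found; would run: matlab -nodisplay < power_table_input.m"
fi
```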

You can go up one level to the EXAMPLES source code page.

Last revised on 17 November 2015.