Interactive Jobs
Overview
Sometimes it is necessary to test or run code in a quick, interactive shell environment. In these cases, batch jobs are inconvenient due to queue times and lack of interactivity, and the Open OnDemand (OOD) graphical options may be too cumbersome when all you need is the command line. Interactive sessions are ideal in these cases.
The term "interactive session" typically refers to jobs run from within the command line on a terminal client. Opening a terminal in an interactive graphical desktop is also equivalent, but these sessions are fixed to the resources allocated to that OOD session. As you'll see below, one has more control over their resources when requesting an interactive session via SSH on a terminal client.
Definition: An interactive session is a terminal session running on a compute node.
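A quick way to confirm where your shell is running is the standard hostname command; the compute node name shown below is illustrative:

$ hostname
i16n1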
Review the node structure of HPC and how to access a login node before continuing.
How to request an interactive session
Clusters
An interactive session can be requested on any of our three clusters: El Gato, Ocelote, and Puma. Because the request to start an interactive session is processed by Slurm, these jobs are subject to the same wait times as batch jobs. Since Puma typically sees the heaviest traffic, we do not recommend requesting an interactive session on this cluster unless you need Puma-specific resources and can tolerate longer wait times.
Want your session to start even faster? Try one or both of the following:
- Switch to El Gato. This cluster shares the same operating system, software, and file system as Puma, so your workflows are often portable across clusters. Ocelote and El Gato standard nodes have 28 and 16 CPUs, respectively, and are often less utilized than Puma, meaning much shorter wait times. Before you run the interactive command, type elgato to switch (see the example following this list).
- Use the account flag. By default, interactive requests a session using the windfall partition. Windfall has lower priority than standard, so these jobs take longer to get through the queue. Including the account flag switches your partition to standard. An example of this type of request:
$ interactive -a YOUR_GROUP
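Combining both tips: switch clusters, then request a standard session. This sketch assumes the command prompt prefix updates to show the active cluster (as in the examples below) and that YOUR_GROUP is replaced with your group's name:

(ocelote) [netid@junonia ~]$ elgato
(elgato) [netid@junonia ~]$ interactive -a YOUR_GROUP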
The 'interactive' command
When you are on a login node, you can request an interactive session on a compute node. This is useful for checking available modules, testing submission scripts, compiling software, and running programs directly from the command line. We provide a built-in shortcut command that lets you quickly and easily request a session by simply entering: interactive
Do not attempt to test code, run programs, or compile software on the login nodes. These resources are shared among all users and such activities can cause significant slowdowns. Instead, these actions can be performed on a compute node via an interactive session accessed by following the directions on this page.
The interactive command is essentially a convenient wrapper for the Slurm command salloc. It can be thought of as similar to the sbatch command, but for interactive jobs rather than batch jobs. When you request a session using interactive, the full salloc command being executed will be displayed for reference.
(ocelote) [netid@junonia ~]$ interactive
Run "interactive -h" for help customizing interactive use
Submitting with /usr/local/bin/salloc --job-name=interactive --mem-per-cpu=4GB --nodes=1 --ntasks=1 --time=01:00:00 --account=windfall --partition=windfall
salloc: Pending job allocation 531843
salloc: job 531843 queued and waiting for resources
salloc: job 531843 has been allocated resources
salloc: Granted job allocation 531843
salloc: Waiting for resource configuration
salloc: Nodes i16n1 are ready for job
[netid@i16n1 ~]$
Notice in the example above how the command prompt changes once your session starts. When you're on a login node, your prompt will show "junonia" or "wentletrap". Once you're in an interactive session, you'll see the name of the compute node you're connected to.
If no options are supplied to the interactive command, your job will automatically run using the windfall partition for one hour with one CPU. To use the standard partition, include the flag "-a" followed by your group's name. To see all the customization options:
(ocelote) [netid@junonia ~]$ interactive -h
Usage: /usr/local/bin/interactive [-x] [-g] [-N nodes] [-m memory per core] [-n ncpus per node] [-Q optional qos] [-t hh::mm:ss] [-a account to charge]
You may also create your own salloc commands using any desired Slurm directives for maximum customization.
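For example, here is a minimal sketch of a manual salloc request that mirrors the wrapper's defaults but asks for four CPUs, a two-hour limit, and the standard partition (YOUR_GROUP is a placeholder for your group's name):

$ salloc --job-name=interactive --nodes=1 --ntasks=4 --mem-per-cpu=4GB --time=02:00:00 --account=YOUR_GROUP --partition=standard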
Commonly used options
Formatting note: Flags with a single dash and a single character do not require an equals sign (for example: "-n 2"), but longer options such as "--partition=windfall" must include the equals sign with no spaces.
Flag | Default value | Description | Example |
---|---|---|---|
-n | 1 | Number of CPUs requested per node | interactive -n 8 requests 8 CPUs |
-m | 4GB | Memory per CPU. Note: memory per CPU is fixed per cluster; total memory can be specified by requesting the appropriate number of CPUs. | interactive -m 5GB |
-a | none | Account to charge | interactive -a my_account |
--partition=<> | windfall | Partition that determines CPU-time charges. Set to windfall when no account is specified, and to standard when an account is provided. | interactive --partition=windfall |
-t | 01:00:00 | Time allocated to the session. The session will expire without warning when the time limit is reached. | interactive -t 08:00:00 |
-N | 1 | Number of nodes. There is no reason to request more than one node unless the number of CPUs requested exceeds the number of CPUs per node on a given cluster. | elgato followed by interactive -n 32 -N 2 requests two full nodes on El Gato, which has 16 CPUs per node |
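Putting several of these options together, the following sketch requests an eight-hour, four-CPU session charged to your group's standard allocation (YOUR_GROUP is a placeholder):

$ interactive -n 4 -t 08:00:00 -a YOUR_GROUP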
Software
Once an interactive session has been activated, your session is running on a compute node. This means that not only do you have more computing power available than on the login node, but you also have access to all of the software installed on HPC. To see the software loaded by default, use the module list command:
[netid@cpu-node-name ~]$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc   6) cmake/3.21.3
The module avail command prints all available modules when given no argument, or filters the list using a search term supplied as an argument. For example:
[netid@cpu-node-name ~]$ module avail py

------------------------------------ /opt/ohpc/pub/modulefiles -------------------------------------
   pymol/2.4.0          python/3.8/3.8.12        pytorch/nvidia/22.04
   pymol/2.5.0    (D)   python/3.9/3.9.10        pytorch/nvidia/22.07
   python/3.6/3.6.5     python/3.11/3.11.4 (D)   pytorch/nvidia/22.12 (D)
   python/3.8/3.8.2     pytorch/nvidia/20.01
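To make one of these packages available in your session, load it with the standard module load command; the version here is taken from the listing above:

[netid@cpu-node-name ~]$ module load python/3.9/3.9.10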
In this mode, only text-based command-line software can be used. To use graphical software, please see our page on GUI Jobs.