Job Examples
Single Serial Job Submission
PBS Script
#!/bin/bash
#PBS -N Sample_PBS_Job
#PBS -l select=1:ncpus=1:mem=1gb
#PBS -l walltime=00:01:00
#PBS -q standard
#PBS -W group_list=<group_name>

cd $PBS_O_WORKDIR
pwd; hostname; date

module load python
python --version
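A script like this would typically be submitted with qsub and monitored with qstat; a minimal sketch, assuming the script is saved as sample_job.pbs (a hypothetical filename):

# Submit the script to PBS (sample_job.pbs is a hypothetical filename)
qsub sample_job.pbs

# Check the status of your queued and running jobs
qstat -u $USER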
SLURM Script
#!/bin/bash
#SBATCH --job-name=Sample_Slurm_Job
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --mem=1gb
#SBATCH --time=00:01:00
#SBATCH --partition=standard
#SBATCH --account=<group_name>

# SLURM inherits your environment, so cd $SLURM_SUBMIT_DIR is not needed.
pwd; hostname; date

module load python/3.6
python3 --version
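The SLURM equivalent is submitted with sbatch; a minimal sketch, assuming the script is saved as sample_job.slurm (a hypothetical filename):

# Submit the script to SLURM (sample_job.slurm is a hypothetical filename)
sbatch sample_job.slurm

# Check the status of your queued and running jobs
squeue -u $USER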
Array Job
IMPORTANT:
When submitting array jobs with a named output file (e.g. with the line #SBATCH --output=Job.out), SLURM will write every array task's output to that single filename, leaving you with only the output of the last task to finish. Use one of the following SLURM directives in your script to prevent this behavior:
Differentiates output files using the array index, similar to the PBS default. See SLURM Output Filename Patterns above for more information.
#SBATCH --output=Job-%a.out
Appends the output from all tasks in the array to the same output file, as sketched below. Warning: if a file with that name exists before your job runs, the job output will be appended to it.
#SBATCH --open-mode=append
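For a five-task array, the two directives above behave roughly as follows; a sketch of the expected output files, assuming the filename pattern Job-%a.out:

# With #SBATCH --output=Job-%a.out, each array task writes its own file:
#   Job-1.out  Job-2.out  Job-3.out  Job-4.out  Job-5.out
ls Job-*.out

# With #SBATCH --open-mode=append, all tasks append to a single file,
# so remove any stale copy before submitting:
rm -f Job.out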
PBS Script
#!/bin/bash
#PBS -N Sample_PBS_Job
#PBS -l select=1:ncpus=1:mem=1gb
#PBS -l walltime=00:01:00
#PBS -q standard
#PBS -W group_list=<group_name>
#PBS -J 1-5

cd $PBS_O_WORKDIR
pwd; hostname; date

echo "./sample_command input_file_${PBS_ARRAY_INDEX}.in"
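With the #PBS -J directive in the script, a single qsub creates the whole array; individual subjobs can then be listed with qstat's -t flag. A minimal sketch (array_job.pbs is a hypothetical filename):

qsub array_job.pbs

# List individual array subjobs rather than the collapsed parent entry
qstat -t -u $USER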
SLURM Script
#!/bin/bash
#SBATCH --output=Sample_SLURM_Job-%a.out
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --time=00:01:00
#SBATCH --partition=standard
#SBATCH --account=<group_name>
#SBATCH --array=1-5

# SLURM inherits your environment, so cd $SLURM_SUBMIT_DIR is not needed.
pwd; hostname; date

echo "./sample_command input_file_${SLURM_ARRAY_TASK_ID}.in"
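Because sbatch command-line options take precedence over #SBATCH directives, the array range in the script can also be overridden at submission time; a sketch (array_job.slurm is a hypothetical filename):

# Submit with the range baked into the script (--array=1-5)
sbatch array_job.slurm

# Or override the range on the command line, e.g. to rerun task 3 only
sbatch --array=3 array_job.slurm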
MPI Job
PBS Script
#!/bin/bash
#PBS -N Sample_MPI_Job
#PBS -l select=1:ncpus=16:mem=16gb
#PBS -l walltime=00:10:00
#PBS -W group_list=<group_name>
#PBS -q standard

cd $PBS_O_WORKDIR
pwd; hostname; date

module load openmpi
/usr/bin/time -o mpit_prog.timing mpirun -np 16 a.out
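The a.out referenced here is assumed to be an MPI executable built against the same openmpi module; one way to produce it, as a sketch (hello_mpi.c is a hypothetical source file):

module load openmpi
mpicc -o a.out hello_mpi.c   # hello_mpi.c is a hypothetical MPI source file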
SLURM Script
#!/bin/bash
#SBATCH --job-name=Sample_MPI_Job
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=16
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1gb
#SBATCH --time=00:10:00
#SBATCH --account=<group_name>
#SBATCH --partition=standard
#SBATCH --output=Sample_MPI_Job_%A.out
#SBATCH --error=Sample_MPI_Job_%A.err

# SLURM inherits your environment, so cd $SLURM_SUBMIT_DIR is not needed.
pwd; hostname; date

module load openmpi3
/usr/bin/time -o mpit_prog.timing mpirun -np 16 a.out
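Under SLURM, srun can often replace mpirun as the launcher, since the task count is already declared via --ntasks; whether this works depends on how your site's MPI was built, so treat this as an assumption to verify locally:

# Possible alternative to mpirun inside the batch script; srun picks up
# the task count from #SBATCH --ntasks=16 (verify MPI/PMI support locally)
srun ./a.out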