Example job scripts

Warning

These scripts and some of the behavior described here may differ from VSC5 and VSC4.

General remarks

Certain default settings are automatically applied to job scripts; they are intended to simplify job-script configuration. If more control over the requested resources is desired, a special flag is available that bypasses the Lua job-script parsing (see Advanced usage below).

Standard use cases

  1. One or more exclusive CPU nodes for 1 hour (set -N to the desired number of nodes)
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH -N 1
    #SBATCH -p zen4_0768
    #SBATCH --qos zen4_0768
    #SBATCH --time=01:00:00
    
  2. One task on one core with 2 GB of memory on a CPU node
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH -n 1
    #SBATCH --mem=2G
    #SBATCH -p zen4_0768
    #SBATCH --qos zen4_0768
    #SBATCH --time=01:00:00
    
  3. One exclusive GPU node (with 4 GPUs) for 1 hour
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH -N 1
    #SBATCH --gres=gpu:4
    #SBATCH -p zen4_0768_h100x4
    #SBATCH --qos zen4_0768_h100x4
    #SBATCH --time=01:00:00
    
  4. Two exclusive GPU nodes (with 4 GPUs/node) for 1 hour
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH -N 2
    #SBATCH --gres=gpu:4
    #SBATCH -p zen4_0768_h100x4
    #SBATCH --qos zen4_0768_h100x4
    #SBATCH --time=01:00:00
    
  5. Three-quarter GPU node (with 3 GPUs) for 1 hour
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH --gres=gpu:3
    #SBATCH -p zen4_0768_h100x4
    #SBATCH --qos zen4_0768_h100x4
    #SBATCH --time=01:00:00
    
  6. Half GPU node (with 2 GPUs) for 1 hour
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH --gres=gpu:2
    #SBATCH -p zen4_0768_h100x4
    #SBATCH --qos zen4_0768_h100x4
    #SBATCH --time=01:00:00
    
  7. Quarter GPU node (with 1 GPU) for 1 hour
    #!/bin/bash
    #SBATCH --job-name "myjob"
    #SBATCH --gres=gpu:1
    #SBATCH -p zen4_0768_h100x4
    #SBATCH --qos zen4_0768_h100x4
    #SBATCH --time=01:00:00
    
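Any of the scripts above can be saved to a file and submitted with sbatch. A minimal submission sequence (assuming the script was saved as myjob.sh; the file name is illustrative) looks like:

sbatch myjob.sh        # submit the job; prints the job ID
squeue -u $USER        # check the state of your queued and running jobs
scancel <JOBID>        # cancel a job if needed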

Note

For the time being, in order to restrict a job to only the physical cores of the system, we advise specifying the following in addition to the directives above:

#SBATCH --threads-per-core=1
If not specified, the job will have access to all logical cores (physical plus virtual) of the system, as seen by hwloc.
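
A quick way to verify the effect is to compare the CPU count the job actually sees (a minimal sketch; the job name and the nproc check are illustrative, partition and QOS are taken from the examples above):

#!/bin/bash
#SBATCH --job-name "corecheck"
#SBATCH -N 1
#SBATCH -p zen4_0768
#SBATCH --qos zen4_0768
#SBATCH --threads-per-core=1
#SBATCH --time=00:05:00

# With --threads-per-core=1, nproc should report only the physical cores;
# dropping the flag makes it count all logical cores instead.
srun nproc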

Info

When using srun on GPU nodes, processes are automatically pinned to the NUMA domain(s) attached to the GPU(s) dedicated to the job.
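
The binding can be inspected explicitly: srun's --cpu-bind=verbose option reports the CPU mask each task is bound to (a minimal sketch; the task count is illustrative):

# Print the CPU binding of every task together with the host it runs on.
srun --ntasks=4 --cpu-bind=verbose hostname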

Advanced usage (Vanilla SLURM)

If unaltered SLURM behavior with more individual tuning is desired, the following flag can be used to bypass all modifications introduced by the Lua parser:

#ASC --vanilla
Basic checks for the correctness of the job script will still be performed.
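
For example, a fully manual job script that bypasses the Lua defaults could look like this (a sketch; all resource values are illustrative, not recommendations):

#!/bin/bash
#ASC --vanilla
#SBATCH --job-name "myjob"
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH -p zen4_0768
#SBATCH --qos zen4_0768
#SBATCH --time=01:00:00

srun ./my_program    # my_program is a placeholder for your executable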

Particular examples

Example of running LAMMPS on CPUs:

#!/bin/bash

#SBATCH --job-name lammps_eessi_test
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH -p zen4_0768
#SBATCH --qos zen4_0768
#SBATCH --time=00:05:00

module load LAMMPS/2Aug2023_update2-foss-2023a-kokkos

srun -n 4 lmp -i in.meam
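
Here srun -n 4 matches the requested --ntasks-per-node=4 on a single node; if the task geometry in the #SBATCH lines changes, the -n argument must be adjusted to match (or omitted, in which case srun launches one process per allocated task).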

Example of running LAMMPS on GPUs:

#!/bin/bash

#SBATCH --job-name lammps_eessi_test
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH -p zen4_0768_h100x4
#SBATCH --qos zen4_0768_h100x4
#SBATCH --gres=gpu:4
#SBATCH --time=00:05:00

module load LAMMPS/2Aug2023_update2-foss-2023a-kokkos-CUDA-12.1.1

srun lmp -k on g 4 -sf kk -in in.meam -pk kokkos newton on neigh half
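
In this launch line, -k on g 4 enables the Kokkos package with 4 GPUs per node, -sf kk appends the kk suffix so that Kokkos-enabled styles are used, and -pk kokkos newton on neigh half selects Newton's third law and half neighbor lists for the Kokkos package; see the LAMMPS Kokkos documentation for the full set of options.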

Example job script for running a Singularity container:

#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=22
#SBATCH -p zen4_0768_h100x4
#SBATCH --qos zen4_0768_h100x4
#SBATCH --job-name <NAME>
#SBATCH --gres=gpu:4
#SBATCH --time=24:00:00

srun --ntasks-per-node=4 --cpu-bind=none --mpi=pmix singularity exec --nv --writable-tmpfs -B <MOUNT> <SIF IMAGE> <EXECUTABLE>
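
In the srun line, --cpu-bind=none disables SLURM's CPU pinning so the application inside the container can manage placement itself, --mpi=pmix selects PMIx to bootstrap MPI, --nv exposes the NVIDIA GPUs and driver to the container, --writable-tmpfs adds an ephemeral writable overlay, and -B bind-mounts a host directory. The <SIF IMAGE> must exist before submission; it can, for instance, be pulled on a login node (a sketch with placeholder registry and tag):

# Pull a container image into a local SIF file (names are placeholders).
singularity pull <SIF IMAGE> docker://<REGISTRY>/<IMAGE>:<TAG>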