
Quick start

Need to start using the cluster ASAP? This page gives a brief introduction to using the VSC systems. You can find in-depth information about these topics on the corresponding documentation pages.

How to login

After getting access, you can connect to the cluster via ssh:

# To connect to VSC-5
ssh <username>@vsc5.vsc.ac.at

# Or for VSC-4
ssh <username>@vsc4.vsc.ac.at
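
If you connect often, you can optionally add an entry to the ~/.ssh/config file on your local machine so that a short alias is enough. The alias name vsc5 below is just an example and can be anything you like:

# Example entry in ~/.ssh/config on your local machine
Host vsc5
    HostName vsc5.vsc.ac.at
    User <username>

# Afterwards this is enough to connect
ssh vsc5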

Storage

The same directories exist on VSC-5 and VSC-4, as both clusters use the same IBM Spectrum Scale (GPFS) storage. The home directory $HOME has a quota of 100 GB and $DATA has a quota of 10 TB.
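
Both locations are available as environment variables in your shell, so you can refer to them without typing full paths:

# Show where the directories point
echo $HOME
echo $DATA

# Change into your data directory
cd $DATA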

Check quota

You can check your current quota usage on each of the two storage systems with:

mmlsquota --block-size auto -j data_fs7XXXX data [or home_fs7XXXX home]
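
For example, if your project number were 71234 (a made-up value, replace it with your own), the two checks would look like this:

# Quota of the $DATA fileset (project number is a placeholder example)
mmlsquota --block-size auto -j data_fs71234 data

# Quota of the $HOME fileset
mmlsquota --block-size auto -j home_fs71234 home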

User data backup

User data on VSC is only backed up for disaster recovery (such as a fire or complete hardware failure). Individual restores are not possible. Due to the size of our volumes, some data may not make it into the backup.

You are therefore advised to regularly move data off the cluster and to keep additional copies of your most important input and output files.

Transfer files

scp

Files and directories can be transferred to and from the VSC with the scp (secure copy) command.

# To transfer a file from a local computer to VSC
scp localfile <username>@vsc5.vsc.ac.at:<directory>/.

# To transfer a file from VSC to a local computer
scp <username>@vsc5.vsc.ac.at:<directory>/<file> .
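
Whole directories can be copied recursively with the -r flag; the directory name results is only an example:

# Copy a whole directory from VSC to the local computer
scp -r <username>@vsc5.vsc.ac.at:<directory>/results .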

rsync

rsync is a file copying tool that can be used both locally and over the network. Its advantage is that it handles copying of whole directory trees well, and that rsync transfers can easily be restarted without having to re-transfer data that has already been copied.

rsync -av local-tree <username>@vsc5.vsc.ac.at:
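
The same tool also works in the other direction, and re-running an interrupted transfer only copies files that are missing or have changed. The directory name local-tree is reused from the example above:

# Copy the directory tree from VSC back into the current local directory;
# re-running the command resumes instead of starting over
rsync -av <username>@vsc5.vsc.ac.at:local-tree .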

Module system

VSC uses a module system that allows installed software packages to be dynamically added to and removed from the running environment. To access, list and search among the installed application programs, libraries, and tools, use the module command:

module --help         General help with module commands

module avail          List the available modules and recommendations

module load ...        Load the selected modules into your session (can also write: module add)

module list           List your currently loaded modules (will be flushed at logout)

module rm ...         Remove selected modules from your session

module purge          Remove all loaded modules
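
A typical session might look like the following; the module name and version are only an illustration, use module avail to see what is actually installed:

# See which modules are available
module avail

# Load a module (name and version are an example)
module load gcc/12.2.0

# Check what is currently loaded
module list

# Remove all loaded modules again
module purge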

Submitting a job script

VSC uses the Slurm Workload Manager to schedule jobs. Please note that if you request a full node, your project allocation will be charged for all of its cores, even if your program uses fewer of them.

A sample job script looks like this; in this example we will name it job.sh:

#!/bin/bash
# Set the allocation to be charged for this job
# not required if you have set a default allocation
#SBATCH -A <project name>
# The name of the job
#SBATCH -J jobname
# The requested wall time, in this example 10 minutes
#SBATCH -t 00:10:00
# The partition
#SBATCH --partition=zen3_0512
# The quality of service (QOS)
#SBATCH --qos=zen3_0512
# The number of nodes
#SBATCH -N 1

# Execute your program
./my_program

# Or, in case you are using one of the installed software packages:
module load vasp6/6.2.0-gcc-12.2.0-gh7sj5j
mpirun -n 64 vasp_std
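
If your program needs a specific number of MPI tasks rather than just whole nodes, you can also request tasks explicitly. The directives below are standard Slurm options; the task count and the reuse of the partition and QOS from the example above are assumptions for illustration:

#!/bin/bash
#SBATCH -A <project name>
#SBATCH -J mpi_jobname
#SBATCH -t 00:10:00
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512
# One node with 64 MPI tasks (numbers are an example)
#SBATCH -N 1
#SBATCH --ntasks-per-node=64

module load vasp6/6.2.0-gcc-12.2.0-gh7sj5j
mpirun -n 64 vasp_std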

To submit the job script to the queue:

sbatch job.sh

The job status can be monitored with the squeue command:

squeue -u $USER

To cancel a job, use scancel with the job ID (which can be obtained with squeue):

scancel <jobid>
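
For quick tests it can be convenient to work interactively instead of submitting a script. salloc and srun are standard Slurm commands; the partition and QOS values are simply reused from the example above:

# Request one node interactively for 10 minutes
salloc -N 1 -t 00:10:00 --partition=zen3_0512 --qos=zen3_0512

# Once the allocation is granted, run your program in it
srun ./my_program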