WIEN2k
WIEN2k is licensed software; in order to use VSC's WIEN2k modules, you must be covered by a WIEN2k license.
Request access to WIEN2k
Submit a support ticket containing the relevant license information.
Available modules
Note that this WIEN2k module is for VSC-5:
wien2k/24.1-intel-2021.9.0-oneapi_vsc5
Initial setup
Some settings are needed by WIEN2k. They can be configured by running userconfig after loading the WIEN2k module, which writes the settings into .bashrc. Alternatively, the settings can be placed in a file which you source before using WIEN2k (source filename). By default the module sets the following:
export SCRATCH=./
export EDITOR="vi"
export OMP_NUM_THREADS=1
You will most likely need to set OMP_NUM_THREADS depending on your problem.
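A minimal sketch of such a settings file (the filename wien2k_settings.sh is just a placeholder), to be activated with source wien2k_settings.sh before running WIEN2k:
# wien2k_settings.sh -- sketch of a WIEN2k settings file
export SCRATCH=./          # keep scratch files in the working directory
export EDITOR="vi"
export OMP_NUM_THREADS=4   # adjust to your problem size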
Example batch script
Here is an example batch script for running WIEN2k on a single node (128 cores). The example shows how to create an appropriate .machines file, which WIEN2k uses for parallel calculations. The module sets OMP_NUM_THREADS=1 by default, but this can be changed by the user.
#!/bin/bash
#SBATCH -J wien2k
#SBATCH -A project_number
##SBATCH --exclusive
#SBATCH -t 12:00:00
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512
#SBATCH -N 1
module load wien2k/24.1-intel-2021.9.0-oneapi_vsc5
export cores_per_node=128
################## user setting for parallelization
export OMP_NUM_THREADS=4
export mpi_jobs=1 # set it clever according to your k-points
#
# together with the number of nodes this will create a .machines file with
# cores_per_node*number_of_nodes cores.
# it will use $mpi_jobs for the number of mpi-processes and create
# cores_per_node * number_of_nodes / mpi_jobs / OMP_NUM_THREADS
# lines for k-parallel runs
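# worked example with the settings above: 128 cores * 1 node
# / 1 mpi job / 4 threads = 32 k-parallel lines in .machines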
nodes=($(srun hostname -s | sort -u))
export number_of_nodes=${#nodes[@]}
export total_cores=$(($cores_per_node*$number_of_nodes))
export k_parallel_per_node=$(($cores_per_node/$mpi_jobs/$OMP_NUM_THREADS))
echo nodes ${nodes[@]}
echo number of nodes: $number_of_nodes
echo total_cores: $total_cores
echo k_parallel_per_node: $k_parallel_per_node
init_lapw -b -red 0 -numk 1 -fermit 0.002
# set .machines for parallel job
# lapw0 running on one node
echo "lapw0: ${nodes[0]}:$cores_per_node " > .machines
# run one mpi job on each node (splitting k-mesh over nodes)
line=1
while [ $line -le $k_parallel_per_node ]
do
for i in ${nodes[@]}
do
echo "1:$i:$mpi_jobs " >> .machines
done
line=$(($line+1))
done
echo granularity:1 >> .machines
echo extrafine:1 >> .machines
echo 'omp_lapw0:32' >> .machines # or set to 16 or 64 OpenMP threads for lapw0
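# with the settings above, the generated .machines file looks like
# ("n001" is a placeholder hostname):
#   lapw0: n001:128
#   1:n001:1          <- 32 such lines, one per k-parallel job
#   granularity:1
#   extrafine:1
#   omp_lapw0:32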
##### add your commands here
run_lapw -p -NI -i 1
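Save the script and submit it with sbatch (the filename wien2k_job.sh is arbitrary):
sbatch wien2k_job.sh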
WIEN2k documentation
Check the user guide for more detailed usage information.
Notes
- For small unit cells one can use the serial version (#SBATCH -n 1 ... run_lapw); see the sketch after these notes.
- It is strongly advised to use parallelisation via OMP_NUM_THREADS first and to use mpi jobs only when the simulations get very large (e.g. supercells of more than 60 atoms). The reason is that OMP_NUM_THREADS (shared-memory parallelisation) works about 2 times faster than mpi jobs (no shared-memory parallelisation).
- One can check the performance, timing and errors of a simulation in the case.dayfile. It is a good indicator of whether the parallelization settings need to be changed for better performance.
- One can switch from the ELPA library to ScaLAPACK by changing ELPA to SCALA in the second line of the case.in1 (or case.in1c) file:
6.50 10 6 ELPA pxq BL 64 (R-MT*K-MAX,MAX L IN WF,V-NMT,LIB)
becomes
6.50 10 6 SCALA pxq BL 64 (R-MT*K-MAX,MAX L IN WF,V-NMT,LIB)
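A minimal sketch of a serial job script for the small-unit-cell case mentioned above (job name, project number and runtime are placeholders):
#!/bin/bash
#SBATCH -J wien2k_serial
#SBATCH -A project_number
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512
#SBATCH -n 1
#SBATCH -t 12:00:00
module load wien2k/24.1-intel-2021.9.0-oneapi_vsc5
run_lapw    # no -p switch: serial run, no .machines file needed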