COMSOL¶
COMSOL is a general-purpose simulation software based on numerical methods, with fully coupled multiphysics and single-physics modelling capabilities.
We recommend accessing COMSOL through !noMachine, the GUI-based interface to VSC (learn more about !noMachine here).
Once a !noMachine session is open, the following commands start COMSOL:
module purge
module load Comsol/6.1
gpurender comsol &
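If a different COMSOL version is needed, the installed versions can be listed with the standard environment-modules commands (module names and versions may differ between VSC systems):

```shell
# List the installed COMSOL modules (names/versions vary per system)
module avail Comsol

# Show which modules are currently loaded
module list
```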
Note that when working with the COMSOL GUI, at most 16 processes (GUI on VSC-4) or 32 processes (GUI on VSC-5) and 256 GB RAM can be allocated.
Workflow¶
As mentioned above, use !noMachine to work with COMSOL. Typically, you define your entire case either in the interactive session (via !noMachine) or on your local machine and save it as a `*.mph` file (this same file also serves as the output file).
We recommend !noMachine because the interactive access and the cluster itself have the same software packages installed. The `*.mph` file encapsulates all the information required to run a calculation on the cluster; the results computed on the cluster are then saved to the same output file.
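If the case was prepared on a local machine instead, the `.mph` file can be copied to the cluster with e.g. `scp` (the hostname and target path below are placeholders; use your own login node and project directory):

```shell
# Copy the model file from the local machine to the cluster.
# "user@vsc-login" and the target path are placeholders.
scp karman.mph user@vsc-login:/path/to/project/
```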
Job script¶
Once a case has been set up and tested via !noMachine or locally, the `.mph` file can be submitted to the compute nodes in batch mode to exploit COMSOL's parallel computing features on VSC and reduce compute time.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4 ### can go up to 128 on a single VSC-5 node
#SBATCH --job-name="karman"
module purge
module load intel-mpi/2021.5.0
module load Comsol/6.1
INPUTFILE="karman.mph"
OUTPUTFILE="karmanout.mph"
BATCHLOG="LOGFILE.log"
comsol -mpi intel -np $SLURM_NTASKS_PER_NODE -nn $SLURM_JOB_NUM_NODES -tmpdir "/local" batch slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 600
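For orientation, COMSOL launches `-nn` × `-np` MPI processes in total. A minimal bash illustration with the values from the script above (outside a running job the SLURM variables are unset, so they are hard-coded here):

```shell
#!/bin/bash
# Mimic the allocation above: 1 node with 4 tasks per node.
# Inside a real job, SLURM sets these variables automatically.
SLURM_JOB_NUM_NODES=1
SLURM_NTASKS_PER_NODE=4

# Total number of COMSOL MPI processes = nodes * tasks per node
TOTAL=$((SLURM_JOB_NUM_NODES * SLURM_NTASKS_PER_NODE))
echo "COMSOL starts ${TOTAL} MPI processes in total"
```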
Here, the individual flags mean the following:

- `comsol`: the command that starts the COMSOL Multiphysics software.
- `-mpi intel`: specifies the MPI (Message Passing Interface) implementation to use; `intel` selects the Intel MPI library. This flag is necessary when running COMSOL in parallel using MPI.
- `-np $SLURM_NTASKS_PER_NODE`: specifies the number of tasks (processes) to run per node. `$SLURM_NTASKS_PER_NODE` is a SLURM environment variable that is set automatically from the job script; it controls how many COMSOL processes are launched on each compute node.
- `-nn $SLURM_JOB_NUM_NODES`: specifies the number of nodes to use for the simulation. `$SLURM_JOB_NUM_NODES` is set automatically to the number of nodes allocated for the job.
- `-tmpdir`: COMSOL generates a huge amount of temporary files during a calculation. By default these are saved in `$HOME`, which can cause I/O errors. To avoid this, set the temporary directory to e.g. `/local`, so the temporary files are stored on the SSD storage local to the compute node.
- `batch`: runs COMSOL in batch mode, which is non-interactive and suitable for HPC clusters. This flag is necessary for running COMSOL without a graphical user interface (GUI).
- `slurm`: specifies that the job is managed by SLURM, ensuring that COMSOL integrates correctly with the SLURM job scheduler. Necessary on a SLURM-managed cluster.
- `-inputfile`: specifies the input file for the simulation, here `${INPUTFILE}`, which points to the COMSOL model file (e.g. a `.mph` file). Required to tell COMSOL which model to run.
- `-outputfile`: specifies where the simulation results are saved, here `${OUTPUTFILE}` (e.g. a `.mph` or `.txt` file). Optional, but recommended to save the results to a specific file.
- `-batchlog`: specifies the log file for the batch job's log messages, here `${BATCHLOG}`. Optional, but recommended for debugging and monitoring the simulation.
- `-alivetime 600`: sets the "alive time" for the COMSOL job, i.e. the maximum time (in seconds) the job waits for a connection from the client before terminating; here 600 seconds (10 minutes). Optional; typically used to prevent jobs from hanging indefinitely if there is a problem with the client connection.
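Assuming the script above is saved as `comsol_job.slurm` (an arbitrary name), it can be submitted and monitored with the usual SLURM commands:

```shell
# Submit the batch job (script name is a placeholder)
sbatch comsol_job.slurm

# Check the status of your jobs in the queue
squeue -u $USER

# Follow the COMSOL log while the job is running
tail -f LOGFILE.log
```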
Additionally,

- `-study my_study_a`: when available RAM is limited (by QOS/partition), it can be advisable to divide your study into multiple segments (for example, a transient analysis of turbulent fluid flow followed by a transient mapping) and submit each segment as a separate study to the cluster. In COMSOL this is straightforward: find the study tag in the Model Builder (it is shown in the study properties, e.g. Study 2 has the tag std2) and insert it into your job script with this flag.
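A sketch of the corresponding comsol line, assuming the segment to be run carries the study tag `std2`:

```shell
# Hypothetical: run only the study tagged "std2" from the model file
comsol -mpi intel -np $SLURM_NTASKS_PER_NODE -nn $SLURM_JOB_NUM_NODES \
       -tmpdir "/local" batch slurm -inputfile ${INPUTFILE} \
       -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} \
       -alivetime 600 -study std2
```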