Hands-on course

The hands-on course aims to introduce numerical modelling with FALL3D through the following sessions:

  • Session 1: Meteorological data
  • Session 2: Deterministic simulation
  • Session 3: Ensemble simulations

MareNostrum4

The exercises will be carried out on MareNostrum, the most emblematic and most powerful supercomputer in Spain, hosted by the Barcelona Supercomputing Center. Specifically, we'll use MareNostrum4, a supercomputer based on Intel Xeon Platinum processors from the Skylake generation. It is a Lenovo system composed of SD530 Compute Racks with an Intel Omni-Path high-performance network interconnect, running SUSE Linux Enterprise Server as the operating system. Its current Linpack Rmax performance is 6.2272 petaflops.

This general-purpose block consists of 48 racks housing 3,456 nodes, with a grand total of 165,888 processor cores and 390 terabytes of main memory. Each compute node is equipped with two Intel Xeon Platinum 8160 CPUs (24 cores each at 2.10 GHz), giving 48 cores per node. For further information, please refer to the User Guide.

Log in to the cluster

You can connect to MareNostrum using three public login nodes:

  • mn1.bsc.es
  • mn2.bsc.es
  • mn3.bsc.es

All connections must be done through SSH (Secure SHell), for example:

ssh {username}@mn1.bsc.es

Notes:

  • On Windows machines you can use PuTTY, the best-known Windows SSH client. See the PuTTY website for more details.
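
Optionally, you can add a host alias to the SSH configuration on your local machine so that the full hostname and username do not have to be typed every time. The lines below are a convenience sketch, assuming {username} is replaced by your actual cluster username:

# ~/.ssh/config (on your local machine)
Host mn1
    HostName mn1.bsc.es
    User {username}

After that, the connection reduces to:

ssh mn1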

Directories and file systems

There are different partitions of disk space with specific size limits and usage policies. The GPFS (General Parallel File System) is a distributed networked file system and can be accessed from all the nodes. The available GPFS directories and file systems are:

  • /gpfs/home: after login, this is the default work area where users can save source codes, scripts, and other personal data. It is not recommended for running jobs; please run your jobs in your group's /gpfs/projects or /gpfs/scratch instead.

  • /gpfs/projects: it's intended for data sharing between users of the same group or project. All members of the group share the space quota.

  • /gpfs/scratch: each user has their own directory under this partition, which can be used, for example, to store temporary job files during execution. All members of the group share the space quota.

For example, if your group is nct01, you can create the following aliases to access your personal directories:

alias projects='cd /gpfs/projects/nct01/$USER'
alias scratch='cd /gpfs/scratch/nct01/$USER'
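
These aliases only last for the current session. A minimal sketch to make them permanent, assuming you use bash, is to append them to your ~/.bashrc:

echo "alias projects='cd /gpfs/projects/nct01/\$USER'" >> ~/.bashrc
echo "alias scratch='cd /gpfs/scratch/nct01/\$USER'" >> ~/.bashrc
source ~/.bashrc    # reload the shell configuration

Afterwards, typing projects or scratch jumps directly to the corresponding directory.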

Running jobs

Jobs are submitted to the queue system through Slurm commands and directives (a minimal job script sketch is given at the end of this section). For example:

To submit a job:

sbatch {job_script}

To show all the submitted jobs:

squeue

To cancel a job:

scancel {job_id}

There are several queues on the machine, and different users may have access to different queues. Each queue has its own limits on the number of cores per job and on the maximum job duration. You can check at any time which queues you have access to, and their limits, using:

bsc_queues
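
As a reference, a minimal Slurm job script could look like the sketch below. The job name, the requested resources, and the executable name (./my_program is a placeholder) are assumptions that you should adapt to your own runs:

#!/bin/bash
#SBATCH --job-name=test_job
#SBATCH --output=test_job_%j.out
#SBATCH --error=test_job_%j.err
#SBATCH --ntasks=4
#SBATCH --time=00:10:00

# Load the same modules used to build the executable
module load intel/2017.4 impi/2017.4

# Launch the (placeholder) MPI executable on the requested tasks
srun ./my_program

The script would then be submitted with sbatch and monitored with squeue, as shown above.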

Software Environment

Modules environment

The Environment Modules package provides a dynamic modification of a user's environment via modulefiles. Each modulefile contains the information needed to configure the shell for an application or a compilation. Modules can be loaded and unloaded dynamically, in a clean fashion.

Use

module list

to show the loaded modules and

module avail

to show the available modules.

Modules can be invoked in two ways: by name alone or by name and version. Invoking them by name implies loading the default module version. This is usually the most recent version that has been tested to be stable (recommended) or the only version available. For example:

module load intel

Invoking a module by name and version loads the specified version of the application. As of this writing, the previous command and the following one load the same module:

module load intel/2017.4
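
Other standard Environment Modules commands can also be useful; for example (the module names here are only illustrative):

module show intel/2017.4    # display what the module sets (paths, variables)
module unload intel         # unload a previously loaded module
module purge                # unload all currently loaded modules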

Compilers

The latest Intel compilers provide the best possible optimizations for the Xeon Platinum architecture. By default, when starting a new session on the system, the basic modules for the Intel suite are automatically loaded: the compilers (intel/2017.4), the Intel MPI software stack (impi/2017.4), and the MKL math kernel libraries (mkl/2017.4), in their latest versions. Alternatively, you can load these modules explicitly using:

module load intel/2017.4
module load impi/2017.4

The corresponding optimization flags for Fortran are:

FCFLAGS="-xCORE-AVX512 -mtune=skylake"

As the login nodes have exactly the same architecture as the compute nodes, you can also use the -xHost flag, which enables all optimizations available on the compile host. In addition, the Intel compilers optimise more aggressively when the -O2 flag is specified:

FCFLAGS="-xCORE-AVX512 -mtune=skylake -xHost -O2"
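
As an illustration only, a hypothetical MPI Fortran source file (hello.f90 is a placeholder name) could be compiled with the Intel MPI Fortran wrapper and the flags above:

# Load the Intel suite (if not already loaded by default)
module load intel/2017.4 impi/2017.4

# Compile the (placeholder) MPI Fortran program with the recommended flags
FCFLAGS="-xCORE-AVX512 -mtune=skylake -xHost -O2"
mpiifort $FCFLAGS -o hello hello.f90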

Training course material

In order to copy the course material, go to your own project folder

cd /gpfs/projects/nct01/$USER

and copy this folder:

cp -r /gpfs/projects/nct00/nct00014/FALL3D_material .

Next, you can load the required modules and environment variables with the commands:

cd FALL3D_material
source set_env.sh
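
Note that the script is sourced rather than executed, so that the modules and variables it defines remain set in your current shell. The actual contents are provided with the course material; purely as an illustration, a script of this kind typically looks like the following sketch (the module names and the exported variable are assumptions, not the real file contents):

# Illustrative sketch only -- the real set_env.sh may differ
module load intel/2017.4 impi/2017.4 mkl/2017.4
export FALL3D_MATERIAL=$PWD    # hypothetical variable pointing to the course folder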