User Guide
Introduction
FALL3D is an Eulerian model for atmospheric passive transport and deposition based on the so-called advection–diffusion–sedimentation (ADS) equation. The code version 8.x has been redesigned and rewritten from scratch in order to overcome legacy issues and allow for successive optimisations in preparation for extreme-scale computing. The new versions include significant improvements in model physics, numerical algorithms, and computational efficiency. In addition, the capabilities of the model have been extended by incorporating new features such as the possibility of running ensemble forecasts and dealing with multiple atmospheric species (i.e. volcanic ash and gases, mineral dust, and radionuclides). Ensemble run capabilities are supported since version 8.1, making it possible to quantify model uncertainties and improve forecast quality.
The FALL3D code is one of the flagship codes included in the European Centre of Excellence for Exascale in Solid Earth (ChEESE).
- FALL3D is an open-source code available through this GitLab repository
- FALL3D is also available on Zenodo
References
- Folch, A., Mingari, L., Gutierrez, N., Hanzich, M., Macedonio, G., and Costa, A.: FALL3D-8.0: a computational model for atmospheric transport and deposition of particles, aerosols and radionuclides – Part 1: Model physics and numerics, Geosci. Model Dev., 13, 1431–1458, https://doi.org/10.5194/gmd-13-1431-2020, 2020.
- Prata, A. T., Mingari, L., Folch, A., Macedonio, G., and Costa, A.: FALL3D-8.0: a computational model for atmospheric transport and deposition of particles, aerosols and radionuclides – Part 2: Model validation, Geosci. Model Dev., 14, 409–436, https://doi.org/10.5194/gmd-14-409-2021, 2021.
Contributing
The FALL3D documentation is free and open source. The source code is hosted on GitLab, and issues and feature requests can be posted on the GitLab issue tracker.
We encourage the community to fix bugs and add features: if you'd like to contribute, please read the CONTRIBUTING.md guide and consider opening a merge request.
License
FALL3D and this User Guide are released under the GNU General Public License (GPL).
FALL3D in a nutshell
FALL3D is an open-source off-line Eulerian model for atmospheric passive transport and deposition. The model, originally developed for inert volcanic particles (tephra), has a track record of more than 50 studies on model applications and code validation, as well as an ever-growing community of users worldwide, including academia, private companies, research centres, and several institutions tasked with the operational forecasting of volcanic ash clouds.
The model solves the so-called Advection-Diffusion-Sedimentation (ADS) equation: \[ \frac{\partial c}{\partial t} + \nabla \cdot \vec{F} + \nabla \cdot \vec{G} + \nabla \cdot \vec{H} = S - I \] where \(\vec{F}= c~\vec{u}\) is the advective flux, \(\vec{G}= c~\vec{u_s}\) is the sedimentation flux, \(\vec{H}=-\mathbb{K} \nabla c\) is the diffusive flux, and \(S\) and \(I\) are the source and sink terms respectively. In the expressions above, \(t\) denotes time, \(c\) is concentration, \(\vec{u}\) is the fluid velocity vector (wind velocity), \(\vec{u_s}\) is the terminal settling velocity of the substance, and \(\mathbb{K}\) is the diffusion tensor.
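To make the terms of the ADS equation concrete, here is a minimal one-dimensional sketch using first-order upwind advection and centred diffusion on a periodic grid. This is illustrative only and is not the FALL3D numerics (see the references above for the actual schemes).

```python
# Minimal 1D sketch of the ADS equation
#   dc/dt + d(u c)/dx + d(us c)/dx = d/dx (K dc/dx)
# Illustrative only -- NOT the FALL3D solver.

def ads_step(c, u, us, K, dx, dt):
    """Advance concentration c one time step on a periodic 1D grid using
    first-order upwind advection (u + us > 0 assumed) and centred diffusion."""
    n = len(c)
    new = [0.0] * n
    v = u + us  # total transport velocity (wind + settling)
    for i in range(n):
        adv = -v * (c[i] - c[i - 1]) / dx                          # upwind advection
        dif = K * (c[(i + 1) % n] - 2 * c[i] + c[i - 1]) / dx**2   # centred diffusion
        new[i] = c[i] + dt * (adv + dif)
    return new

# Usage: a concentration pulse advected and diffused on a periodic domain.
c = [0.0] * 50
c[10] = 1.0
for _ in range(100):
    c = ads_step(c, u=1.0, us=0.2, K=0.01, dx=1.0, dt=0.5)
```

With the chosen parameters the scheme respects the CFL condition (\(v\,\Delta t/\Delta x = 0.6 < 1\)), so total mass is conserved and the solution stays non-negative on the periodic domain.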
In FALL3D v8.x, the ADS equation has been extended to handle passive transport of other substances different from tephra. In a general sense, substances in FALL3D are grouped in 3 broad categories:
- Category particles: includes any inert substance characterized by a sedimentation velocity.
- Category aerosol: refers to potentially non-inert substances (i.e. with chemical or phase-change mechanisms) having a negligible sedimentation velocity.
- Category radionuclides: refers to isotope particles subject to radioactive decay.
Each category admits, in turn, different sub-categories or species, defined internally as structures of data that inherit the parent category properties. For example, particles can be subdivided into tephra or mineral dust; aerosol species can include H2O, SO2, etc.; and radionuclides can include several isotope species. Finally, each sub-category of species is tagged with an attribute name that is used for descriptive purposes only.
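As a purely illustrative analogue of this hierarchy (FALL3D itself is written in Fortran, and its internal derived types differ), the category/species relationship could be modelled as below. The category and species codes follow the table given in section Task SetSrc.

```python
from dataclasses import dataclass

# Illustrative Python analogue of the FALL3D category/species hierarchy.
# Codes follow the species table in section "Task SetSrc".

@dataclass
class Species:
    category: str      # "particle", "aerosol" or "radionuclide"
    cat_code: int      # 1, 2 or 3
    spe_code: int      # species code
    name: str          # e.g. "tephra", "SO2", "CS137"
    tag: str = ""      # attribute name, descriptive purposes only

    @property
    def has_settling(self):
        # aerosols have a negligible sedimentation velocity
        return self.category != "aerosol"

tephra = Species("particle", 1, 1, "tephra", tag="fine_ash")
so2 = Species("aerosol", 2, 4, "SO2")
cs137 = Species("radionuclide", 3, 6, "CS137")
```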
How to get FALL3D
FALL3D is maintained in a public git repository, which contains the stable releases and the current working code. Stable version releases of the FALL3D source code are available as a tarball or zip file and can be downloaded from the Releases section.
However, we strongly recommend cloning the git repository, especially if you want to keep the source code up to date or switch between versions in case of problems. In order to obtain the software from the GitLab repository, you must first download and install the git software.
Then clone the repository using the following command line:
> git clone git@gitlab.com:fall3d-suite/fall3d.git
After cloning, a directory fall3d should have been created.
Code changes are made fairly frequently in the GitLab repository. It is recommended to update the code periodically so that you have the latest version of FALL3D available on your computer. In order to update your local version of the source code, enter the FALL3D directory and pull all changes from the remote repository:
> cd fall3d
> git pull
Notes:
- Every time the source code is updated with git pull, the compilation process detailed below must be repeated.
Verify that installation program prerequisites are present on your system before proceeding. See section Installation requirements for further details.
Basic installation
There are two different options to configure and install FALL3D.
The basic installation uses the configure script; this option is intended for users that need a single configuration of the code. In contrast, the second option can be useful when multiple installations are required. The basic installation is described in this section; see section Multiple installations for a description of the second installation procedure.
For the basic installation, simply type the following commands from the installation folder:
> ./configure
> make
> make install
in order to configure, compile, and place the executable file in the bin folder. The configure script admits the following options:
Option | Comments |
---|---|
--with-r4 | compiles using single precision |
--enable-parallel | enables parallel option |
--disable-parallel | disables parallel option |
--prefix= | installation folder |
FCFLAGS= | FORTRAN compiler flags |
NETCDF= | path of the netCDF library base folder |
NETCDF_INC= | path of the netCDF include folder |
NETCDF_LIB= | path of the netCDF lib folder |
If no arguments are provided, configure is run by default as:
> ./configure --enable-parallel --prefix=. FCFLAGS='-g -O2'
and the configure script automatically searches for the correct netCDF paths.
Multiple installations
Alternatively, FALL3D can be installed using the build script in order to allow multiple installations with different configurations (e.g. serial and parallel, single and double precision, different compilation flag options). In addition, with this approach it is possible to have installations for different High Performance Computing (HPC) architectures sharing a common file system.
For each installation based on the build script, use the command:
> build [-h] [par|ser] [r4|r8] [build-dir]
where the arguments in brackets are optional and take the following default values:
Argument | Default | Comments |
---|---|---|
-h | none | prints the build options |
par or ser | par | builds the parallel/serial version |
r4 or r8 | r8 | builds with single/double precision |
build-dir | binary | build directory name |
For example,
> ./build r4 machine1-r4-par
> ./build ser machine2-r8-ser
will build (i.e. configure, compile and install) two FALL3D instances: a parallel, single-precision installation in the machine1-r4-par folder, and a serial, double-precision installation in the machine2-r8-ser folder. Different installations are completely independent and can be reconfigured and recompiled at any time within the respective folder following the Basic installation procedure.
Uninstalling
To uninstall FALL3D use the command
> make uninstall
in the installation folder to get rid of all the installed files.
Running FALL3D
Model execution requires a unique namelist input file *.inp to define the computational domain, time range, physics options and other model parameters. Start with the default input file Example.inp in the Example directory and edit it for your case.
FALL3D can be executed in serial mode by typing:
> ./Fall3d.x TASK namelist.inp
in order to perform the task TASK. Here, Fall3d.x refers to the executable file in the bin folder generated during the installation and namelist.inp is your namelist input file (see section Namelist file for a detailed description of the structure of this file). The available tasks are:
- SetTgsd
- SetDbs
- SetSrc
- FALL3D
- All
- PosEns
- PosVal
For more details about the different model tasks, see the section Model tasks.
If FALL3D was compiled with the MPI option, run the model in parallel mode by typing:
> mpirun -n np Fall3d.x Task name.inp [NPX] [NPY] [NPZ] [-nens SIZE]
where:
- np : number of processors
- NPX, NPY, NPZ : number of processors along the dimensions x, y, and z in the domain decomposition (default=1)
- SIZE : ensemble size (default=1)
Notes:
- The execution command for MPI runs may be different depending on the MPI installation (e.g., mpirun or mpiexec)
- The total number of processors is NPX \(\times\) NPY \(\times\) NPZ \(\times\) SIZE and should be equal to the argument np
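Before launching an MPI run it can help to double-check that np matches the decomposition and ensemble arguments. A trivial helper (not part of FALL3D):

```python
def total_ranks(npx=1, npy=1, npz=1, size=1):
    """Total number of MPI processes implied by the domain decomposition
    (NPX x NPY x NPZ) and the ensemble size; this must match the value
    passed to mpirun -n."""
    return npx * npy * npz * size

# e.g. mpirun -n 8 Fall3d.x FALL3D name.inp 4 2 1
assert total_ranks(4, 2, 1) == 8
```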
A log file name.Task.log will be generated to track the task execution and to report eventual warnings and errors. If the job is successful, the last line printed in the log file should be:
Task FALL3D : ends NORMALLY
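In scripted workflows the log tail can be checked programmatically. A small hypothetical helper (not shipped with FALL3D) based on the end-of-log message shown above:

```python
def run_ended_normally(log_text, task="FALL3D"):
    """Return True if the last non-empty line of a FALL3D log reports a
    normal end of the given task (hypothetical convenience helper)."""
    lines = [ln.strip() for ln in log_text.splitlines() if ln.strip()]
    if not lines:
        return False
    last = lines[-1]
    return last.startswith(f"Task {task}") and last.endswith("ends NORMALLY")

# Usage with a minimal sample log tail:
ok = run_ended_normally("Number of errors : 0\nTask FALL3D : ends NORMALLY")
```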
Ensemble runs
From version 8.1 onwards, ensemble simulations can be performed with FALL3D as a single model task. Ensemble-based approaches can give a deterministic product based on some combination of the single ensemble members (e.g. the ensemble mean) and, as opposed to single deterministic runs, attach to it an objective quantification of the forecast uncertainty. Ensemble runs can also furnish probabilistic products based on the fraction of ensemble members that verify a certain (threshold) condition: for example, the probability of the cloud being detected by satellite-based instrumentation, the probability that the cloud mass concentration compromises the safety of air navigation, or the probability of particle fallout or of aerosol concentration at the surface exceeding regulatory values for impacts on infrastructure or air quality.
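A probabilistic product of this kind reduces, at each grid point, to the fraction of ensemble members exceeding a threshold. A minimal pure-Python sketch, unrelated to the actual PosEns implementation:

```python
def exceedance_probability(member_values, threshold):
    """Fraction of ensemble members whose value exceeds a threshold
    (e.g. ash column mass load) at a single grid point."""
    n = len(member_values)
    return sum(1 for v in member_values if v > threshold) / n

# 8-member ensemble of, say, column mass load (g/m2) at one grid point
# (values are made up for illustration):
p = exceedance_probability([0.1, 0.5, 2.0, 3.1, 0.0, 4.2, 1.9, 2.5],
                           threshold=2.0)
```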
In order to perform an ensemble run, FALL3D must be executed with the optional argument -nens, defining the ensemble size, set to a value greater than one. For example, the following command will generate a 24-member ensemble and perform the FALL3D task for each ensemble member:
> mpirun -np 24 Fall3d.x FALL3D name.inp -nens 24
A new folder structure will be created and the results for each ensemble member will be organized in different sub-folders.
Notes:
- Ensemble simulations are only supported in parallel mode
Example test case
The FALL3D distribution includes multiple test cases (benchmark suite) that can be downloaded from the central repository. These cases are used to check the model configuration and installation rather than to perform accurate simulations for event reconstruction.
Let's run the example case from the Example folder. First, enter the folder and use the wget command to download the input meteorological file required by FALL3D:
> cd Example
> wget https://gitlab.com/fall3d-distribution/testsuite/-/raw/master/example-8.0/InputFiles/example-8.0.wrf.nc
You can execute FALL3D in serial mode using the input file Example.inp with the following command:
> ../bin/Fall3d.x All Example.inp
Notes:
- Depending on your installation, the executable file name Fall3d.x can be different
- More input files for testing can be downloaded from the Benchmark suite repository
Checking the results
If the run was successful, you should obtain a log file Example.Fall3d.log with the end message:
Number of warnings : 0
Number of errors : 0
Task FALL3D : ends NORMALLY
The results of the simulation are stored in the output netCDF file Example.res.nc. For a quick view of the output file, you can use the ncview tool to graphically display netCDF files:
> ncview Example.res.nc
Running FALL3D using Docker
Another method to install and run FALL3D is by using Docker containers. The official FALL3D repository has a section called "Container Registry", which contains the two available FALL3D images, built for the Linux AMD64 and Mac ARM64 platforms.
The only prerequisite is to install Docker on your operating system. Refer to the Docker documentation for installation instructions for your distribution.
Once you have installed Docker, download the image from the image repository using this command:
- If you are using a Linux AMD64 distribution:
> docker pull registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_linux:latest
- If you are using a Mac ARM64 distribution:
> docker pull registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_mac_arm64v8:latest
The following sections describe, depending on your platform, how to run FALL3D inside the container or using Docker volumes.
Linux distribution
Run image inside the container and run Fall3D example
- Run the image
> docker run --rm --name fall3d -it registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_linux:latest
- Run the example in serial mode:
> cd fall3d/Example
> ./Fall3d.r8.x All Example.inp
- Run the example in parallel mode:
Note: adapt this command depending on your available resources. The 8 processes result from the 4 \(\times\) 2 \(\times\) 1 domain decomposition.
> cd fall3d/Example
> mpirun -n 8 --allow-run-as-root ./Fall3d.r8.x All Example.inp 4 2 1
Run Fall3D with docker volumes
- Create the volume:
> docker volume create fall3d_files
- Inside this new volume you have to include these files (you can obtain them by cloning the FALL3D repository):
> ls /var/lib/docker/volumes/fall3d_files/_data
- Output:
Example-8.0.wrf.nc Example.inp Example.pts Example.tgsd
- Open the Example.inp file and insert the complete path of the Example-8.0.wrf.nc file:
METEO_DATA_FORMAT = WRF
METEO_DATA_DICTIONARY_FILE =
METEO_DATA_FILE = /home/fall3d/Example/Example-8.0.wrf.nc
METEO_ENSEMBLE_BASEPATH = ./
METEO_LEVELS_FILE = ../Other/Meteo/Tables/L137_ECMWF.levels
- Save and close the file
Execute the container using the docker volume
- Serial:
> docker run --rm -it --name fall3d-test -v fall3d_files:/home/fall3d/Example registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_linux:latest /home/fall3d/bin/Fall3d.r8.x All /home/fall3d/Example/Example.inp
- Parallel:
> docker run --rm -it --name fall3d-test -v fall3d_files:/home/fall3d/Example registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_linux:latest mpirun -n 8 --allow-run-as-root /home/fall3d/bin/Fall3d.r8.x All /home/fall3d/Example/Example.inp 4 2 1
MAC distribution
Run image inside the container and run Fall3D example
- To run the image
> docker run --rm --name fall3d -it registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_mac_arm64v8:latest sh
Run the FALL3D example:
> cd fall3d/Example
- Serial mode:
> ./Fall3d.r8.x All Example.inp
- Parallel Mode:
Note: adapt this command depending on your available resources. The 8 processes result from the 4 \(\times\) 2 \(\times\) 1 domain decomposition.
> mpirun -n 8 --allow-run-as-root ./Fall3d.r8.x All Example.inp 4 2 1
Run Fall3D with docker volumes
- Create the volume:
> docker volume create --opt type=none --opt device=/Users/geo3bcn/Documents/DT-GEO/Dockers_MacOS --opt o=bind fall3d_files
- docker volume create : the main command used to create a Docker volume
- --opt type=none : specifies that the volume type is "none", indicating that Docker will not manage the volume directly
- --opt device=/Users/geo3bcn/Documents/DT-GEO/Dockers_MacOS : specifies the device or path on the host system that will be used as the volume source; in this case, the directory /Users/geo3bcn/Documents/DT-GEO/Dockers_MacOS will be used
- --opt o=bind : specifies the mount options for the volume; here the mount option is set to "bind", meaning that the specified device or path will be directly mounted into the container
- fall3d_files : the name of the Docker volume that will be created; you can use this name to reference the volume when mounting it into containers

In summary, this command creates a Docker volume named fall3d_files backed by the directory /Users/geo3bcn/Documents/DT-GEO/Dockers_MacOS on the host system. With the "bind" type, the contents of that directory are directly accessible within containers that use the volume.
Add the file shared in Docker Desktop
- Open Docker Desktop on your Mac
- Go to "Settings"
- Now go to "Resources" and "File Sharing"
- Add the specified directory used by the created volume, in this case
/Users/geo3bcn/Documents/DT-GEO/Dockers_MacOS
- Press "Apply & restart"
- Inside this new volume you have to include these files (you can obtain them by cloning the FALL3D repository):
> ls /Users/geo3bcn/Documents/DT-GEO/Dockers_MacOS/fall3d_files/_data
- Output:
Example-8.0.wrf.nc Example.inp Example.pts Example.tgsd
- Open the Example.inp file and insert the complete path of the Example-8.0.wrf.nc file:
METEO_DATA_FORMAT = WRF
METEO_DATA_DICTIONARY_FILE =
METEO_DATA_FILE = /home/fall3d/Example/Example-8.0.wrf.nc
METEO_ENSEMBLE_BASEPATH = ./
METEO_LEVELS_FILE = ../Other/Meteo/Tables/L137_ECMWF.levels
- Save and close the file
Execute the container with the volume
- Serial:
> docker run --rm -it --name fall3d-test -v fall3d_files:/home/fall3d/Example registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_mac_arm64v8:latest /home/fall3d/bin/Fall3d.r8.x All /home/fall3d/Example/Example.inp
- Parallel:
> docker run --rm -it --name fall3d-test -v fall3d_files:/home/fall3d/Example registry.gitlab.com/fall3d-suite/fall3d/fall3d_alpine_mac_arm64v8:latest mpirun -n 8 --allow-run-as-root /home/fall3d/bin/Fall3d.r8.x All /home/fall3d/Example/Example.inp 4 2 1
Installation requirements
In order to compile and install FALL3D you need:
- Unix-like operating system (macOS or Linux)
- A modern Fortran compiler (e.g., gfortran, ifort, xlf90)
- Git version management (optional)
The parallel version also requires:
- An MPI library with Fortran support. For example: OpenMPI, Intel MPI or MVAPICH2
In addition, the following optional dependencies are recommended:
- Doxygen, required to generate useful code documentation for developers
- Python (>=3.8), required to run pre-processing and download tools
External libraries
FALL3D requires netCDF libraries. Specifically:
- The netCDF-Fortran library with netCDF-4 support
- For parallel I/O support on classic netCDF files, PnetCDF 1.6.0 or later is required
Getting pre-built netCDF libraries
The easiest way to get netCDF is through a package management program (e.g. apt, rpm, yum, homebrew). NetCDF is available from many different repositories, including the Ubuntu main repositories.
When getting netCDF from a software repository, you should get a development version. A development version will typically have a name such as "netcdf-devel" or "libnetcdf-dev". For example, you can install the netCDF Fortran library in Ubuntu with the command:
sudo apt install libnetcdff-dev
Building netCDF
The netCDF Fortran libraries depend on the netCDF C library which requires third-party libraries for full functionality, including:
- HDF5 1.8.9 or later (for netCDF-4 support)
- zlib 1.2.5 or later (for netCDF-4 compression)
And, optionally,
- PnetCDF 1.6.0 or later (for parallel I/O support on classic netCDF files)
For information regarding the netCDF-C libraries, see Building netCDF-C and for netCDF-Fortran libraries, see Building the NetCDF-4.2 and later Fortran libraries.
Model tasks
In FALL3D, the former pre-process auxiliary programs (SetTgsd, SetDbs and SetSrc) have been parallelized and embedded within the main code, so that a single executable file exists for all the pre-process steps and execution workflow. These formerly independent programs can still be run individually, invoked as model tasks (specified by a program call argument) or, alternatively, concatenated as a single model execution. In the first case, pre-process tasks generate output files that are later furnished as inputs to the model (or to subsequent pre-process tasks). In contrast, the second option does not require intermediate file writing/reading and, therefore, saves disk space and overall computing time.
The next figure shows a diagram of the FALL3D workflow, the data transfer between different tasks and the involved I/O files:
List of model tasks
This is the list of available tasks:
- Task SetTgsd generates particle Total Grain Size Distributions (TGSD) for species of category particles or radionuclides
- Task SetDbs interpolates (for a defined time slab) meteorological variables from the meteorological model grid to the FALL3D computational domain
- Task SetSrc generates emission source terms for the different species. It can also perform a-priori particle aggregation and a TGSD cut-off
- Task FALL3D runs the FALL3D solver
- Task All runs all previous tasks concatenated
- Task PosEns merges and post-processes outputs from single ensemble members. Only for ensemble runs
- Task PosVal validates single or ensemble-based results
All the tasks require a common configuration file as an input, i.e. the namelist file name.inp.
In addition, there are different input and output files associated with each task, as is summarized in the next table:
Task | Namelist file required | Mandatory input files | Optional input files | Output files |
---|---|---|---|---|
SetTgsd | Yes | | Custom distribution | name.specie.tgsd |
SetDbs | Yes | Meteorological file in netCDF format | Dictionary file | name.dbs.nc , name.dbs.pro |
SetSrc | Yes | name.specie.tgsd , name.dbs.pro | | name.specie.grn , name.src |
FALL3D | Yes | name.src , name.specie.grn , name.dbs.nc | name.pts | name.res.nc , name.rst.nc |
All | Yes | Meteorological file in netCDF format | name.pts | name.res.nc , name.rst.nc |
PosEns | Yes | List of name.res.nc files | | name.ens.nc |
PosVal | Yes | name.res.nc or name.ens.nc | | see section Task PosVal |
The task SetEns is not explicitly executed by the user, but is automatically invoked when ensemble runs are performed in order to set up the ensemble. The ensemble run option is enabled whenever FALL3D is executed with the argument -nens, which defines the size of the ensemble.
Notes:
- The model tasks SetEns, PosEns and PosVal have been added from version 8.1.
The namelist file
The namelist file name.inp is an input configuration file in ASCII format required by all tasks.
It consists of a set of blocks defining physical parameterisations, time ranges, model domain, emission source, numerical schemes, etc.
Each task reads only the necessary blocks, generating self-consistent input files.
Parameters within a block are listed one per record, in arbitrary order, and optionally can be followed by one (or more) blank space and a comment.
Comments start with the exclamation mark symbol "!".
Real numbers can be expressed using the FORTRAN notation (e.g. 12E7).
See section Namelist file for a full description of the file structure.
The LOG file
In addition, each task TASK produces a log file name.TASK.log reporting relevant information and potential errors or warnings.
Task SetTgsd
The task SetTgsd generates particle Total Grain Size Distributions (TGSD) for species of category particles or radionuclides.
Input file
This task requires a single input file, i.e. the namelist file name.inp.
Output files
The total grain size distribution file
The Total Grain Size Distribution (TGSD) file name.tgsd.species is generated for each particle and radionuclide species. The TGSD file is an ASCII file containing the definition of the particle bins (each characterized by size, density and sphericity). It is generated by the task SetTgsd or directly furnished by the user. The file format is described in the following table:
Column 1 | Column 2 | Column 3 | Column 4 |
---|---|---|---|
nc | |||
diam(1) | rho(1) | sphe(1) | fc(1) |
... | ... | ... | ... |
diam(nc) | rho(nc) | sphe(nc) | fc(nc) |
where the symbols are:
symbol | definition |
---|---|
nc | Number of bins |
diam | bin diameter in \(mm\) |
rho | bin density in \(kg~m^{-3}\) |
sphe | bin sphericity |
fc | bin mass fraction satisfying \(\sum fc = 1\) |
The following is an example with 12 bins:
12
4.000000 1200.0 0.900 0.182820520E+00
2.000000 1200.0 0.900 0.143339097E+00
1.000000 1357.1 0.900 0.118086765E+00
0.500000 1514.3 0.900 0.101542551E+00
0.250000 1671.4 0.900 0.127680856E+00
0.125000 1828.6 0.900 0.143945117E+00
0.062500 1985.7 0.900 0.109276263E+00
0.031250 2142.9 0.900 0.531505247E-01
0.015625 2300.0 0.900 0.164749370E-01
0.007812 2300.0 0.900 0.324367807E-02
0.003906 2300.0 0.900 0.405760887E-03
0.001953 2300.0 0.900 0.339308115E-04
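A file with this layout can be parsed and checked in a few lines. The helper below is hypothetical (not shipped with FALL3D) and verifies the mass-fraction constraint \(\sum fc = 1\):

```python
def read_tgsd(text):
    """Parse a TGSD file: first record is the number of bins nc, followed
    by nc records with diameter (mm), density (kg/m3), sphericity and mass
    fraction. Raises if the fractions do not sum to ~1.
    Hypothetical helper, not part of the FALL3D distribution."""
    tokens = text.split()
    nc = int(tokens[0])
    bins = []
    for i in range(nc):
        diam, rho, sphe, fc = (float(t) for t in tokens[1 + 4*i : 5 + 4*i])
        bins.append({"diam": diam, "rho": rho, "sphe": sphe, "fc": fc})
    total = sum(b["fc"] for b in bins)
    if abs(total - 1.0) > 1e-4:
        raise ValueError(f"mass fractions sum to {total}, expected 1")
    return bins

# Usage with a small made-up 3-bin distribution:
sample = """\
3
4.000000  1200.0  0.900  0.50
1.000000  1357.1  0.900  0.30
0.250000  1671.4  0.900  0.20
"""
bins = read_tgsd(sample)
```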
Task SetDbs
The task SetDbs interpolates (for a defined time slab) meteorological variables from the meteorological model grid to the FALL3D computational domain.
Input files
This task requires two input files, i.e. the namelist file name.inp and a gridded dataset of meteorological variables in a netCDF file.
Task SetSrc
The task SetSrc generates emission source terms for the different species. It can also perform a-priori particle aggregation and apply a TGSD cut-off.
Input files
This task requires the namelist file name.inp and the files:
- name.tgsd.species generated by the task SetTgsd with the Total Grain Size Distributions (TGSD) for each species
- name.dbs.pro profile file generated by the task SetDbs
Output files
The granulometry file
The granulometry file name.grn is an ASCII file containing information about each bin, including aggregates (if any) and the definition of effective bins. This file is created by the SetSrc task from the preliminary TGSD file(s) (i.e., name.tgsd.species). The file format is described in the table below:
Column 1 | Column 2 | Column 3 | Column 4 | Column 5 | Column 6 | Column 7 | Column 8 | Column 9 |
---|---|---|---|---|---|---|---|---|
nc | nc_eff | |||||||
diam(1) | rho(1) | sphe(1) | fc(1) | cat_code(1) | spe_code(1) | spe_name(1) | spe_tag(1) | active(1) |
... | ... | ... | ... | ... | ... | ... | ... | ... |
diam(nc) | rho(nc) | sphe(nc) | fc(nc) | cat_code(nc) | spe_code(nc) | spe_name(nc) | spe_tag(nc) | active(nc) |
where the symbols mean:
symbol | definition |
---|---|
nc | total number of bins |
nc_eff | number of effective bins |
diam | bin diameter in \(mm\) |
rho | bin density in \(kg~m^{-3}\) |
sphe | bin sphericity |
fc | bin mass fraction satisfying \(\sum fc = 1\) |
cat_code | bin category code (see Table below) |
spe_code | bin species code (see Table below) |
spe_name | bin species name (see Table below) |
spe_tag | bin tag name (for descriptive purposes only) |
active | logical value specifying whether the bin is effective |
The table below specifies the codes assigned to different species:
category type | category code | species code | species name |
---|---|---|---|
particle | 1 | 1 | tephra |
particle | 1 | 2 | dust |
aerosol | 2 | 3 | H2O |
aerosol | 2 | 4 | SO2 |
radionuclide | 3 | 5 | CS134 |
radionuclide | 3 | 6 | CS137 |
radionuclide | 3 | 7 | I131 |
radionuclide | 3 | 8 | SR90 |
radionuclide | 3 | 9 | Y90 |
Example with 7 classes without cut-off:
7 7
4.000000 1200.0 0.900 0.137572886E+00 1 1 tephra lapilli-01 T
1.000000 1357.1 0.900 0.924286798E-01 1 1 tephra coarse_ash-01 T
0.250000 1671.4 0.900 0.194773804E+00 1 1 tephra coarse_ash-02 T
0.062500 1985.7 0.900 0.384212886E+00 1 1 tephra fine_ash-01 T
0.015625 2300.0 0.900 0.175148480E+00 1 1 tephra fine_ash-02 T
0.003906 2300.0 0.900 0.158632644E-01 1 1 tephra fine_ash-03 T
0.001000 1000.0 1.000 0.100000000E-01 2 4 SO2 SO2 T
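A small hypothetical reader for this .grn layout (illustrative, not part of the FALL3D distribution) shows how the header and the per-bin records fit together:

```python
def read_grn(text):
    """Parse a granulometry (.grn) file: header 'nc nc_eff' followed by nc
    records with diameter, density, sphericity, mass fraction, category
    code, species code, species name, tag and active flag (T/F).
    Hypothetical helper for illustration only."""
    lines = [ln.split() for ln in text.splitlines() if ln.strip()]
    nc, nc_eff = int(lines[0][0]), int(lines[0][1])
    bins = []
    for f in lines[1:1 + nc]:
        bins.append({
            "diam": float(f[0]), "rho": float(f[1]), "sphe": float(f[2]),
            "fc": float(f[3]), "cat_code": int(f[4]), "spe_code": int(f[5]),
            "name": f[6], "tag": f[7], "active": f[8] == "T",
        })
    # number of active bins must match the declared effective count
    assert sum(b["active"] for b in bins) == nc_eff
    return bins

# Usage with a made-up 2-bin file:
sample = """\
2 2
4.000000 1200.0 0.900 0.60 1 1 tephra lapilli-01 T
0.062500 1985.7 0.900 0.40 1 1 tephra fine_ash-01 T
"""
grn = read_grn(sample)
```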
The source file
The source file name.src is an ASCII file containing the definition of the source term. The source can be defined for different time phases, during which source values are kept constant. The number, position and values (i.e., mass flow rates) of the source points can vary from one time slice to another, and time slices cannot overlap. There is no restriction on the number and duration of the time slices, which in practice allows discretizing any kind of source term. This file can be defined directly by the user or generated by the SetSrc task. The format of the file is described in the next table (nc=6 is assumed in the example):
Column 1 | Column 2 | Column 3 | Column 4 | Column 5 | Column 6 |
---|---|---|---|---|---|
time1 | time2 | ||||
nsrc | nc | ||||
MFR | |||||
x(1) | y(1) | z(1) | src(1,1) | ... | src(1,nc) |
... | ... | ... | ... | ... | ... |
x(nsrc) | y(nsrc) | z(nsrc) | src(nsrc,1) | ... | src(nsrc,nc) |
This block is repeated for each eruption phase.
The symbols mean:
symbol | definition |
---|---|
time1 | Source phase starting time in seconds after 00:00 UTC of the run day |
time2 | Source phase end time in seconds after 00:00 UTC of the run day |
nsrc | Number of source points for this phase. It can vary from one phase to another |
nc | Total number of (effective) bins |
MFR | Mass Flow Rate in \(kg~s^{-1}\) |
x(i) | Longitude of the source i |
y(i) | Latitude of the source i |
z(i) | Height in meters a.g.l. of the source i |
src(i,j) | Mass flow rate in \(kg~s^{-1}\) of bin j at source point i |
The mass flow rate must satisfy \(\sum \sum src(i,j) = MFR\).
Example for an emission column with 4 particle classes corresponding to a 1-h phase:
75600 79200
100 4
0.121336522E+08
-72.112200 -40.582800 1520. 0.977000246E+02 0.326596193E+02 0.171776303E+02 0.740473318E+01
-72.112200 -40.582800 1620. 0.113424788E+03 0.379161663E+02 0.199423601E+02 0.859652075E+01
(...)
-72.112200 -40.582800 11320. 0.334110824E+01 0.111688122E+01 0.587434061E+00 0.253224246E+00
-72.112200 -40.582800 11420. 0.000000000E+00 0.000000000E+00 0.000000000E+00 0.000000000E+00
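The consistency condition \(\sum \sum src(i,j) = MFR\) can be verified per phase. A hypothetical Python check (the sample numbers below are made up for illustration):

```python
def check_mfr(mfr, src, tol=1e-6):
    """Verify that the mass flow rates of all source points and bins in a
    phase add up to the declared MFR (within a relative tolerance).
    src is a list of rows, one per source point, each with nc bin values.
    Hypothetical helper, not part of the FALL3D distribution."""
    total = sum(sum(row) for row in src)
    return abs(total - mfr) <= tol * max(abs(mfr), 1.0)

# Usage: two source points, three bins (illustrative values):
src = [[10.0, 5.0, 1.0],
       [20.0, 8.0, 6.0]]
ok = check_mfr(50.0, src)
```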
Task FALL3D
The task FALL3D runs the FALL3D solver.
Input files
This task requires the namelist file name.inp and the files:
- name.grn, the granulometry file created by the SetSrc task. It specifies the relative fractions and properties of each bin, including which bins are effective
- name.src, the emission source file created by the SetSrc task. It specifies the mass flow rate released at source points (e.g. the eruptive column)
- name.dbs.nc, the meteo database file in netCDF format created by the SetDbs task. It includes the meteorological variables interpolated to the computational grid
- name.pts (optional), a file specifying a list of tracking points where the time series of relevant variables (e.g., surface PM10 concentration) will be output
Output files
LOG file
The LOG file name.Fall3d.log is an ASCII file where critical information about the simulation run is stored. The contents of this file include:
- FALL3D copyright
- Code version
- Number of processors
- Starting time of the simulation
- Input and output files
- Time range and grid of the meteorological database
- FALL3D input data (e.g., time range, numerical parameters, output options, etc)
- Memory requirements
- Source terms features
- Particle classes
- Atmospheric properties, horizontal, and vertical diffusion
- Terminal velocities
- Main parameters of the gravity current model
- Updates about the simulation such as iteration number, critical time step, elapsed time, current simulation time, and a mass balance for the total mass (inside and outside the computational domain) and the erupted mass
- Warnings and errors
Result file
The result file name.res.nc, written in netCDF format, contains the following output variables (for each species):
- Bin properties (diameter, density, and sphericity)
- Topography
- Ground load and, if specified in the control input file, class ground load
- Wet deposition and, if specified in the control input file, class wet deposition
- Deposit thickness
- Total and PMxx (xx = 5, 10, 20) concentration at ground level
- Total and PMxx (xx = 5, 10, 20) column mass load (vertical integration of concentration)
- Concentration at different flight levels
- Total and class concentration at all model layers (if specified in the control input file only)
The output variables are defined by the block MODEL_OUTPUT of the namelist file.
Restart file
The restart file name.rst.nc, written in netCDF format, can be used to start a new run from the end of a previous simulation. This file is automatically generated by FALL3D at the times specified in the block MODEL_OUTPUT.
A run is initialized with the data stored in the restart file name.rst.nc when INITIAL_CONDITION = RESTART is set in the block TIME_UTC.
Task SetEns
The task SetEns runs automatically, before any other task, whenever ensemble runs are performed. It generates and sets the ensemble members by perturbing a reference state that defines the so-called central or reference member. This task also creates a structure of sub-folders (0001, 0002, etc.) where the input and output files for each ensemble member are stored. In order to enable ensemble runs, FALL3D must be executed with the argument -nens, which defines the ensemble size. The parameters to be perturbed are defined in the block ENSEMBLE of the configuration file. In this block it is also possible to set the perturbation amplitude (given as a percentage of the reference value or in absolute terms) and the perturbation sampling strategy (constant or Gaussian distribution).
Notes:
- The ENSEMBLE block in the configuration file is ignored when the ensemble option is disabled
- This task is available since code version 8.1
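The perturbation logic described above can be illustrated with a short sketch. This is a schematic example only, not FALL3D's implementation: the function name is hypothetical, "constant" sampling is interpreted here as uniform sampling within the amplitude, and the Gaussian standard deviation of A/3 is an illustrative choice.

```python
import random

def perturb(reference, amplitude, relative=True, sampling="gaussian", rng=None):
    """Return a perturbed copy of a reference parameter value.

    amplitude: perturbation amplitude, either as a fraction of the reference
               value (relative=True) or in absolute terms (relative=False).
    sampling:  "uniform" draws from [-A, +A]; "gaussian" draws from a normal
               distribution with standard deviation A/3, so that ~99.7% of
               samples fall within [-A, +A].
    """
    rng = rng if rng is not None else random.Random()
    A = amplitude * reference if relative else amplitude
    if sampling == "gaussian":
        delta = rng.gauss(0.0, A / 3.0)
    else:
        delta = rng.uniform(-A, A)
    return reference + delta

# Example: 4 members perturbing a 10 km eruption column height by +/-20%
rng = random.Random(42)
members = [perturb(10000.0, 0.20, rng=rng) for _ in range(4)]
```

Each member would then run with its own perturbed value, stored under its sub-folder (0001, 0002, etc.).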
Input file
This task requires the namelist file name.inp and, optionally, the random number files name.ens located under the ensemble member sub-folders. These files contain the random numbers required to construct the ensemble and must be read whenever an ensemble run is performed and the variable RANDOM_NUMBERS_FROM_FILE is set to YES in the configuration file.
Output files
LOG file
The LOG file name.SetEns.log is an ASCII file where the status of the task is reported. In addition, the random numbers used to construct the ensemble are summarized here.
Random number files
The random number files name.ens under the ensemble member sub-folders contain the random numbers required to construct the ensemble. The random numbers are regenerated and the files name.ens updated in every new run as long as the variable RANDOM_NUMBERS_FROM_FILE is set to NO in the configuration file. Otherwise, the files name.ens are read in a new ensemble run and remain unchanged; this is useful, for example, to reproduce a previous ensemble run. If the files name.ens do not exist (e.g. in a first run), they are generated regardless of the value assigned to RANDOM_NUMBERS_FROM_FILE.
Task PosEns
The task PosEns is a post-processing task to be executed after an ensemble run has been performed. It merges the individual member outputs of the ensemble run, computes deterministic and probabilistic variables, and produces a single output file name.ens.nc in netCDF format. The required ensemble outputs are specified in the ENSEMBLE_POST block and include the ensemble mean, median, or user-defined percentiles.
Notes:
- Only supported in parallel mode
- This task is available since code version 8.1.
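The ensemble products mentioned above (mean, median, percentiles, and probabilistic variables) can be sketched for a single grid cell. This is a generic illustration with made-up values and a hypothetical helper function, not the PosEns implementation:

```python
import statistics

def percentile(values, q):
    """Percentile (q in [0, 100]) with linear interpolation between ranks."""
    s = sorted(values)
    pos = (q / 100.0) * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1.0 - frac) + s[hi] * frac

# Hypothetical column mass loads (g/m2) of one grid cell across 5 members
member_values = [0.8, 1.2, 0.5, 2.0, 1.0]

ens_mean   = statistics.mean(member_values)    # deterministic product
ens_median = statistics.median(member_values)  # deterministic product
p95        = percentile(member_values, 95.0)   # user-defined percentile
# Probabilistic product: fraction of members exceeding a threshold
prob_exceed = sum(v > 1.0 for v in member_values) / len(member_values)
```

In practice the same reductions are applied cell by cell over the whole output grid.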
Input file
This task requires a single input file, i.e. the namelist file name.inp.
Output files
LOG file
The LOG file name.PosEns.log is an ASCII file where the status of the task is reported.
Result file
The result file name.ens.nc contains multiple ensemble outputs as specified by the block ENSEMBLE_POST.
Task PosVal
The task PosVal is used to validate both single-run (compatible with code versions 8.x) and ensemble-based deterministic and/or probabilistic outputs against various types of gridded and scattered observation datasets. Observation datasets include satellite-based observations and quantitative retrievals (to validate against cloud column mass), deposit isopach/isopleth maps, and point-wise deposit observations (to validate against deposit thickness or mass load). In all cases, this task reads the required files, interpolates model and observations onto the same grid, and computes a series of categorical and quantitative validation metrics that are detailed in the following Section. This validation task inherits the model domain decomposition and, consequently, all metrics are first computed (in parallel) over each spatial sub-domain and then gathered and summed to get global results over the whole computational domain.
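The distinction between quantitative and categorical metrics can be illustrated with a generic sketch. The scores below (bias, RMSE, probability of detection, false alarm ratio) are standard verification metrics, not necessarily the exact set computed by PosVal, and the function name is hypothetical:

```python
import math

def validation_metrics(model, obs, threshold):
    """Quantitative (bias, RMSE) and categorical (POD, FAR) scores for model
    values and observations co-located on a common grid."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    # Contingency counts for exceedance of a threshold (e.g. a mass load)
    hits   = sum(1 for m, o in zip(model, obs) if m >= threshold and o >= threshold)
    misses = sum(1 for m, o in zip(model, obs) if m <  threshold and o >= threshold)
    false_alarms = sum(1 for m, o in zip(model, obs) if m >= threshold and o < threshold)
    pod = hits / (hits + misses) if (hits + misses) > 0 else float('nan')
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) > 0 else float('nan')
    return {"bias": bias, "rmse": rmse, "POD": pod, "FAR": far}
```

Note that the contingency counts and the error sums are additive across sub-domains, which is why such metrics can be computed in parallel over each sub-domain and then gathered, as described above.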
Meteorological data
In order to run FALL3D, you need to provide meteorological data as input. Multiple datasets are supported, including global model forecasts (GFS), global analyses and re-analyses (GDAS, ERA5), and mesoscale models.
List of supported datasets
This is the list of meteorological models available:
- WRF-ARW: The Advanced Research WRF (ARW) (mesoscale model)
- GFS: The Global Forecast System (global weather forecast model)
- GEFS: The Global Ensemble Forecast System (global weather forecast model)
- ERA5: The ERA5 ECMWF reanalysis in pressure levels (global climate and weather reanalysis)
- ERA5ML: The ERA5 ECMWF reanalysis in model levels (global climate and weather reanalysis)
- IFS: The ECMWF Integrated Forecasting System (global weather forecast model)
- CARRA: The Copernicus Arctic Regional Reanalysis (Arctic regional reanalysis)
ID | Map Projection | Resolution | Time Resolution | Vertical Coordinates | Vertical levels | Period | Format |
---|---|---|---|---|---|---|---|
*Global model forecasts* | | | | | | | |
GFS (NCEP) | Regular lat-lon | 0.25º | 1 h | Isobaric | 41 | +384 h | GRIB2 |
GFS (NCEP) | Regular lat-lon | 0.5º | 3 h | Isobaric | 57 | +384 h | GRIB2 |
GFS (NCEP) | Regular lat-lon | 1.0º | 3 h | Isobaric | 41 | +384 h | GRIB2 |
GEFS (NCEP) | Regular lat-lon | 0.5º | 3 h | Isobaric | 31 | +840 h | GRIB2 |
IFS (ECMWF) | Regular lat-lon | 0.125º | 1 h | Hybrid | 137 | 10 days | netCDF or GRIB2 |
*Global model final analyses and re-analyses* | | | | | | | |
ERA5 | Regular lat-lon | 0.25º | 1 h | Isobaric | 37 | 1940-present | netCDF or GRIB1 |
ERA5ML | Regular lat-lon | 0.25º | 1 h | Hybrid | 137 | 1940-present | netCDF or GRIB2 |
CARRA | Lambert Conformal | 2.5 km | 3 h | Isobaric | 12 | 1990-present | GRIB2 |
*Mesoscale models* | | | | | | | |
WRF-ARW | Regular lat-lon, Lambert Conformal, Mercator, or Polar Stereographic | user-defined | user-defined | Terrain Following or Hybrid | user-defined | user-defined | netCDF |
Utilities and dependencies
The FALL3D distribution package comes with a set of Python utilities for downloading and pre-processing the global meteorological fields required by the model. These Python scripts also allow downloading meteorological fields for a particular region and date range. The following scripts are included in the folder Other/Meteo/Utils:
Script | Comments |
---|---|
era5_pl.py | Crops and downloads ERA5 data on pressure levels (37 vertical levels) from the Copernicus Climate Data Store (CDS) infrastructure |
era5_ml.py | Crops and downloads ERA5 data on model levels (137 vertical levels) from the Copernicus Climate Data Store (CDS) infrastructure |
era5_sfc.py | Crops and downloads ERA5 data at surface from the Copernicus Climate Data Store (CDS) infrastructure |
carra_pl.py | Downloads CARRA data on pressure levels (12 vertical levels) from the Copernicus Climate Data Store (CDS) infrastructure |
carra_sfc.py | Downloads CARRA data at surface from the Copernicus Climate Data Store (CDS) infrastructure |
gfs.py | Crops and downloads National Centers for Environmental Prediction (NCEP) GFS forecasts in native GRIB2 format (pre-processing required) |
gefs.py | Crops and downloads National Centers for Environmental Prediction (NCEP) GEFS forecasts in native GRIB2 format (pre-processing required) |
The Python scripts above require the fall3dutil package, which can be installed on Linux and macOS systems from the Python Package Index (PyPI) using pip. You can use a Python virtual environment for this purpose to avoid conflicts. To create a virtual environment on a typical Linux system, use the built-in venv module:
python3 -m venv fall3d_env
source fall3d_env/bin/activate
This will create a new virtual environment in the fall3d_env subdirectory and configure the current shell to use it as the default Python environment. For more information click here. Once the virtual environment is activated, you can install the latest version of the fall3dutil package in fall3d_env using:
pip install fall3dutil
If you prefer not to work in a virtual environment, you can instead execute:
python3 -m pip install --user fall3dutil
ERA5: Getting and merging data
ERA5 is the fifth-generation ECMWF reanalysis for the global climate and weather from 1940 onwards. Data regridded to a regular lat-lon grid of 0.25 degrees can be obtained from the Climate Data Store. Alternatively, we provide scripts that automate the downloading process, select the variables required by FALL3D, crop the domain, etc.
Important note: In order to use the Climate Data Store (CDS) API, you need to have an account and install a key as explained here (see Section Install the CDS API key).
Surface data
You need to obtain surface and upper level data in different requests. The script era5_sfc.py can be used to retrieve surface data.

- era5_sfc.py: Download ERA5 data at surface level via the Climate Data Store (CDS) infrastructure.

Command line options:

usage: era5_sfc.py [-h] [-d start_date end_date] [-x lonmin lonmax]
                   [-y latmin latmax] [-r resolution] [-s step] [-b block]
                   [-i file] [-v]

Download ERA5 data (single level) required by FALL3D model.

options:
  -h, --help            show this help message and exit
  -d start_date end_date, --date start_date end_date
                        Date range in format YYYYMMDD
  -x lonmin lonmax, --lon lonmin lonmax
                        Longitude range
  -y latmin latmax, --lat latmin latmax
                        Latitude range
  -r resolution, --res resolution
                        Spatial resolution (deg)
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
Upper level data
In addition to surface data, you need to obtain upper level data as well. You can choose between model level data (high vertical resolution) or pressure level data (low vertical resolution).
- era5_ml.py: Download ERA5 meteorological data on model levels (137 vertical levels) via the Climate Data Store (CDS) infrastructure.

Command line options:

usage: era5_ml.py [-h] [-d start_date end_date] [-x lonmin lonmax]
                  [-y latmin latmax] [-r resolution] [-s step] [-b block]
                  [-i file] [-v]

Download ERA5 data (model levels) required by FALL3D model.

options:
  -h, --help            show this help message and exit
  -d start_date end_date, --date start_date end_date
                        Date range in format YYYYMMDD
  -x lonmin lonmax, --lon lonmin lonmax
                        Longitude range
  -y latmin latmax, --lat latmin latmax
                        Latitude range
  -r resolution, --res resolution
                        Spatial resolution (deg)
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
- era5_pl.py: Download ERA5 data on pressure levels (37 vertical levels) via the Climate Data Store (CDS) infrastructure.

Command line options:

usage: era5_pl.py [-h] [-d start_date end_date] [-x lonmin lonmax]
                  [-y latmin latmax] [-r resolution] [-s step] [-b block]
                  [-i file] [-v]

Download ERA5 data (pressure levels) required by FALL3D model.

options:
  -h, --help            show this help message and exit
  -d start_date end_date, --date start_date end_date
                        Date range in format YYYYMMDD
  -x lonmin lonmax, --lon lonmin lonmax
                        Longitude range
  -y latmin latmax, --lat latmin latmax
                        Latitude range
  -r resolution, --res resolution
                        Spatial resolution (deg)
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
Merging data
Once downloaded, ERA5 data on upper levels (pressure or model levels) and at the surface have to be merged in order to generate a complete database for the FALL3D model. For this purpose, you can run the command:
cdo merge upper_level.nc surface.nc merged_meteo.nc
to generate the merged_meteo.nc file that will be ingested into FALL3D.
Important notes:
- Single level and pressure level parameters are available on the C3S Climate Data Store (CDS) disks.
- ERA5 data on model levels is not available on the CDS disks, but can be accessed from the ECMWF data archive (MARS). However, MARS access is relatively slow.
- You may need to download very large datasets using multiple requests. In this case, you should download data from different time ranges separately and then concatenate the files.
- Note that you cannot concatenate the files directly because of the packed format of the netCDF files (i.e., data is stored as 16-bit short variables). You can convert the data to float and perform the concatenation in a single command using:
cdo -b F32 mergetime april.nc may.nc output.nc
This generates a correct output file, output.nc, for the total period April-May.
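The packing pitfall behind the last note can be sketched in a few lines of Python. In the CF packed-data convention, each file stores 16-bit integers together with its own scale_factor/add_offset attributes fitted to that file's data range; integers from one file decoded with another file's attributes yield wrong values, which is why a naive concatenation breaks and why converting to float first (cdo -b F32) is safe. All numbers and attribute values below are purely illustrative:

```python
# CF-style packing: value = scale_factor * packed + add_offset, where
# "packed" is a 16-bit integer spanning each file's own data range.
def pack(value, scale, offset):
    """Encode a physical value as a packed integer."""
    return round((value - offset) / scale)

def unpack(packed, scale, offset):
    """Decode a packed integer back to a physical value."""
    return scale * packed + offset

t_april = 250.0                   # a temperature (K) stored in april.nc
scale_a, offset_a = 0.001, 260.0  # attributes fitted to april.nc's range
scale_m, offset_m = 0.003, 265.0  # may.nc spans a different range

p = pack(t_april, scale_a, offset_a)
good = unpack(p, scale_a, offset_a)  # decoded with april.nc attributes
bad  = unpack(p, scale_m, offset_m)  # decoded with may.nc attributes: wrong
```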
GFS: Getting data
The Global Forecast System (GFS) is a National Centers for Environmental Prediction (NCEP) weather forecast model that generates data for dozens of atmospheric and land-soil variables, including temperatures, winds, precipitation, soil moisture, and atmospheric ozone concentration. GFS is a global model with a base horizontal resolution of 18 miles (28 kilometers) between grid points. Temporal resolution covers analysis and forecasts out to 16 days. Horizontal resolution drops to 44 miles (70 kilometers) between grid points for forecasts between one week and two weeks (see more details here).
Data from the last 10 days can be obtained from the NOAA Operational Model Archive and Distribution System (NOMADS). Alternatively, we provide scripts that automate the downloading process, select the variables required by FALL3D, crop the domain, etc.
GFS data in GRIB2 format
A script is provided to download the NCEP operational Global Forecast System (GFS) forecasts in GRIB2 format (conversion required):
- gfs.py: Download GFS forecasts in GRIB2 format. This option needs further conversion to netCDF format before data is ingested by FALL3D.

Command line options:

usage: gfs.py [-h] [-d start_date] [-x lonmin lonmax] [-y latmin latmax]
              [-t tmin tmax] [-r resolution] [-c cycle] [-s step] [-b block]
              [-i file] [-v]

Download the NCEP operational Global Forecast System (GFS) analysis and
forecast data required by the FALL3D model

options:
  -h, --help            show this help message and exit
  -d start_date, --date start_date
                        Initial date in format YYYYMMDD
  -x lonmin lonmax, --lon lonmin lonmax
                        Longitude range
  -y latmin latmax, --lat latmin latmax
                        Latitude range
  -t tmin tmax, --time tmin tmax
                        Forecast time range (h)
  -r resolution, --res resolution
                        Spatial resolution (deg)
  -c cycle, --cycle cycle
                        Cycle
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
GFS data in netCDF format
FALL3D requires input meteorological data in netCDF format. Consequently, GFS data must be concatenated and converted from GRIB2 to obtain a single netCDF file. For this purpose, the grib utility wgrib2 can be used (more information here). As an example, a bash script is included in the FALL3D distribution to perform the conversion: Other/Meteo/Utils/grib2nc.sh. Just edit the file header according to your GFS files:
########## Edit header ##########
WGRIBEXE=wgrib2
OUTPUTFILE=output.nc
TABLEFILE=grib_tables/gfs_0p25.levels
TMIN=0
TMAX=12
STEP=6
CYCLE=12
DATE=20230404
GRIBPATH=~/fall3d/fall3dutil/tests
GRIBFILE (){
fname=${GRIBPATH}/gfs.t${CYCLE}z.pgrb2.0p25.f${HOUR}
echo ${fname}
}
#################################
Important note:
- The variable TABLEFILE specifies the list of vertical levels and depends on the resolution of your GFS data. In the folder grib_tables you'll find the tables for resolutions of 0.25° (gfs_0p25.levels), 0.50° (gfs_0p50.levels), and 1.00° (gfs_1p00.levels). Define TABLEFILE accordingly.
GEFS: Getting data
The Global Ensemble Forecast System (GEFS) is a weather model created by the National Centers for Environmental Prediction (NCEP) that generates 31 separate forecasts (ensemble members) to address underlying uncertainties in the input data, such as limited coverage, instrument or observing-system biases, and the limitations of the model itself. GEFS quantifies these uncertainties by generating multiple forecasts, which in turn produce a range of potential outcomes based on differences or perturbations applied to the data after it has been incorporated into the model. Each forecast compensates for a different set of uncertainties (see more details here).
Data from the last 4 days can be obtained from the NOAA Operational Model Archive and Distribution System (NOMADS). Alternatively, we provide scripts that automate the downloading process, select the variables required by FALL3D, crop the domain, etc.
GEFS data in GRIB2 format
A script is provided to download the NCEP Global Ensemble Forecast System (GEFS) forecasts in GRIB2 format (conversion required):
- gefs.py: Download GEFS forecasts in GRIB2 format. This option needs further conversion to netCDF format before data is ingested by FALL3D.

Command line options:

usage: gefs.py [-h] [-d start_date] [-x lonmin lonmax] [-y latmin latmax]
               [-t tmin tmax] [-e ensmin ensmax] [-r resolution] [-c cycle]
               [-s step] [-b block] [-i file] [-v]

Download data from the atmospheric component of NCEP operational global
ensemble modeling suite, Global Ensemble Forecast System (GEFS), required by
the FALL3D model

options:
  -h, --help            show this help message and exit
  -d start_date, --date start_date
                        Initial date in format YYYYMMDD
  -x lonmin lonmax, --lon lonmin lonmax
                        Longitude range
  -y latmin latmax, --lat latmin latmax
                        Latitude range
  -t tmin tmax, --time tmin tmax
                        Forecast time range (h)
  -e ensmin ensmax, --ens ensmin ensmax
                        Ensemble member range
  -r resolution, --res resolution
                        Spatial resolution (deg)
  -c cycle, --cycle cycle
                        Cycle
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
GEFS data in netCDF format
FALL3D requires input meteorological data in netCDF format. Consequently, GEFS data must be concatenated and converted from GRIB2 to obtain a set of netCDF files. For this purpose, the grib utility wgrib2 can be used (more information here). As an example, a bash script is included in the FALL3D distribution to perform the conversion: Other/Meteo/Utils/grib2nc-ens.sh. Just edit the file header according to your GEFS files:
########## Edit header ##########
WGRIBEXE=wgrib2
OUTPUTFILE=output.nc
TABLEFILE=grib_tables/gefs_0p50.levels
TMIN=0
TMAX=48
STEP=3
CYCLE=12
DATE=20230404
GRIBPATH=./
GRIBFILE (){
fnameA=${GRIBPATH}/gep04.t${CYCLE}z.pgrb2a.0p50.f${HOUR}
fnameB=${GRIBPATH}/gep04.t${CYCLE}z.pgrb2b.0p50.f${HOUR}
}
#################################
CARRA: Getting data
The goal of the Copernicus Arctic Regional Reanalysis (CARRA) system was to produce the first regional atmospheric reanalysis targeted at the European parts of the Arctic. The reanalysis covers the period from September 1990 (>30 years) and is produced on a 2.5 km horizontal mesh. The reanalysis data cover two domains in the European sector of the Arctic (see more information here):
Data can be obtained from the Climate Data Store. Alternatively, we provide scripts that automate the downloading process, select the variables required by FALL3D, crop the domain, etc.
Important note: In order to use the Climate Data Store (CDS) API, you need to have an account and install a key as explained here (see Section Install the CDS API key).
Surface data
You need to obtain surface and upper level data in different requests. The script carra_sfc.py can be used to retrieve surface data.

- carra_sfc.py: Download CARRA data at surface level via the Climate Data Store (CDS) infrastructure.

Command line options:

usage: carra_sfc.py [-h] [-d start_date end_date] [-s step] [-b block]
                    [-i file] [-v]

Download CARRA data (single level) required by FALL3D model.

options:
  -h, --help            show this help message and exit
  -d start_date end_date, --date start_date end_date
                        Date range in format YYYYMMDD
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
Upper level data
In addition to surface data, you need to obtain upper level data as well.
- carra_pl.py: Download CARRA data on pressure levels (12 vertical levels) via the Climate Data Store (CDS) infrastructure.

Command line options:

usage: carra_pl.py [-h] [-d start_date end_date] [-s step] [-b block]
                   [-i file] [-v]

Download CARRA data (pressure levels) required by FALL3D model.

options:
  -h, --help            show this help message and exit
  -d start_date end_date, --date start_date end_date
                        Date range in format YYYYMMDD
  -s step, --step step  Temporal resolution (h)
  -b block, --block block
                        Block in the configuration file
  -i file, --input file
                        Configuration file
  -v, --verbose         increase output verbosity
Merging data
Once downloaded, CARRA data on upper levels (pressure levels) and at the surface in GRIB format have to be converted and merged in order to generate a single netCDF file for the FALL3D model. Unfortunately, the CDS API does not allow domain subsets to be retrieved and, typically, very large files need to be downloaded. This can lead to memory conflicts during the format conversion or merging process. The following simple Python script can be used in these cases:
import xarray as xr
from dask.diagnostics import ProgressBar
import cfgrib

### Parameters ###
fname_out    = 'merged_meteo.nc'
fname_in_sfc = 'surface.grib'
fname_in_pl  = 'pressure.grib'
##################

# Open the GRIB files as lists of xarray datasets
ds_list  = cfgrib.open_datasets(fname_in_pl)
ds_list += cfgrib.open_datasets(fname_in_sfc)

# Remove conflicting level coordinates before merging
keys2remove = ['surface', 'heightAboveGround']
for i, ds in enumerate(ds_list):
    ds_list[i] = ds.drop_vars(keys2remove, errors='ignore')
    if 'sr' in ds:
        print("******Warn******* Correcting coordinates...")
        ds_list[i]['latitude']  = ds_list[0].latitude
        ds_list[i]['longitude'] = ds_list[0].longitude

print("merging....")
ds = xr.merge(ds_list).chunk(chunks={"time": 1})

print("saving...")
# Write the netCDF file lazily, one time chunk at a time, to limit memory use
delayed_obj = ds.to_netcdf(fname_out, compute=False)
with ProgressBar():
    results = delayed_obj.compute()
This generates the merged_meteo.nc file that will be ingested into FALL3D.
Important notes:
- This process gives access to data of the CARRA-west domain. Access to the CARRA-east domain has not been implemented yet.
WRF-ARW
The Advanced Research WRF (WRF-ARW) model is a flexible, state-of-the-art atmospheric simulation system, and is portable and efficient on parallel computing platforms. It is suitable for use across scales, ranging from meters to thousands of kilometers, for a broad range of applications. For detailed information go to the WRF Users' Guide.
Typically, WRF data is not publicly available. However, WRF is used by weather agencies all over the world to generate weather forecasts and, in some cases, data can be ordered by special request. Alternatively, you can run the model yourself, since WRF is an open-source code in the public domain. The following tutorial can be helpful for this purpose.
The namelist file
The namelist file name.inp is an input configuration file in ASCII format required by all tasks.
It consists of a set of blocks defining physical parameterisations, time ranges, model domain, emission source, numerical schemes, etc.
Each task reads only the necessary blocks, generating self-consistent input files.
Parameters within a block are listed one per record, in arbitrary order, and can optionally be followed by one (or more) blank spaces and a comment.
Comments start with the exclamation mark symbol "!".
Real numbers can be expressed using Fortran notation (e.g. 12E7).
An example of a namelist file can be found in the FALL3D distribution under the folder Example/Example.inp.
Next, a full description of each block in the namelist input file is presented.
Block TIME_UTC
This block defines variables related to date and time. It is used by the FALL3D, SetDbs, and SetSrc tasks.

YEAR = (integer)
- Year of the starting date

MONTH = (integer)
- Month of the starting date

DAY = (integer)
- Day of the starting date

RUN_START_(HOURS_AFTER_00) = (float)
- Starting time in hours after 00:00 UTC of the starting day

RUN_END_(HOURS_AFTER_00) = (float)
- End time in hours after 00:00 UTC of the starting day
- Note: Runs can continue even after the source term has been switched off (e.g., when the eruption has stopped)

INITIAL_CONDITION = (options)
- Type of initial concentration field
- Input options:
  - NONE: Initial concentration is zero
  - INSERTION: Initial concentration from an input file
  - RESTART: Initial concentration from a previous run

RESTART_FILE = (string)
- Path to the restart file
- Note: Only used if INITIAL_CONDITION = RESTART

RESTART_ENSEMBLE_BASEPATH = (string)
- Root path for RESTART_FILE
- Optional
- Note: Used for ensemble runs when multiple restart files are available (RESTART_ENSEMBLE_BASEPATH/0001/RESTART_FILE...). If not provided, a single restart file is used for the whole ensemble (RESTART_FILE)
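Putting the parameters above together, a TIME_UTC block might look as follows (the values are illustrative only; see Example/Example.inp in the distribution for a complete reference):

```
TIME_UTC
   YEAR  = 2008
   MONTH = 05
   DAY   = 02
   RUN_START_(HOURS_AFTER_00) = 0
   RUN_END_(HOURS_AFTER_00)   = 24
   INITIAL_CONDITION          = NONE
```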
Block INSERTION_DATA
This block is read by the FALL3D task only if INITIAL_CONDITION = INSERTION

INSERTION_FILE = (string)
- Path to the initial condition file in netCDF format (e.g. a satellite retrieval)

INSERTION_DICTIONARY_FILE = (string)
- Path to the insertion dictionary file defining the variable names. An example can be found in Other/Sat/Sat.tbl
- Optional
- Note: If not given, a default dictionary will be used

INSERTION_TIME_SLAB = (integer)
- Time slab in the insertion file to be used as the initial condition

DIAMETER_CUT_OFF_(MIC) = (float)
- Cut-off diameter in microns. Maximum particle diameter used to define the initial condition
- Optional
Block METEO_DATA
This block defines variables related to the input meteorological dataset. It is read by the SetDbs task.

METEO_DATA_FORMAT = (options)
- Input meteorological model
- Input options:
  - WRF: Advanced Research WRF (WRF-ARW), mesoscale model
  - GFS: Global Forecast System, global weather forecast model
  - ERA5: ERA5 ECMWF reanalysis in pressure levels, global climate and weather reanalysis
  - ERA5ML: ERA5 ECMWF reanalysis in model levels, global climate and weather reanalysis
  - IFS: ECMWF Integrated Forecasting System, global weather forecast model
  - CARRA: Copernicus Arctic Regional Reanalysis in pressure levels, Arctic regional reanalysis

METEO_DATA_DICTIONARY_FILE = (string)
- Path to the database dictionary file defining the variable names. An example can be found in the folder Other/Meteo/Tables
- Optional
- Note: If not given, a default dictionary for each model will be used

METEO_DATA_FILE = (string)
- Path to the meteo model data file in netCDF format

METEO_ENSEMBLE_BASEPATH = (string)
- Root path for METEO_DATA_FILE
- Optional
- Note: Used for ensemble runs when multiple meteo files are available (METEO_ENSEMBLE_BASEPATH/0001/METEO_DATA_FILE...). If not provided, a single meteo file is used for the whole ensemble (METEO_DATA_FILE)

METEO_LEVELS_FILE = (string)
- Path to the table defining the coefficients for vertical hybrid levels
- Note: Only required if METEO_DATA_FORMAT = ERA5ML

DBS_BEGIN_METEO_DATA_(HOURS_AFTER_00) = (float)
- Starting time for the database file in hours after 00:00 UTC of the starting day

DBS_END_METEO_DATA_(HOURS_AFTER_00) = (float)
- End time for the database file in hours after 00:00 UTC of the starting day

METEO_COUPLING_INTERVAL_(MIN) = (float)
- Time interval to update (couple) meteorological variables
- Note: Wind velocity is linearly interpolated in time for each time step

MEMORY_CHUNK_SIZE = (integer)
- Size of memory chunks used to store meteo data timesteps
- Note: Must be greater than 1
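Regarding METEO_LEVELS_FILE: on ECMWF hybrid (model) levels, the pressure at each level is reconstructed from the tabulated coefficients and the surface pressure following the standard ECMWF convention p = a + b*ps. A minimal sketch (the function name and coefficients below are made up for illustration, not the real 137-level ERA5 table):

```python
# ECMWF hybrid (model) levels define pressure as p(k) = a(k) + b(k)*ps,
# where ps is the surface pressure; the levels file tabulates the a (Pa)
# and b (dimensionless) coefficients for each level.
def hybrid_pressure(a, b, ps):
    """Pressure (Pa) at each hybrid level for a given surface pressure ps."""
    return [ak + bk * ps for ak, bk in zip(a, b)]

# Illustrative coefficients for a few levels:
a = [0.0, 2000.0, 4000.0, 0.0]  # Pa; dominates near the model top
b = [0.0, 0.0, 0.02, 1.0]       # -; dominates near the surface
p = hybrid_pressure(a, b, 101325.0)
```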
Block GRID
This block defines the grid variables needed by the SetDbs and FALL3D tasks.

HORIZONTAL_MAPPING = (options)
- Horizontal mapping
- Input options:
  - CARTESIAN: Cartesian mapping
  - SPHERICAL: Spherical mapping

VERTICAL_MAPPING = (options)
- Vertical mapping
- Input options:
  - SIGMA_NO_DECAY: Terrain following levels with no decay
  - SIGMA_LINEAR_DECAY: Terrain following levels with linear decay from the surface to the flat top
  - SIGMA_EXPONENTIAL_DECAY: Terrain following levels with exponential decay from the surface to the flat top

LONMIN = (float)
- West longitude of the domain in decimal degrees

LONMAX = (float)
- East longitude of the domain in decimal degrees

LATMIN = (float)
- South latitude of the domain in decimal degrees

LATMAX = (float)
- North latitude of the domain in decimal degrees

NX = (options)
- Define the number of grid cells or the resolution along dimension x
- Input options:
  - (integer): Number of grid cells (mass points) along x
  - RESOLUTION (float): Resolution (grid size) along x

NY = (options)
- Define the number of grid cells or the resolution along dimension y
- Input options:
  - (integer): Number of grid cells (mass points) along y
  - RESOLUTION (float): Resolution (grid size) along y

NZ = (integer)
- Number of grid cells (mass points) along dimension z

ZMAX_(M) = (float)
- Top height of the computational domain in meters

SIGMA_VALUES = (float_list)
- List of values of the sigma coordinate in the range (0,1). The list size should be less than or equal to NZ+1
- Optional
- Note: If not present, a uniform distribution of vertical layers is assumed
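As an illustration, a GRID block combining the parameters above could look like this (the domain bounds and grid sizes are made-up values, not a recommendation):

```
GRID
   HORIZONTAL_MAPPING = SPHERICAL
   VERTICAL_MAPPING   = SIGMA_LINEAR_DECAY
   LONMIN = -74.0
   LONMAX = -69.0
   LATMIN = -42.5
   LATMAX = -38.5
   NX = 100
   NY = 80
   NZ = 50
   ZMAX_(M) = 20000.
```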
Block SPECIES
This block is used by the FALL3D, SetTgsd, and SetSrc tasks and defines which species are modeled

TEPHRA = (options)
- Indicate if TEPHRA species are included
- Input options:
  - ON: Activate tephra particle species
  - OFF: Deactivate tephra particle species

DUST = (options)
- Indicate if DUST species are included
- Input options:
  - ON: Activate dust particle species
  - OFF: Deactivate dust particle species

H2O = (options)
- Indicate if H2O aerosol species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate H2O aerosol species and set the mass fraction in percent
  - OFF: Deactivate H2O aerosol species
- Note: Aerosols can run independently or coupled with tephra species. If TEPHRA = ON, it is assumed that the aerosol species are all of magmatic origin and the mass fractions of SO2 and H2O are relative to tephra. If TEPHRA = OFF, the mass fractions of aerosols must sum to 1

SO2 = (options)
- Indicate if SO2 aerosol species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate SO2 aerosol species and set the mass fraction in percent
  - OFF: Deactivate SO2 aerosol species
- Note: Aerosols can run independently or coupled with tephra species. If TEPHRA = ON, it is assumed that the aerosol species are all of magmatic origin and the mass fractions of SO2 and H2O are relative to tephra. If TEPHRA = OFF, the mass fractions of aerosols must sum to 1

CS134 = (options)
- Indicate if CS-134 radionuclide species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate Cs-134 radionuclide species and set the mass fraction in percent
  - OFF: Deactivate Cs-134 radionuclide species
- Note: All species of category RADIONUCLIDES can run simultaneously but are incompatible with species of category PARTICLES. The mass fractions of all radionuclides must sum to 1

CS137 = (options)
- Indicate if CS-137 radionuclide species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate Cs-137 radionuclide species and set the mass fraction in percent
  - OFF: Deactivate Cs-137 radionuclide species
- Note: All species of category RADIONUCLIDES can run simultaneously but are incompatible with species of category PARTICLES. The mass fractions of all radionuclides must sum to 1

I131 = (options)
- Indicate if I-131 radionuclide species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate I-131 radionuclide species and set the mass fraction in percent
  - OFF: Deactivate I-131 radionuclide species
- Note: All species of category RADIONUCLIDES can run simultaneously but are incompatible with species of category PARTICLES. The mass fractions of all radionuclides must sum to 1

SR90 = (options)
- Indicate if SR-90 radionuclide species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate Sr-90 radionuclide species and set the mass fraction in percent
  - OFF: Deactivate Sr-90 radionuclide species
- Note: All species of category RADIONUCLIDES can run simultaneously but are incompatible with species of category PARTICLES. The mass fractions of all radionuclides must sum to 1

Y90 = (options)
- Indicate if Y-90 radionuclide species are included
- Input options:
  - ON MASS_FRACTION_(%) = (float): Activate Y-90 radionuclide species and set the mass fraction in percent
  - OFF: Deactivate Y-90 radionuclide species
- Note: All species of category RADIONUCLIDES can run simultaneously but are incompatible with species of category PARTICLES. The mass fractions of all radionuclides must sum to 1
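A hypothetical SPECIES block for a run with tephra plus magmatic aerosols might read as follows (the values and layout are illustrative, not a recommended setup; since TEPHRA = ON, the aerosol mass fractions are relative to tephra):

```
SPECIES
   TEPHRA = ON
   DUST   = OFF
   H2O    = ON   MASS_FRACTION_(%) = 2.
   SO2    = ON   MASS_FRACTION_(%) = 1.
   CS134  = OFF
   CS137  = OFF
   I131   = OFF
   SR90   = OFF
   Y90    = OFF
```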
Block SPECIES_TGSD
These blocks define the TGSD for each species and are used by the SetTgsd task to generate some basic distributions

NUMBER_OF_BINS = (integer)
- Number of bins in the TGSD
- Note: This value can differ from the number of classes defined in the granulometry file *.grn created by the SetSrc task if aggregate bins are included later on or if a cut-off diameter is imposed

FI_RANGE = (float_list)
- A list of two values defining the minimum and maximum diameters in phi units

DENSITY_RANGE = (float_list)
- A list of two values defining the minimum and maximum densities
- Note: Linear interpolation (in phi) is assumed between the extremes

SPHERICITY_RANGE = (float_list)
- A list of two values defining the minimum and maximum values for sphericity
- Note: Linear interpolation (in phi) is assumed between the extremes

DISTRIBUTION = (options)
- Type of TGSD distribution
- Input options:
  - GAUSSIAN: Gaussian distribution
  - BIGAUSSIAN: bi-Gaussian distribution
  - WEIBULL: Weibull distribution
  - BIWEIBULL: bi-Weibull distribution
  - CUSTOM: Custom distribution from an external user file
  - ESTIMATE: TGSD is estimated from column height and magma viscosity (for tephra species only)

Sub-block IF_GAUSSIAN
Used when DISTRIBUTION = GAUSSIAN

FI_MEAN = (float)
- Mean of the Gaussian distribution in phi units

FI_DISP = (float)
- Standard deviation of the Gaussian distribution in phi units

Sub-block IF_BIGAUSSIAN
Used when DISTRIBUTION = BIGAUSSIAN

FI_MEAN = (float_list)
- A list of two values in phi units defining the means of the two Gaussian distributions

FI_DISP = (float_list)
- A list of two values in phi units defining the standard deviations of the two Gaussian distributions

MIXING_FACTOR = (float)
- The mixing factor between the two Gaussian distributions

Sub-block IF_WEIBULL
Used when DISTRIBUTION = WEIBULL

FI_SCALE = (float)
- Scale parameter of the Weibull distribution

W_SHAPE = (float)
- Shape parameter of the Weibull distribution

Sub-block IF_BIWEIBULL
Used when DISTRIBUTION = BIWEIBULL

FI_SCALE = (float_list)
- List of scale parameters for the bi-Weibull distribution

W_SHAPE = (float_list)
- List of shape parameters for the bi-Weibull distribution

MIXING_FACTOR = (float)
- The mixing factor between the two Weibull distributions

Sub-block IF_CUSTOM
Used when DISTRIBUTION = CUSTOM

FILE = (string)
- Path to the file with the custom TGSD

Sub-block IF_ESTIMATE
Used when DISTRIBUTION = ESTIMATE

VISCOSITY_(PAS) = (float)
- Magma viscosity in Pa.s

HEIGHT_ABOVE_VENT_(M) = (float)
- Eruption column height above the vent in meters
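As an illustration, a TGSD block requesting a Gaussian distribution could be written as below. The block name, indentation, and all values are hypothetical; consult the example input file shipped with the code for the exact per-species block naming:

```
SPECIES_TGSD
   NUMBER_OF_BINS   = 6
   FI_RANGE         = -2 8
   DENSITY_RANGE    = 1200 2300
   SPHERICITY_RANGE = 0.9 0.9
   DISTRIBUTION     = GAUSSIAN
   IF_GAUSSIAN
      FI_MEAN = 2.5
      FI_DISP = 1.5
```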
Block PARTICLE_AGGREGATION
This block is used by the SetSrc task and controls particle aggregation and cut-off (for categories PARTICLES and RADIONUCLIDES only)

PARTICLE_CUT_OFF = (options)
- Specifies a cut-off diameter
- Input options:
  - NONE: No cut-off used
  - FI_LOWER_THAN (float): Define an upper cut-off limit in phi units
  - FI_LARGER_THAN (float): Define a lower cut-off limit in phi units
  - D_(MIC)_LARGER_THAN (float): Define an upper cut-off limit for diameter in microns
  - D_(MIC)_LOWER_THAN (float): Define a lower cut-off limit for diameter in microns

AGGREGATION_MODEL = (options)
- Define a particle aggregation parametrization
- Input options:
  - NONE: No aggregation
  - CORNELL: Cornell model (Cornell et al., 1983)
  - COSTA: Costa model (Costa et al., 2010). Valid only for tephra particles and the PLUME source type
  - PERCENTAGE: Percentage model (Sulpizio et al., 2012)

NUMBER_OF_AGGREGATE_BINS = (integer)
- Number of aggregate bins (default value is 1)
- Note: Only used if AGGREGATION_MODEL != NONE

DIAMETER_AGGREGATES_(MIC) = (float_list)
- Diameter of aggregate bins in microns
- Note: Only used if AGGREGATION_MODEL != NONE

DENSITY_AGGREGATES_(KGM3) = (float_list)
- Density of aggregate bins in kg m-3
- Note: Only used if AGGREGATION_MODEL != NONE

PERCENTAGE_(%) = (float_list)
- Percentage of aggregate bins
- Note: Only used if AGGREGATION_MODEL = PERCENTAGE

VSET_FACTOR = (float)
- Multiplicative correction factor for the settling velocity of aggregates
- Note: Only used if AGGREGATION_MODEL = PERCENTAGE

FRACTAL_EXPONENT = (float)
- Fractal exponent (see Costa et al., 2010 for details)
- Note: Only used if AGGREGATION_MODEL = COSTA
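A hypothetical aggregation setup using the percentage model could look like this (all values are illustrative only):

```
PARTICLE_AGGREGATION
   PARTICLE_CUT_OFF          = NONE
   AGGREGATION_MODEL         = PERCENTAGE
   NUMBER_OF_AGGREGATE_BINS  = 1
   DIAMETER_AGGREGATES_(MIC) = 300.
   DENSITY_AGGREGATES_(KGM3) = 350.
   PERCENTAGE_(%)            = 20.
   VSET_FACTOR               = 1.0
```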
Block SOURCE
This block defines the variables needed by the SetSrc task to generate the source term for the emission phases

SOURCE_TYPE = (options)
- Type of source vertical distribution
- Input options:
  - POINT: Point source
  - SUZUKI: Suzuki-type source
  - TOP-HAT: Top-hat source
  - PLUME: Plume source (based on the 1D BPT)
  - RESUSPENSION: Resuspension source

SOURCE_START_(HOURS_AFTER_00) = (float_list)
- List of source start times for each phase
- Note: Alternatively, a file can be provided with 3 columns specifying start time/end time/height above vent for each phase. This option is useful in case of many source phases

SOURCE_END_(HOURS_AFTER_00) = (float_list)
- List of source end times for each phase
- Note: If a file is provided in SOURCE_START_(HOURS_AFTER_00), this line is ignored

LON_VENT = (float)
- Vent longitude

LAT_VENT = (float)
- Vent latitude

VENT_HEIGHT_(M) = (float)
- Height of the vent in meters a.s.l.

HEIGHT_ABOVE_VENT_(M) = (float_list)
- List of source heights in meters above the vent for each eruptive phase
- Note: The plume heights must be lower than the top of the computational domain. If a file is provided in SOURCE_START_(HOURS_AFTER_00), this line is ignored

MASS_FLOW_RATE_(KGS) = (options)
- Defines the MFR or how it should be computed (i.e. derived from column height)
- Input options:
  - (float_list): Mass flow rate in kg/s for each eruptive phase
  - ESTIMATE-MASTIN: MFR is computed using the empirical fit by Mastin et al. (2009)
  - ESTIMATE-WOODHOUSE: MFR is computed using the empirical fit by Woodhouse et al. (2013)
  - ESTIMATE-DEGRUYTER: MFR is computed using the empirical fit by Degruyter and Bonadonna (2012)

ALFA_PLUME = (float)
- Entrainment coefficient
- Note: Only used if MASS_FLOW_RATE_(KGS) is set to ESTIMATE-WOODHOUSE or ESTIMATE-DEGRUYTER

BETA_PLUME = (float)
- Entrainment coefficient
- Note: Only used if MASS_FLOW_RATE_(KGS) is set to ESTIMATE-WOODHOUSE or ESTIMATE-DEGRUYTER

EXIT_TEMPERATURE_(K) = (float)
- Mixture temperature
- Note: Only used if SOURCE_TYPE = PLUME or to estimate MFR

EXIT_WATER_FRACTION_(%) = (float)
- Total (magmatic) water fraction
- Note: Only used if SOURCE_TYPE = PLUME or to estimate MFR

Sub-block IF_SUZUKI_SOURCE
Used when SOURCE_TYPE = SUZUKI

A = (float_list)
- List of values giving the parameter A in the Suzuki distribution (Pfeiffer et al., 2005) for each phase

L = (float_list)
- List of values giving the parameter lambda in the Suzuki distribution (Pfeiffer et al., 2005) for each phase

Sub-block IF_TOP-HAT_SOURCE
Used when SOURCE_TYPE = TOP-HAT

THICKNESS_(M) = (float_list)
- List of thicknesses of the emission slab in meters for each phase

Sub-block IF_PLUME_SOURCE
Used when SOURCE_TYPE = PLUME

SOLVE_PLUME_FOR = (options)
- Configure the FPLUME model embedded in FALL3D
- Input options:
  - MFR: SetSrc solves for mass flow rate given the column height (inverse problem)
  - HEIGHT: SetSrc solves for column height given the mass flow rate

MFR_SEARCH_RANGE = (float_list)
- Two values n and m such that 10^n and 10^m specify the range of MFR values admitted in the iterative solving procedure
- Note: Only used if SOLVE_PLUME_FOR = MFR

EXIT_VELOCITY_(MS) = (float_list)
- List of values of the magma exit velocity in m/s at the vent for each eruptive phase

EXIT_GAS_WATER_TEMPERATURE_(K) = (float_list)
- List of values of the exit gas water temperature in K for each eruptive phase

EXIT_LIQUID_WATER_TEMPERATURE_(K) = (float_list)
- List of values of the exit liquid water temperature in K for each eruptive phase

EXIT_SOLID_WATER_TEMPERATURE_(K) = (float_list)
- List of values of the exit solid water temperature in K for each eruptive phase

EXIT_GAS_WATER_FRACTION_(%) = (float_list)
- List of values of the exit gas water fraction in percent for each eruptive phase

EXIT_LIQUID_WATER_FRACTION_(%) = (float_list)
- List of values of the exit liquid water fraction in percent for each eruptive phase

EXIT_SOLID_WATER_FRACTION_(%) = (float_list)
- List of values of the exit solid water fraction in percent for each eruptive phase

WIND_COUPLING = (options)
- Whether wind coupling is considered
- Input options:
  - YES: Enabled
  - NO: Vertical wind velocity profile is assumed zero

AIR_MOISTURE = (options)
- Air moisture
- Input options:
  - YES: Enabled
  - NO: Dry entrained air only

LATENT_HEAT = (options)
- Latent heat
- Input options:
  - YES: Enabled
  - NO: Latent heat contribution is neglected

REENTRAINMENT = (options)
- Reentrainment
- Input options:
  - YES: Enabled
  - NO: Particle reentrainment is neglected

BURSIK_FACTOR = (float)
- Bursik factor xi
- Note: If not given, assumed equal to 0.1

Z_MIN_WIND = (float)
- Ignore wind entrainment below this z value (low jet region)
- Note: If not given, assumed equal to 100

C_UMBRELLA = (float)
- Thickness of the umbrella region relative to Hb (>1)
- Note: If not given, assumed equal to 1.32

A_S = (options)
- Plume entrainment coefficient
- Input options:
  - CONSTANT (float_list): A list of two values for: (value jet, value plume)
  - KAMINSKI-R: TODO
  - KAMINSKI-C: TODO
  - OLD: TODO

A_V = (options)
- Plume entrainment coefficient
- Input options:
  - CONSTANT (float): A constant value is assumed
  - TATE: TODO
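For illustration, a single-phase Suzuki source with the MFR derived from column height could be specified as below. All coordinates and values are hypothetical:

```
SOURCE
   SOURCE_TYPE                   = SUZUKI
   SOURCE_START_(HOURS_AFTER_00) = 0.
   SOURCE_END_(HOURS_AFTER_00)   = 6.
   LON_VENT              = 15.00
   LAT_VENT              = 37.75
   VENT_HEIGHT_(M)       = 3300.
   HEIGHT_ABOVE_VENT_(M) = 6000.
   MASS_FLOW_RATE_(KGS)  = ESTIMATE-MASTIN
   IF_SUZUKI_SOURCE
      A = 4.
      L = 5.
```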
Block ENSEMBLE
This block is used by the SetEns task to generate the ensemble members

RANDOM_NUMBERS_FROM_FILE = (options)
- Indicate whether the ensemble should be reconstructed
- Input options:
  - YES: Read ensemble perturbations from *.ens files
  - NO: Reconstruct the ensemble and generate new *.ens files
- Note: If the *.ens files are not found, the option RANDOM_NUMBERS_FROM_FILE = NO will be used

PERTURBATE_COLUMN_HEIGHT = (options)
- Perturbate the eruption column height
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value

PERTURBATE_MASS_FLOW_RATE = (options)
- Perturbate the mass eruption rate
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: This option is deactivated for the SOURCE_TYPE = PLUME or MASS_FLOW_RATE_(KGS) = ESTIMATE-* options

PERTURBATE_SOURCE_START = (options)
- Perturbate the starting time of each eruptive phase
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value

PERTURBATE_SOURCE_DURATION = (options)
- Perturbate the duration of each eruptive phase
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value

PERTURBATE_TOP-HAT_THICKNESS = (options)
- Perturbate the top-hat thickness
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if SOURCE_TYPE = TOP-HAT

PERTURBATE_SUZUKI_A = (options)
- Perturbate the Suzuki coefficient A
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if SOURCE_TYPE = SUZUKI

PERTURBATE_SUZUKI_L = (options)
- Perturbate the Suzuki coefficient L
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if SOURCE_TYPE = SUZUKI

PERTURBATE_WIND = (options)
- Perturbate the horizontal wind components
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Zonal and meridional wind components are independently perturbed (two random numbers are generated)

PERTURBATE_DATA_INSERTION_CLOUD_HEIGHT = (options)
- Perturbate the cloud insertion height
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if INITIAL_CONDITION = INSERTION

PERTURBATE_DATA_INSERTION_CLOUD_THICKNESS = (options)
- Perturbate the cloud insertion thickness
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if INITIAL_CONDITION = INSERTION

PERTURBATE_FI_MEAN = (options)
- Perturbate the mean of the TGSD
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: For the bimodal distributions BIGAUSSIAN and ESTIMATE, only the fine subpopulation is perturbed

PERTURBATE_DIAMETER_AGGREGATES_(MIC) = (options)
- Perturbate the diameter of the particle aggregates
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if AGGREGATION_MODEL != NONE

PERTURBATE_DENSITY_AGGREGATES = (options)
- Perturbate the density of the particle aggregates
- Input options:
  - NO: Disable perturbation
  - RELATIVE: Perturbation range given as a fraction of the central value
  - ABSOLUTE: Perturbation range given as an absolute value
- Note: Only used if AGGREGATION_MODEL != NONE
Sub-block IF_PERTURBATE_COLUMN_HEIGHT
Used when PERTURBATE_COLUMN_HEIGHT != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in meters (PERTURBATE_COLUMN_HEIGHT = ABSOLUTE) or percent (PERTURBATE_COLUMN_HEIGHT = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_MASS_FLOW_RATE
Used when PERTURBATE_MASS_FLOW_RATE != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in kg/s (PERTURBATE_MASS_FLOW_RATE = ABSOLUTE) or percent (PERTURBATE_MASS_FLOW_RATE = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_SOURCE_START
Used when PERTURBATE_SOURCE_START != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in s (PERTURBATE_SOURCE_START = ABSOLUTE) or percent (PERTURBATE_SOURCE_START = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_SOURCE_DURATION
Used when PERTURBATE_SOURCE_DURATION != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in s (PERTURBATE_SOURCE_DURATION = ABSOLUTE) or percent (PERTURBATE_SOURCE_DURATION = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_TOP-HAT_THICKNESS
Used when PERTURBATE_TOP-HAT_THICKNESS != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in meters (PERTURBATE_TOP-HAT_THICKNESS = ABSOLUTE) or percent (PERTURBATE_TOP-HAT_THICKNESS = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_SUZUKI_A
Used when PERTURBATE_SUZUKI_A != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range of the dimensionless parameter A in absolute units (PERTURBATE_SUZUKI_A = ABSOLUTE) or percent (PERTURBATE_SUZUKI_A = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_SUZUKI_L
Used when PERTURBATE_SUZUKI_L != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range of the dimensionless parameter L in absolute units (PERTURBATE_SUZUKI_L = ABSOLUTE) or percent (PERTURBATE_SUZUKI_L = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_WIND
Used when PERTURBATE_WIND != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in m/s (PERTURBATE_WIND = ABSOLUTE) or percent (PERTURBATE_WIND = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_DATA_INSERTION_CLOUD_HEIGHT
Used when PERTURBATE_DATA_INSERTION_CLOUD_HEIGHT != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in meters (PERTURBATE_DATA_INSERTION_CLOUD_HEIGHT = ABSOLUTE) or percent (PERTURBATE_DATA_INSERTION_CLOUD_HEIGHT = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_DATA_INSERTION_CLOUD_THICKNESS
Used when PERTURBATE_DATA_INSERTION_CLOUD_THICKNESS != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in meters (PERTURBATE_DATA_INSERTION_CLOUD_THICKNESS = ABSOLUTE) or percent (PERTURBATE_DATA_INSERTION_CLOUD_THICKNESS = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_FI_MEAN
Used when PERTURBATE_FI_MEAN != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in phi units (PERTURBATE_FI_MEAN = ABSOLUTE) or percent (PERTURBATE_FI_MEAN = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_DIAMETER_AGGREGATES_(MIC)
Used when PERTURBATE_DIAMETER_AGGREGATES_(MIC) != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in microns (PERTURBATE_DIAMETER_AGGREGATES_(MIC) = ABSOLUTE) or percent (PERTURBATE_DIAMETER_AGGREGATES_(MIC) = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value

Sub-block IF_PERTURBATE_DENSITY_AGGREGATES
Used when PERTURBATE_DENSITY_AGGREGATES != NO

PERTURBATION_RANGE = (float)
- Define the perturbation range in kg/m3 (PERTURBATE_DENSITY_AGGREGATES = ABSOLUTE) or percent (PERTURBATE_DENSITY_AGGREGATES = RELATIVE)

PDF = (options)
- Define the sampling Probability Density Function (PDF)
- Input options:
  - UNIFORM: Uniform (constant) PDF
  - GAUSSIAN: Gaussian PDF centred at the reference value
Block ENSEMBLE_POST
This block is used by the PosEns task for post-processing ensemble runs

POSTPROCESS_MEMBERS = (options)
- Define whether individual member results are included in the output file *.ens.nc
- Input options:
  - YES: Include individual members
  - NO: Exclude individual members
- Note: The option POSTPROCESS_MEMBERS = YES may result in very large files

POSTPROCESS_MEAN = (options)
- Postprocess the ensemble mean in the output file *.ens.nc
- Input options:
  - YES: Include ensemble mean
  - NO: Exclude ensemble mean

POSTPROCESS_LOGMEAN = (options)
- Postprocess the ensemble (log) mean in the output file *.ens.nc
- Input options:
  - YES: Include ensemble (log) mean
  - NO: Exclude ensemble (log) mean

POSTPROCESS_MEDIAN = (options)
- Postprocess the ensemble median in the output file *.ens.nc
- Input options:
  - YES: Include ensemble median
  - NO: Exclude ensemble median

POSTPROCESS_STANDARD_DEV = (options)
- Postprocess the ensemble standard deviation in the output file *.ens.nc
- Input options:
  - YES: Include ensemble standard deviation
  - NO: Exclude ensemble standard deviation

POSTPROCESS_PROBABILITY = (options)
- Generate probabilistic forecasts based on thresholds (see below) in the output file *.ens.nc. Probabilities are obtained by counting the fraction of ensemble members exceeding a given threshold
- Input options:
  - YES: Include probabilistic forecasts
  - NO: Exclude probabilistic forecasts

POSTPROCESS_PERCENTILES = (options)
- Generate percentile forecasts in the output file *.ens.nc
- Input options:
  - YES: Include percentile forecasts
  - NO: Exclude percentile forecasts

Sub-block IF_POSTPROCESS_PROBABILITY
Used when POSTPROCESS_PROBABILITY = YES

CONCENTRATION_THRESHOLDS_(MG/M3) = (float_list)
- List of values giving the concentration thresholds
- Note: A probability forecast map is generated for each value

COLUMN_MASS_THRESHOLDS_(G/M2) = (float_list)
- List of values giving the column mass thresholds for species TEPHRA
- Note: A probability forecast map is generated for each value

COLUMN_MASS_THRESHOLDS_(DU) = (float_list)
- List of values giving the column mass thresholds for species SO2
- Note: A probability forecast map is generated for each value

GROUND_LOAD_THRESHOLDS_(KG/M2) = (float_list)
- List of values giving the deposit ground load thresholds for species TEPHRA
- Note: A probability forecast map is generated for each value

Sub-block IF_POSTPROCESS_PERCENTILES
Used when POSTPROCESS_PERCENTILES = YES

PERCENTILE_VALUES_(%) = (float_list)
- List of values giving the percentiles in percent
- Note: A percentile forecast map is generated for each value
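A hypothetical post-processing setup requesting the ensemble mean plus probability and percentile maps could be written as follows (thresholds and percentiles are illustrative only):

```
ENSEMBLE_POST
   POSTPROCESS_MEMBERS      = NO
   POSTPROCESS_MEAN         = YES
   POSTPROCESS_MEDIAN       = YES
   POSTPROCESS_PROBABILITY  = YES
   POSTPROCESS_PERCENTILES  = YES
   IF_POSTPROCESS_PROBABILITY
      CONCENTRATION_THRESHOLDS_(MG/M3) = 2 4
      COLUMN_MASS_THRESHOLDS_(G/M2)    = 0.5 2
   IF_POSTPROCESS_PERCENTILES
      PERCENTILE_VALUES_(%) = 5 50 95
```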
Block MODEL_PHYSICS
This block defines the specific variables related to physics in the FALL3D model

LIMITER = (options)
- Flux limiter option
- Input options:
  - MINMOD: MINMOD option
  - SUPERBEE: SUPERBEE option
  - OSPRE: OSPRE option

TIME_MARCHING = (options)
- Time integration scheme
- Input options:
  - EULER: Euler (1st order)
  - RUNGE-KUTTA: Runge-Kutta (4th order)

CFL_CRITERION = (options)
- Courant criterion (critical time step)
- Input options:
  - ONE_DIMENSIONAL: Minimum for each dimension
  - ALL_DIMENSIONS: Minimum for all dimensions (default)

CFL_SAFETY_FACTOR = (float)
- Safety factor for the critical time step
- Optional
- Note: Default = 0.9

TERMINAL_VELOCITY_MODEL = (options)
- Parametrization for terminal settling velocity estimation
- Input options:
  - ARASTOOPOUR: TODO
  - GANSER: TODO
  - WILSON: TODO
  - DELLINO: TODO
  - PFEIFFER: TODO
  - DIOGUARDI2017: TODO
  - DIOGUARDI2018: TODO

HORIZONTAL_TURBULENCE_MODEL = (options)
- Type of parametrization used to compute the horizontal diffusion
- Input options:
  - CONSTANT (float): Constant diffusion coefficient in m^2/s
  - CMAQ: CMAQ option
  - RAMS: RAMS option

VERTICAL_TURBULENCE_MODEL = (options)
- Type of parametrization used to compute the vertical diffusion
- Input options:
  - CONSTANT (float): Constant diffusion coefficient in m^2/s
  - SIMILARITY: Equations based on similarity theory

RAMS_CS = (float)
- Parameter CS required by the RAMS horizontal diffusion parametrization
- Note: Only used if HORIZONTAL_TURBULENCE_MODEL = RAMS

WET_DEPOSITION = (options)
- Defines whether the wet deposition model is enabled
- Input options:
  - YES: Enable wet deposition
  - NO: Disable wet deposition

DRY_DEPOSITION = (options)
- Defines whether the dry deposition model is enabled
- Input options:
  - YES: Enable dry deposition
  - NO: Disable dry deposition

GRAVITY_CURRENT = (options)
- Defines whether the gravity current model is enabled
- Input options:
  - YES: Enable gravity current
  - NO: Disable gravity current
- Note: Only for TEPHRA species

Sub-block IF_GRAVITY_CURRENT
Used when GRAVITY_CURRENT = YES

C_FLOW_RATE = (float)
- Empirical constant for the volumetric flow rate at the Neutral Buoyancy Level (NBL)

LAMBDA_GRAV = (float)
- Empirical constant for the gravity current model

K_ENTRAIN = (float)
- Entrainment coefficient for the gravity current model

BRUNT_VAISALA = (float)
- Brunt-Väisälä frequency accounting for ambient stratification

GC_START_(HOURS_AFTER_00) = (float)
- Gravity current starting time

GC_END_(HOURS_AFTER_00) = (float)
- Gravity current end time
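A representative physics configuration might look like this (the choices shown are illustrative, not defaults):

```
MODEL_PHYSICS
   LIMITER                     = SUPERBEE
   TIME_MARCHING               = RUNGE-KUTTA
   CFL_CRITERION               = ALL_DIMENSIONS
   CFL_SAFETY_FACTOR           = 0.9
   TERMINAL_VELOCITY_MODEL     = GANSER
   HORIZONTAL_TURBULENCE_MODEL = CMAQ
   VERTICAL_TURBULENCE_MODEL   = SIMILARITY
   WET_DEPOSITION              = YES
   DRY_DEPOSITION              = YES
   GRAVITY_CURRENT             = NO
```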
Block MODEL_OUTPUT
This block is read by task FALL3D and defines specific variables related to output strategy
PARALLEL_IO
= (options)- Use parallel Input/Output for netCDF files
- Input options:
YES
: Enable parallel I/ONO
: Disable parallel I/O
- Note: The option
PARALLEL_IO
=YES
requires the installation of the high-performance parallel I/O library for accessing NetCDF files
LOG_FILE_LEVEL
= (options)- Level of detail in the FALL3D log file
- Input options:
NONE
: TODONORMAL
: TODOFULL
: TODO
RESTART_TIME_INTERVAL_(HOURS)
= (options)- Define the restart file output frequency
- Input options:
- (float): Frequency in hours
NONE
: Restart file not writtenEND_ONLY
: Restart file written only at the end of the simulation
OUTPUT_JSON_FILES
= (options)- Generate json files
- Optional
- Input options:
YES
: Generate json filesNO
: Do not generate json files
- Note: Only for tasks SetTGSD SetDbs SetSrc FALL3D
OUTPUT_INTERMEDIATE_FILES
= (options)- Generate intermediate files
- Input options:
YES
: Generate intermediate filesNO
: Do not generate intermediate files
- Note: Only for task ALL can be set
OUTPUT_INTERMEDIATE_FILES
=NO
OUTPUT_TIME_START_(HOURS)
= (options)- Start time for output
- Input options:
- (float): Time (in hours) from which the output file is written
RUN_START
: Start writing output file from the beginning
OUTPUT_TIME_INTERVAL_(HOURS)
= (float)- Time period of model output in hours
OUTPUT_3D_CONCENTRATION
= (options)- Specify whether the 3D concentration field should be written in the output file
- Input options:
YES
: Write the 3D concentration fieldNO
: Do not write the 3D concentration field
OUTPUT_3D_CONCENTRATION_BINS
= (options)- Specify whether the 3D concentration field for each bin should be written in the output file
- Input options:
YES
: Write the 3D concentration field for each binNO
: Do not write the 3D concentration field for each bin
OUTPUT_SURFACE_CONCENTRATION
= (options)- Specify whether the surface concentration field should be written in the output file
- Input options:
YES
: Write the surface concentration fieldNO
: Do not write the surface concentration field
OUTPUT_COLUMN_LOAD
= (options)- Specify whether the column mass load field should be written in the output file
- Input options:
YES
: Write the column mass load fieldNO
: Do not write the column mass load field
OUTPUT_CLOUD_TOP
= (options)- Specify whether the cloud top height field should be written in the output file
- Input options:
YES
: Write the cloud top height fieldNO
: Do not write the cloud top height field
OUTPUT_GROUND_LOAD
= (options)
- Specify whether the deposit mass load field should be written in the output file
- Input options:
  - YES: Write the deposit mass load field
  - NO: Do not write the deposit mass load field

OUTPUT_GROUND_LOAD_BINS = (options)
- Specify whether the deposit mass load field for each bin should be written in the output file
- Input options:
  - YES: Write the deposit mass load field for each bin
  - NO: Do not write the deposit mass load field for each bin

OUTPUT_WET_DEPOSITION = (options)
- Specify whether the wet deposition mass field should be written in the output file
- Input options:
  - YES: Write the wet deposition mass field
  - NO: Do not write the wet deposition mass field

TRACK_POINTS = (options)
- Specify whether the time series for the tracking points should be written
- Input options:
  - YES: Generate the tracking point file
  - NO: Do not generate the tracking point file

TRACK_POINTS_FILE = (string)
- Path to the file with the list of tracked points
- Note: Used only if TRACK_POINTS = YES

OUTPUT_CONCENTRATION_AT_XCUTS = (options)
- Output concentration at the x-coordinate values specified in X-VALUES
- Input options:
  - YES: Output concentration cuts
  - NO: Do not output concentration cuts

OUTPUT_CONCENTRATION_AT_YCUTS = (options)
- Output concentration at the y-coordinate values specified in Y-VALUES
- Input options:
  - YES: Output concentration cuts
  - NO: Do not output concentration cuts

OUTPUT_CONCENTRATION_AT_ZCUTS = (options)
- Output concentration at the z-coordinate values specified in Z-VALUES
- Input options:
  - YES: Output concentration cuts
  - NO: Do not output concentration cuts

OUTPUT_CONCENTRATION_AT_FL = (options)
- Output concentration at the flight levels specified in FL-VALUES
- Input options:
  - YES: Output concentration cuts
  - NO: Do not output concentration cuts

X-VALUES = (float_list)
- List of x-coordinate values read when OUTPUT_CONCENTRATION_AT_XCUTS = YES

Y-VALUES = (float_list)
- List of y-coordinate values read when OUTPUT_CONCENTRATION_AT_YCUTS = YES

Z-VALUES = (float_list)
- List of z-coordinate values read when OUTPUT_CONCENTRATION_AT_ZCUTS = YES

FL-VALUES = (float_list)
- List of flight level values read when OUTPUT_CONCENTRATION_AT_FL = YES
Block MODEL_VALIDATION
This block is read by task PosVal to perform automatic model validation with a set of quantitative and categorical metrics
OBSERVATIONS_TYPE = (options)
- Type of observations
- Input options:
  - SATELLITE_DETECTION: Satellite yes/no flag contours
  - SATELLITE_RETRIEVAL: Satellite quantitative retrievals of TEPHRA/SO2 species
  - DEPOSIT_CONTOURS: Deposit isopach/isopleth contours read from a gridded netCDF file
  - DEPOSIT_POINTS: Deposit point-wise observations

OBSERVATIONS_FILE = (string)
- Path to the file with observations

OBSERVATIONS_DICTIONARY_FILE = (string)
- Path to the observations dictionary file
- Optional
- Note: Examples can be found in Other/Dep

RESULTS_FILE = (string)
- Path to the FALL3D output file with simulation results (*.res.nc or *.ens.nc)

Sub-block IF_OBSERVATIONS_TYPE_SATELLITE
Used when OBSERVATIONS_TYPE = SATELLITE_DETECTION or OBSERVATIONS_TYPE = SATELLITE_RETRIEVAL

COLUMN_MASS_OBSERVATION_THRESHOLD_(G/M2) = (float)
- Column mass threshold in g/m2 for TEPHRA species
- Note: It should be consistent with COLUMN_MASS_THRESHOLDS_(G/M2) for ensemble runs

COLUMN_MASS_OBSERVATION_THRESHOLD_(DU) = (float)
- Column mass threshold in DU for SO2 species
- Note: It should be consistent with COLUMN_MASS_THRESHOLDS_(DU) for ensemble runs

Sub-block IF_OBSERVATIONS_TYPE_DEPOSIT
Used when OBSERVATIONS_TYPE = DEPOSIT_CONTOURS or OBSERVATIONS_TYPE = DEPOSIT_POINTS

GROUND_LOAD_OBSERVATION_THRESHOLD_(KG/M2) = (float)
- Deposit load threshold in kg/m2 for TEPHRA species
- Note: It should be consistent with GROUND_LOAD_THRESHOLDS_(KG/M2) for ensemble runs
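The thresholds above turn the modelled and observed fields into yes/no exceedance masks, from which categorical metrics can be computed via a contingency table. A minimal sketch of two such scores, probability of detection (POD) and false alarm ratio (FAR) — an illustration only, not the actual PosVal implementation:

```python
def categorical_scores(model, obs, threshold):
    """Threshold both fields into yes/no masks and return two simple
    contingency-table scores: probability of detection (POD) and
    false alarm ratio (FAR)."""
    m = [v >= threshold for v in model]
    o = [v >= threshold for v in obs]
    hits = sum(a and b for a, b in zip(m, o))                 # predicted and observed
    misses = sum((not a) and b for a, b in zip(m, o))         # observed, not predicted
    false_alarms = sum(a and (not b) for a, b in zip(m, o))   # predicted, not observed
    pod = hits / (hits + misses) if (hits + misses) else float("nan")
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")
    return pod, far

# Hypothetical deposit loads (kg/m2) at four observation points
pod, far = categorical_scores(
    model=[1.5, 0.2, 3.0, 0.8],
    obs=[2.0, 0.1, 2.5, 1.2],
    threshold=1.0,  # cf. GROUND_LOAD_OBSERVATION_THRESHOLD_(KG/M2)
)
```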
Hands-on course
The hands-on course introduces numerical modelling with FALL3D and is organised into the following sessions:
- Session 1: Meteorological data
- Session 2: Deterministic simulation
- Session 3: Ensemble simulations
MareNostrum4
The exercises will be carried out on MareNostrum, the most powerful supercomputer in Spain, hosted by the Barcelona Supercomputing Center. Specifically, we'll use MareNostrum4, a supercomputer based on Intel Xeon Platinum processors from the Skylake generation. It is a Lenovo system composed of SD530 compute racks interconnected by an Intel Omni-Path high-performance network and running SuSE Linux Enterprise Server as its operating system. Its current Linpack Rmax performance is 6.2272 petaflops.
This general-purpose block consists of 48 racks housing 3456 nodes, with a grand total of 165,888 processor cores and 390 terabytes of main memory. Each compute node is equipped with two Intel Xeon Platinum 8160 CPUs (24 cores each at 2.10 GHz), for a total of 48 cores per node. For further information, please refer to the User Guide.
Log in to the cluster
You can connect to MareNostrum using three public login nodes:
- mn1.bsc.es
- mn2.bsc.es
- mn3.bsc.es
All connections must be done through SSH (Secure SHell), for example:
ssh {username}@mn1.bsc.es
Notes:
- On Windows machines you can use PuTTY, the best-known Windows SSH client. See this website for more details.
Directories and file systems
There are different partitions of disk space with specific size limits and usage policies. The GPFS (General Parallel File System) is a distributed networked file system and can be accessed from all the nodes. The available GPFS directories and file systems are:
- /gpfs/home: after login, this is the default work area where users can keep source codes, scripts, and other personal data. It is not recommended for running jobs; please run your jobs under your group's /gpfs/projects or /gpfs/scratch instead.
- /gpfs/projects: intended for data sharing between users of the same group or project. All members of the group share the space quota.
- /gpfs/scratch: each user has a directory under this partition, e.g. to store temporary job files during execution. All members of the group share the space quota.
For example, if your group is nct01, you can create the following aliases to access your personal directories:
alias projects='cd /gpfs/projects/nct01/$USER'
alias scratch='cd /gpfs/scratch/nct01/$USER'
Running jobs
Job submission to the queue system is done through Slurm commands, for example:
To submit a job:
sbatch {job_script}
To show all the submitted jobs:
squeue
To cancel a job:
scancel {job_id}
There are several queues present in the machines and different users may access different queues. Queues differ in the limits they impose on the number of cores per job and on job duration. You can check at any time which queues you have access to, and their limits, using:
bsc_queues
Software Environment
Modules environment
The Environment Modules package provides a dynamic modification of a user's environment via modulefiles. Each modulefile contains the information needed to configure the shell for an application or a compilation. Modules can be loaded and unloaded dynamically, in a clean fashion.
Use module list to show the loaded modules and module avail to show the available modules.
Modules can be invoked in two ways: by name alone or by name and version. Invoking them by name implies loading the default module version. This is usually the most recent version that has been tested to be stable (recommended) or the only version available. For example:
module load intel
Invoking by name and version loads the specified version of the application. As of this writing, the previous command and the following one load the same module:
module load intel/2017.4
Compilers
The latest Intel compilers provide the best possible optimizations for the Xeon Platinum architecture. By default, when starting a new session on the system, the basic modules for the Intel suite are loaded automatically: the compilers (intel/2017.4), the Intel MPI software stack (impi/2017.4), and the Math Kernel Library MKL (mkl/2017.4). Alternatively, you can load the modules explicitly using:
module load intel/2017.4
module load impi/2017.4
The corresponding optimization flags for Fortran are
FCFLAGS="-xCORE-AVX512 -mtune=skylake"
As the login nodes have the same architecture as the compute nodes, you can also use the -xHost flag, which enables all optimizations available on the compile host. In addition, the Intel compilers optimize more aggressively when the -O2 flag is specified:
FCFLAGS="-xCORE-AVX512 -mtune=skylake -xHost -O2"
Training course material
In order to copy the course material, go to your own project folder
cd /gpfs/projects/nct01/$USER
and copy this folder:
cp -r /gpfs/projects/nct00/nct00014/FALL3D_material .
Next, you can load the required modules and environment variables with the commands:
cd FALL3D_material
source set_env.sh
Meteorological data
In order to run FALL3D, you need to provide meteorological data as an input. Multiple datasets are supported, including global model forecasts (GFS), global analyses and re-analyses (ERA5), and mesoscale models (WRF-ARW). In this session we'll retrieve global forecasts from two datasets:
- GFS: The Global Forecast System (global weather forecast model)
- GEFS: The Global Ensemble Forecast System (global weather forecast model)
The GFS (Global Forecast System) is the best-known global weather model and is updated every six hours (00Z, 06Z, 12Z, 18Z). It is produced by the National Centers for Environmental Prediction (NCEP) of the United States National Oceanic and Atmospheric Administration (NOAA).
In this session you need to copy some files to your projects folder:
cd /gpfs/projects/nct01/$USER
cp -r /gpfs/projects/nct00/nct00014/FALL3D_material/hands-on-1 .
cd hands-on-1
The filtering problem
The filtering problem is the most common estimation problem in geophysical applications, and is characterized by sequential processing in which measurements are utilized as they become available.
- Filtering problem: only the observations up to the time of the current sample are used (e.g. GFS)
- Smoothing problem: all observation samples (including future ones) are used. This approach is used to produce reanalysis datasets
Notes:
- Atmospheric reanalysis is a method to reconstruct the past weather by combining historical observations with a dynamical model. It provides a physically and dynamically coherent description of the state of the atmosphere
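The distinction can be illustrated with a toy estimator (a plain running average, not a real assimilation scheme): the filtered estimate at each time uses only past and current samples, while the smoothed estimate uses the whole record:

```python
def filtered_estimates(obs):
    """Causal estimate: at each step, use only observations up to the current
    time (this is what an operational forecast system can do)."""
    out, total = [], 0.0
    for t, y in enumerate(obs, start=1):
        total += y
        out.append(total / t)
    return out

def smoothed_estimates(obs):
    """Retrospective estimate: every step uses the whole record, including
    future samples (this is what a reanalysis can do)."""
    mean = sum(obs) / len(obs)
    return [mean] * len(obs)

obs = [1.0, 3.0, 2.0, 4.0]          # a noisy scalar time series
filt = filtered_estimates(obs)      # [1.0, 2.0, 2.0, 2.5]
smth = smoothed_estimates(obs)      # [2.5, 2.5, 2.5, 2.5]
```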
Getting GFS and GEFS data (local machine)
Let's see how to get weather forecast data from GFS using the scripts gfs.py and gefs.py included in the FALL3D distribution under the folder Other/Meteo/Utils.
Notes:
- Unfortunately, there's no outgoing internet connection from the cluster, which prevents the use of external repositories directly from MN4 machines. Consequently, meteorological data must be downloaded from a remote server to a local machine and then copied to the cluster.
A. Installing Python packages
The Python scripts above require the fall3dutil package, which can be installed on Linux and macOS systems from the Python Package Index (PyPI) using pip. Python virtual environments allow packages to be installed in an isolated location for a particular application. To create a virtual environment on a typical Linux system, use the venv module:
python3 -m venv fall3d_env
source fall3d_env/bin/activate
This will create a new virtual environment in the fall3d_env subdirectory and configure the current shell to use it as the default Python environment. For more information, please visit the Python Packaging User Guide.
Once the virtual environment is activated, you can install the latest version of the fall3dutil package using:
pip install fall3dutil
B. Configuration
Let's obtain GFS data for two domains centered around the Etna (Italy) and Fuego (Guatemala) volcanoes. To this purpose, we define a configuration file config.inp with the download parameters:
[etna]
date = 20240306 # Initial date
lon = 8 36 # Longitude range
lat = 20 42 # Latitude range
time = 0 48 # Time forecast range
step = 3 # Time resolution
cycle = 0 # Cycle (00Z, 06Z, 12Z or 18Z)
res = 0.5 # Resolution in degrees
[fuego]
date = 20240306 # Initial date
lon = -130 -52 # Longitude range
lat = 0 33 # Latitude range
ens = 1 12 # Ensemble member range
time = 0 48 # Time forecast range
step = 3 # Time resolution
cycle = 0 # Cycle (00Z, 06Z, 12Z or 18Z)
res = 0.5 # Resolution in degrees
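The block/key layout of config.inp follows INI conventions, so you can inspect it with Python's standard configparser module. A short sketch (the actual parsing is handled by fall3dutil; the snippet below only illustrates the file format):

```python
import configparser

# A shortened copy of the [etna] block from config.inp (comments after '#')
text = """
[etna]
date = 20240306   # Initial date
lon = 8 36        # Longitude range
lat = 20 42       # Latitude range
cycle = 0         # Cycle (00Z, 06Z, 12Z or 18Z)
res = 0.5         # Resolution in degrees
"""

cfg = configparser.ConfigParser(inline_comment_prefixes="#")
cfg.read_string(text)

# Range values are whitespace-separated pairs
lon_min, lon_max = [float(v) for v in cfg["etna"]["lon"].split()]
```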
C. Getting GFS data
To obtain GFS data to simulate a volcanic eruption at Etna use the following command:
./gfs.py --input config.inp --block etna --verbose
We get a list of GRIB2 files for every forecast time:
gfs.t00z.pgrb2full.0p50.f000 gfs.t00z.pgrb2full.0p50.f012 gfs.t00z.pgrb2full.0p50.f024 gfs.t00z.pgrb2full.0p50.f036 gfs.t00z.pgrb2full.0p50.f048
gfs.t00z.pgrb2full.0p50.f003 gfs.t00z.pgrb2full.0p50.f015 gfs.t00z.pgrb2full.0p50.f027 gfs.t00z.pgrb2full.0p50.f039
gfs.t00z.pgrb2full.0p50.f006 gfs.t00z.pgrb2full.0p50.f018 gfs.t00z.pgrb2full.0p50.f030 gfs.t00z.pgrb2full.0p50.f042
gfs.t00z.pgrb2full.0p50.f009 gfs.t00z.pgrb2full.0p50.f021 gfs.t00z.pgrb2full.0p50.f033 gfs.t00z.pgrb2full.0p50.f045
D. Getting GEFS data
To obtain GEFS data to simulate a volcanic eruption at Fuego use the following command to retrieve an ensemble of weather forecasts:
./gefs.py --input config.inp --block fuego --verbose
For each ensemble member, we get a list of GRIB2 files (two files for every forecast time). For example, the files for the first ensemble member are:
gep01.t00z.pgrb2a.0p50.f000 gep01.t00z.pgrb2a.0p50.f021 gep01.t00z.pgrb2a.0p50.f042 gep01.t00z.pgrb2b.0p50.f012 gep01.t00z.pgrb2b.0p50.f033
gep01.t00z.pgrb2a.0p50.f003 gep01.t00z.pgrb2a.0p50.f024 gep01.t00z.pgrb2a.0p50.f045 gep01.t00z.pgrb2b.0p50.f015 gep01.t00z.pgrb2b.0p50.f036
gep01.t00z.pgrb2a.0p50.f006 gep01.t00z.pgrb2a.0p50.f027 gep01.t00z.pgrb2a.0p50.f048 gep01.t00z.pgrb2b.0p50.f018 gep01.t00z.pgrb2b.0p50.f039
gep01.t00z.pgrb2a.0p50.f009 gep01.t00z.pgrb2a.0p50.f030 gep01.t00z.pgrb2b.0p50.f000 gep01.t00z.pgrb2b.0p50.f021 gep01.t00z.pgrb2b.0p50.f042
gep01.t00z.pgrb2a.0p50.f012 gep01.t00z.pgrb2a.0p50.f033 gep01.t00z.pgrb2b.0p50.f003 gep01.t00z.pgrb2b.0p50.f024 gep01.t00z.pgrb2b.0p50.f045
gep01.t00z.pgrb2a.0p50.f015 gep01.t00z.pgrb2a.0p50.f036 gep01.t00z.pgrb2b.0p50.f006 gep01.t00z.pgrb2b.0p50.f027 gep01.t00z.pgrb2b.0p50.f048
gep01.t00z.pgrb2a.0p50.f018 gep01.t00z.pgrb2a.0p50.f039 gep01.t00z.pgrb2b.0p50.f009 gep01.t00z.pgrb2b.0p50.f030
Copy the GRIB2 files to the cluster, for example, using scp.
Processing GFS/GEFS data (cluster)
FALL3D requires input meteorological data in netCDF format. Consequently, the GFS data must be concatenated and converted from GRIB2 to obtain a single netCDF file. To this purpose, you can use wgrib2, a utility for reading and writing GRIB2 files, which is available on the cluster by loading the corresponding module:
module load wgrib
In addition, the wgrib2 command can be used to display the contents of a GRIB2 file. For example, run:
wgrib2 gfs.t00z.pgrb2full.0p50.f000
Converting GFS data
The following bash script (grib2nc.sh) invokes wgrib2 and can be used to concatenate and convert a list of GRIB2 files:
########## Edit header ##########
WGRIBEXE=wgrib2
TABLEFILE=TABLES/gfs_0p50.levels
TMIN=0
TMAX=48
STEP=3
CYCLE=0
GRIBPATH=GFS
UPDATE_FNAME (){
fname=${GRIBPATH}/gfs.t${CYCLE}z.pgrb2full.0p50.f${HOUR}
OUTPUTFILE=etna.gfs.nc
}
#################################
variables="HGT|TMP|RH|UGRD|VGRD|VVEL|PRES|PRATE|LAND|HPBL|SFCR"
CYCLE=$(printf %02d $CYCLE)
for i in $(seq ${TMIN} ${STEP} ${TMAX})
do
HOUR=$(printf %03d $i)
UPDATE_FNAME
echo "Processing ${fname}..."
${WGRIBEXE} "${fname}" \
-match ":(${variables}):" \
-match ":(([0-9]*[.])?[0-9]+ mb|surface|2 m above ground|10 m above ground):" \
-nc_table "${TABLEFILE}" \
-append \
-nc3 \
-netcdf \
"${OUTPUTFILE}" > wgrib.log
done
A single file etna.gfs.nc will be generated, which can be read by FALL3D.
Converting GEFS data
The following script grib2nc-ens.sh can be used to concatenate and convert GEFS data in GRIB2 format:
########## Edit header ##########
WGRIBEXE=wgrib2
TABLEFILE=TABLES/gefs_0p50.levels
ENSMIN=1
ENSMAX=12
TMIN=0
TMAX=48
STEP=3
CYCLE=0
GRIBPATH=GEFS
UPDATE_FNAME (){
fnameA=${GRIBPATH}/gep${ENS}.t${CYCLE}z.pgrb2a.0p50.f${HOUR}
fnameB=${GRIBPATH}/gep${ENS}.t${CYCLE}z.pgrb2b.0p50.f${HOUR}
OUTPUTFILE=fuego.gefs_p${ENS}.nc
}
#################################
variables="HGT|TMP|RH|UGRD|VGRD|VVEL|PRES|PRATE|LAND|HPBL|SFCR"
CYCLE=$(printf %02d $CYCLE)
#
# Ensemble member loop
#
for iens in $(seq ${ENSMIN} ${ENSMAX}); do
ENS=$(printf %02d $iens)
#
# Time loop
#
for i in $(seq ${TMIN} ${STEP} ${TMAX}); do
HOUR=$(printf %03d $i)
UPDATE_FNAME
echo "Processing ${fnameA}..."
${WGRIBEXE} "${fnameA}" \
-match ":(${variables}):" \
-match ":(([0-9]*[.])?[0-9]+ mb|surface|2 m above ground|10 m above ground):" \
-nc_table "${TABLEFILE}" \
-append \
-nc3 \
-netcdf \
"${OUTPUTFILE}" > wgrib.log
echo "Processing ${fnameB}..."
${WGRIBEXE} "${fnameB}" \
-match ":(${variables}):" \
-match ":(([0-9]*[.])?[0-9]+ mb|surface|2 m above ground|10 m above ground):" \
-nc_table "${TABLEFILE}" \
-append \
-nc3 \
-netcdf \
"${OUTPUTFILE}" >> wgrib.log
done
done
One netCDF file will be generated for each ensemble member: fuego.gefs_p01.nc, fuego.gefs_p02.nc, etc.
Quick visualization using ncview
You can use the ncview tool to generate a quick view of the netCDF files. First, load the required modules:
module load netcdf
module load udunits
module load gsl
module load nco
module load ncview
You can now execute ncview on the cluster:
ncview etna.gfs.nc
Notes:
- To run X apps remotely, log in to the remote server over SSH with the -X option, which enables X forwarding on the client end:
ssh -X {username}@server
Deterministic simulation
Let's simulate a volcanic eruption at Etna using GFS forecasts. The following input files are required for a single FALL3D run:
- etna.inp: FALL3D configuration file
- etna.gfs.nc: GFS weather data in netCDF format
- GFS.tbl: Dictionary file with GFS variable names (included in the FALL3D distribution under the folder Other/Meteo/Tables)
- JOB.cmd: A job script with a series of directives to inform the batch system about the characteristics of the job
- Fall3d.r8.x: FALL3D executable program
Copy the required files to your projects folder:
cd /gpfs/projects/nct01/$USER
cp -r /gpfs/projects/nct00/nct00014/FALL3D_material/hands-on-2 .
Enter the new folder and create a symlink to the FALL3D executable. That is, if the FALL3D installation path is $FALL3D_PATH, run the commands:
cd hands-on-2
ln -s $FALL3D_PATH/bin/Fall3d.r8.x .
Now you can run FALL3D!
Submitting jobs
The job script JOB.cmd contains a series of directives to inform the batch system about the characteristics of the job:
#!/bin/bash
#SBATCH --job-name=FALL3D
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err
#SBATCH --nodes=1
#SBATCH --ntasks=20
#SBATCH --time=00:10:00
#SBATCH --qos=training
#SBATCH --reservation=Computational24
module purge
module load intel/2017.4
module load impi/2017.4
module load netcdf
INPFILE="etna.inp"
FALLTASK="all"
NX=5
NY=2
NZ=2
srun ./Fall3d.r8.x ${FALLTASK} ${INPFILE} ${NX} ${NY} ${NZ}
You can submit the job with sbatch:
sbatch JOB.cmd
In this case, we are requesting 20 tasks for the job FALL3D, matching the domain decomposition NX x NY x NZ = 5 x 2 x 2 = 20.
Visualizing and analyzing model outputs
An output file etna.res.nc will be generated if the run has completed successfully. The ncdump program generates an ASCII representation of a netCDF file and can be used to explore the content of the FALL3D output file. First, load the netcdf module on the cluster:
module load netcdf
and run the following command to show the list of variables and some metadata information:
ncdump -h etna.res.nc
Quick visualization using ncview
Next, we use the ncview tool to generate a quick view of the model output. Load the required modules:
module load netcdf
module load udunits
module load gsl
module load nco
module load ncview
You can now execute ncview on the cluster:
ncview etna.res.nc
Notes:
- To run X apps remotely, log in to the remote server over SSH with the -X option, which enables X forwarding on the client end:
ssh -X {username}@server
Visualization using Python
The model results can be plotted using the Python package Cartopy. First, activate the following Anaconda environment:
module load anaconda
source activate volcanology
and run the Python script under the folder POSTPROCESSING:
python plot_map.py
Content of file plot_map.py:
import numpy as np
import xarray as xr
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm
import cartopy.crs as crs
import cartopy.feature as cfeature
###
### Parameters
###
minval = 0.1
key = "tephra_col_mass"
fname = "../etna.res.nc"
levels = np.arange(0.0,4,0.25)
vlon, vlat = 15.0, 37.75
cmap = plt.cm.RdYlBu_r
###
### Set minimum level
###
if minval>0: levels[0] = minval
###
### Read file
###
ds = xr.open_dataset(fname)
###
### Generate map
###
proj = crs.PlateCarree()
fig, ax = plt.subplots( subplot_kw={'projection': proj} )
###
### Add map features
###
BORDERS = cfeature.NaturalEarthFeature(
scale = '10m',
category = 'cultural',
name = 'admin_0_countries',
edgecolor = 'gray',
facecolor = 'none'
)
LAND = cfeature.NaturalEarthFeature(
'physical', 'land', '10m',
edgecolor = 'none',
facecolor = 'lightgrey',
alpha = 0.8
)
ax.add_feature(LAND,zorder=0)
ax.add_feature(BORDERS, linewidth=0.4)
###
### Add grid lines
###
gl = ax.gridlines(
crs = crs.PlateCarree(),
draw_labels = True,
linewidth = 0.5,
color = 'gray',
alpha = 0.5,
linestyle = '--')
gl.top_labels = False
gl.right_labels = False
gl.ylabel_style = {'rotation': 90}
###
### Add vent location
###
ax.plot(vlon,vlat,color='red',marker='^')
###
### Plot contours
###
cbar = None
for it in range(ds.time.size):
time_fmt = ds.isel(time=it)['time'].dt.strftime("%d/%m/%Y %H:%M").item()
ax.set_title(time_fmt, loc='right')
fc = ax.contourf(
ds.lon,ds.lat,ds.isel(time=it)[key],
levels = levels,
norm = BoundaryNorm(levels,cmap.N),
cmap = cmap,
extend = 'max',
transform = crs.PlateCarree()
)
###
### Generate colorbar
###
if not cbar:
cbar=fig.colorbar(fc,
orientation = 'horizontal',
label = r'Tephra column mass [$g~m^{-2}$]',
)
###
### Output plot
###
fname_plt = f"map_{it:03d}.png"
plt.savefig(fname_plt,dpi=300,bbox_inches='tight')
###
### Clear contours
###
for item in fc.collections: item.remove()
You can create an animated gif using the following command:
convert -delay 10 -loop 0 *.png animation.gif
Notes:
- The first time you run the Python script, the Cartopy package will try to download some data, including coastline information. Since there is no outgoing internet connection from the cluster, you can copy this data to your home folder:
cp -r /gpfs/projects/nct00/nct00014/share ~/.local
Ensemble simulations
Let's simulate a volcanic eruption at Fuego using GEFS forecasts. The following input files are required:
- fuego.inp: FALL3D configuration file
- fuego.gfs_pXX.nc: A list of GEFS files in netCDF format
- GFS.tbl: Dictionary file with GFS variable names (included in the FALL3D distribution under the folder Other/Meteo/Tables)
- JOB.cmd: A job script with a series of directives to inform the batch system about the characteristics of the job
- Fall3d.r8.x: FALL3D executable program
Copy the required files to your projects folder:
cd /gpfs/projects/nct01/$USER
cp -r /gpfs/projects/nct00/nct00014/FALL3D_material/hands-on-3 .
Enter the new folder and create a symlink to the FALL3D executable. That is, if the FALL3D installation path is $FALL3D_PATH, run the commands:
cd hands-on-3
ln -s $FALL3D_PATH/bin/Fall3d.r8.x .
Now you can run FALL3D!
Introduction and motivation
- Atmospheric dispersion models can provide realistic distributions of airborne volcanic ash and gases or tephra deposits
- Traditionally, operational forecast systems rely on volcanic ash transport and dispersal (VATD) models to produce deterministic forecasts
Why ensemble modelling?
- Uncertainty in model input parameters: Deterministic models are highly sensitive to uncertain model input parameters (e.g. eruption source parameters) and meteorological fields. We can take into account these uncertainties using ensemble modelling
- Quantification of model output uncertainty: Ensemble-based modelling allows one to characterise and quantify model output uncertainties. In addition to traditional forecasting products, the associated errors can be provided
- Improvement of forecast skill: Real observations can be incorporated into dispersal models using ensemble-based data assimilation techniques
- Source inversion: Different techniques for source term inversion have been proposed based on ensemble modelling
Running ensemble simulations
Ensemble simulations can be performed as a single parallel task. In order to perform ensemble runs, FALL3D must be executed with the optional argument -nens to define the ensemble size. For example, the following command will generate a 12-member ensemble and perform the FALL3D task for each ensemble member:
mpirun -np 12 ./Fall3d.r8.x FALL3D name.inp -nens 12
A new folder structure will be created and the results for each ensemble member will be organized in different sub-folders.
Submitting jobs
The job script JOB.cmd
contains a series of directives to inform the batch system about the characteristics of the job:
#!/bin/bash
#SBATCH --job-name=FALL3D
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err
#SBATCH --nodes=1
#SBATCH --ntasks=48
#SBATCH --time=00:15:00
#SBATCH --qos=training
#SBATCH --reservation=Computational24
module purge
module load intel/2017.4
module load impi/2017.4
module load netcdf
INPFILE="fuego.inp"
FALLTASK="all"
NX=2
NY=2
NZ=1
NENS=12
if [ "${NENS}" -gt 1 ] ; then
for i in $(seq ${NENS})
do
ENSDIR="$(printf "%04d" ${i})"
IENS="$(printf "%02d" ${i})"
echo "Creating folder ${ENSDIR}"
mkdir -p ${ENSDIR}
ln -sfr fuego.gfs_p${IENS}.nc ${ENSDIR}/fuego.gfs.nc
done
fi
srun ./Fall3d.r8.x ${FALLTASK} ${INPFILE} ${NX} ${NY} ${NZ} -nens ${NENS}
You can submit the job with sbatch:
sbatch JOB.cmd
In this case, we are requesting 48 tasks to run 12 (the ensemble size) instances of FALL3D, each decomposed into NX x NY x NZ = 2 x 2 x 1 = 4 MPI tasks (12 x 4 = 48).
Visualizing and analyzing model outputs
Once the model has run, the task PosEns can be executed to merge and post-process the outputs from the individual ensemble members (fuego.res.nc) in order to produce a single netCDF file containing ensemble-based deterministic and/or probabilistic outputs for all variables of interest (e.g. concentration at native model levels or at flight levels, cloud column mass, ground deposit load, etc.). Run the command:
mpirun -np 12 ./Fall3d.r8.x PosEns fuego.inp -nens 12
to generate a single ensemble output file: fuego.ens.nc. The content of this file depends on the ENSEMBLE_POSTPROCESS block definition in the configuration file:
--------------------
ENSEMBLE_POSTPROCESS
--------------------
!
POSTPROCESS_MEMBERS = yes
POSTPROCESS_MEAN = yes
POSTPROCESS_LOGMEAN = no
POSTPROCESS_MEDIAN = no
POSTPROCESS_STANDARD_DEV = no
POSTPROCESS_PROBABILITY = no
POSTPROCESS_PERCENTILES = no
!
IF_POSTPROCESS_PROBABILITY
CONCENTRATION_THRESHOLDS_(MG/M3) = 2
COLUMN_MASS_THRESHOLDS_(G/M2) = 1
COLUMN_MASS_THRESHOLDS_(DU) = 100
GROUND_LOAD_THRESHOLDS_(KG/M2) = 1
!
IF_POSTPROCESS_PERCENTILES
PERCENTILE_VALUES_(%) = 50
where you can enable/disable different deterministic and probabilistic outputs.
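For instance, when POSTPROCESS_PROBABILITY is enabled, the exceedance probability is essentially the fraction of ensemble members above the given threshold at each grid point. A minimal sketch of that computation with hypothetical data (illustrative only, not the FALL3D implementation):

```python
import numpy as np

# Hypothetical column mass (g/m2) from 4 ensemble members on a 2x2 grid
members = np.array([
    [[0.5, 2.0], [1.5, 0.0]],
    [[1.2, 1.8], [0.4, 0.0]],
    [[0.9, 2.5], [1.1, 0.2]],
    [[1.4, 0.7], [2.0, 0.0]],
])

threshold = 1.0  # cf. COLUMN_MASS_THRESHOLDS_(G/M2)

# Exceedance probability: fraction of members at or above the threshold per grid point
probability = (members >= threshold).mean(axis=0)
```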
Visualization using Python
The model results can be plotted using the Python package Cartopy. First, activate the following Anaconda environment:
module load anaconda
source activate volcanology
and run the following Python script under the folder POSTPROCESSING:
python plot_map.py
to plot the ensemble mean of the SO2 column mass for every time step.
Content of file plot_map.py:
import numpy as np
import xarray as xr
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm
import cartopy.crs as crs
import cartopy.feature as cfeature
###
### Parameters
###
minval = 10
key = "SO2_col_mass_mean"
fname = "../fuego.ens.nc"
levels = np.arange(0.0,3000,200)
vlon, vlat = -90.88, 14.473
cmap = plt.cm.RdYlBu_r
###
### Set minimum level
###
if minval>0: levels[0] = minval
###
### Read file
###
ds = xr.open_dataset(fname)
###
### Generate map
###
proj = crs.PlateCarree()
fig, ax = plt.subplots( subplot_kw={'projection': proj} )
###
### Add map features
###
BORDERS = cfeature.NaturalEarthFeature(
scale = '10m',
category = 'cultural',
name = 'admin_0_countries',
edgecolor = 'gray',
facecolor = 'none'
)
LAND = cfeature.NaturalEarthFeature(
'physical', 'land', '10m',
edgecolor = 'none',
facecolor = 'lightgrey',
alpha = 0.8
)
ax.add_feature(LAND,zorder=0)
ax.add_feature(BORDERS, linewidth=0.4)
###
### Add grid lines
###
gl = ax.gridlines(
crs = crs.PlateCarree(),
draw_labels = True,
linewidth = 0.5,
color = 'gray',
alpha = 0.5,
linestyle = '--')
gl.top_labels = False
gl.right_labels = False
gl.ylabel_style = {'rotation': 90}
###
### Add vent location
###
ax.plot(vlon,vlat,color='red',marker='^')
###
### Plot contours
###
ax.set_title("Ensemble mean", loc='left')
cbar = None
for it in range(ds.time.size):
time_fmt = ds.isel(time=it)['time'].dt.strftime("%d/%m/%Y %H:%M").item()
ax.set_title(time_fmt, loc='right')
fc = ax.contourf(
ds.lon,ds.lat,ds.isel(time=it)[key],
levels = levels,
norm = BoundaryNorm(levels,cmap.N),
cmap = cmap,
extend = 'max',
transform = crs.PlateCarree()
)
###
### Generate colorbar
###
if not cbar:
cbar=fig.colorbar(fc,
orientation = 'horizontal',
label = 'SO2 column mass [DU]',
)
###
### Output plot
###
fname_plt = f"map_{it:03d}.png"
plt.savefig(fname_plt,dpi=300,bbox_inches='tight')
###
### Clear contours
###
for item in fc.collections: item.remove()
You can create an animated gif using the following command:
convert -delay 10 -loop 0 *.png animation.gif
References
2023
- Mingari, L., Costa, A., Macedonio, G., and Folch, A., Reconstructing tephra fall deposits via ensemble-based data assimilation techniques, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2022-246, in review, 2022.
2022
-
Mingari, L., A. Folch, A. T. Prata, F. Pardini, G. Macedonio, and A. Costa, Data Assimilation of Volcanic Aerosol Observations Using FALL3D+PDAF, Atmos. Chem. Phys. 22 (3): 1773–92. https://doi.org/10.5194/acp-22-1773-2022, 2022.
-
Folch, Arnau, Leonardo Mingari, and Andrew T. Prata, Ensemble-Based Forecast of Volcanic Clouds Using FALL3D-8.1, Front. Earth Sci. 9. https://doi.org/10.3389/feart.2021.741841, 2022.
-
Titos, M., B. Martínez Montesinos, S. Barsotti, L. Sandri, A. Folch, L. Mingari, G. Macedonio, and A. Costa, Long-Term Hazard Assessment of Explosive Eruptions at Jan Mayen (Norway) and Implications for Air Traffic in the North Atlantic, Nat. Hazards Earth Syst. Sci. 22 (1): 139–63. https://doi.org/10.5194/nhess-22-139-2022, 2022.
2021
- Prata, A., Mingari, L., Folch, A., Costa, A., Macedonio, G., FALL3D-8.0: a computational model for atmospheric transport and deposition of particles and aerosols. Part II: model validation, Geoscientific Model Development, 14(1), 409–436, https://doi.org/10.5194/gmd-14-409-2021, 2021.
2020
- Mingari, L., Folch, A., Dominguez, L., Bonadonna, C., Volcanic ash resuspension in Patagonia: numerical simulations and observations, Atmosphere, 11, 977; https://doi.org/10.3390/atmos11090977, 2020.
- Folch, A., Mingari, L., Gutierrez, N., Hanzich, M., Costa, A., Macedonio, G., FALL3D-8.0: a computational model for atmospheric transport and deposition of particles and aerosols. Part I: model physics and numerics, Geoscientific Model Development, 13, 1431–1458, https://doi.org/10.5194/gmd-13-1431-2020, 2020.
- Osores, S., Ruiz, J., Folch, A., Collini, E., Volcanic ash forecast using ensemble-based data assimilation: the Ensemble Transform Kalman Filter coupled with FALL3D-7.2 model (ETKF-FALL3D, version 1.0), Geoscientific Model Development, https://doi.org/10.5194/gmd-2019-95, 2020.
2019
- Poulidis, A., Takemi, T., and Iguchi, M.: Experimental High-Resolution Forecasting of Volcanic Ash Hazard at Sakurajima, Japan, Journal of Disaster Research, 14, 786–797, https://doi.org/10.20965/jdr.2019.p0786, 2019.
- Vázquez, R., Bonasia, R., Folch, A., Arce, J.L., Macías, L., Tephra fallout hazard assessment at Tacaná volcano (Mexico), Journal of South American Earth Sciences, v.91, https://doi.org/10.1016/j.jsames.2019.02.013, 2019.
2018
- Poret, M. and Corradini, S. and Merucci, L. and Costa, A. and Andronico, D. and Montopoli, M. and Vulpiani, G. and Freret-Lorgeril, V., Reconstructing volcanic plume evolution integrating satellite and ground-based data: application to the 23 November 2013 Etna eruption, Atmospheric Chemistry and Physics, 18, 4695-4714, https://www.atmos-chem-phys.net/18/4695/2018, 2018.
2017
- Geyer, A., Martí, A., Giralt, S., Folch, A., Potential ash impact from Antarctic volcanoes: Insights from Deception Island’s most recent eruption, Nature Scientific Reports, 7, https://doi.org/10.1038/s41598-017-16630-9, 2017.
- Mingari, L. A., Collini, E. A., Folch, A., Báez, W., Bustos, E., Osores, M. S., Reckziegel, F., Alexander, P., and Viramonte, J. G.: Numerical simulations of windblown dust over complex terrain: the Fiambalá Basin episode in June 2015, Atmospheric Chemistry and Physics, 17, 6759–6778, https://doi.org/10.5194/acp-17-6759-2017, 2017.
- Poret, M., Costa, A., Folch, A., and Martí, A.: Modelling tephra dispersal and ash aggregation: The 26th April 1979 eruption, La Soufriere St. Vincent, Journal of Volcanology and Geothermal Research, 347, 207–220, https://doi.org/10.1016/j.jvolgeores.2017.09.012, 2017.
2016
- Costa, A., Pioli, L., and Bonadonna, C.: Assessing tephra total grain-size distribution: Insights from field data analysis, Earth and Planetary Science Letters, 443, 90–107, https://www.sciencedirect.com/science/article/pii/S0012821X16300577, 2016.
- de la Cruz, R., Folch, A., Farre, P., Cabezas, J., Navarro, N., and Cela, J.: Optimization of atmospheric transport models on HPC platforms, Computers and Geosciences, 97, 30–39, https://doi.org/10.1016/j.cageo.2016.08.019, 2016.
- Martí, A., Folch, A., Costa, A., and Engwell, S.: Reconstructing the phases of the Campanian Ignimbrite super-eruption, Scientific Reports, 6, https://doi.org/10.1038/srep21220, 2016.
- Parra, R., Bernard, B., Narvaez, D., Le Pennec, J.-L., Hasselle, N., and Folch, A.: Eruption Source Parameters for forecasting ash dispersion and deposition from vulcanian eruptions at Tungurahua volcano: Insights from field data from the July 2013 eruption, Journal of Volcanology and Geothermal Research, 309, 1–13, https://doi.org/10.1016/j.jvolgeores.2015.11.001, 2016.
- Sandri, L., Costa, A., Selva, J., Tonini, R., Macedonio, G., Folch, A., and Sulpizio, R.: Beyond eruptive scenarios: assessing tephra fallout hazard from Neapolitan volcanoes, Scientific Reports, 6, https://doi.org/10.1038/srep24271, 2016.
- Reckziegel, F., Bustos, E., Mingari, L., Baez, W., Villarosa, G., Folch, A., Collini, E., Viramonte, J., Romero, J.E., and Osores, S.: Forecasting volcanic ash dispersal and coeval resuspension during the April–May 2015 Calbuco eruption, Journal of Volcanology and Geothermal Research, https://doi.org/10.1016/j.jvolgeores.2016.04.033, 2016.
2015
- Watt, S., Gilbert, J.S., Folch, A., Phillips, J.C., and Cai, X.-M.: Enhanced tephra fallout driven by topographically induced atmospheric turbulence, Bulletin of Volcanology, https://doi.org/10.1007/s00445-015-0927-x, 2015.
2014
- Biass, S., Scaini, C., Bonadonna, C., Folch, A., Smith, K., and Höskuldsson, A.: A multi-scale risk assessment for tephra fallout and airborne concentration from multiple Icelandic volcanoes. Part 1: Hazard assessment, Natural Hazards and Earth System Sciences, 14, 2265–2287, https://doi.org/10.5194/nhess-14-2265-2014, 2014.
- Costa, A., Smith, V., Macedonio, G., and Matthews, N. E.: The magnitude and impact of the Youngest Toba Tuff super-eruption, Frontiers in Earth Science, 2, 16, https://doi.org/10.3389/feart.2014.00016, 2014.
- Folch, A., Mingari, L., Osores, M. S., and Collini, E.: Modeling volcanic ash resuspension - application to the 14-18 October 2011 outbreak episode in Central Patagonia, Argentina, Nat. Hazards Earth Syst. Sci., 14, 119–133, https://doi.org/10.5194/nhess-14-119-2014, 2014.
- Scaini, C., Biass, S., Galderisi, A., Bonadonna, C., Smith, K., Folch, A., and Hoskuldsson, A.: A multi-scale risk assessment to tephra fallout and dispersal at 4 Icelandic volcanoes – Part II: vulnerability and impact assessment, Nat. Hazards Earth Syst. Sci., 14, 2289–2312, https://doi.org/10.5194/nhess-14-2289-2014, 2014.
2013
- Bonasia, R., Scaini, C., Capra, L., Nathenson, M., Siebe, C., Arana-Salinas, L., and Folch, A.: Long-range hazard assessment of volcanic ash dispersal for a Plinian eruptive scenario at Popocatépetl volcano (Mexico): implications for civil aviation safety, Bulletin of Volcanology, 76, 789, https://doi.org/10.1007/s00445-013-0789-z, 2013.
- Collini, E., Osores, S., Folch, A., Viramonte, J., Villarosa, G., and Salmuni, G.: Volcanic ash forecast during the June 2011 Cordón Caulle eruption, Natural Hazards, 66, 389–412, https://doi.org/10.1007/s11069-012-0492-y, 2013.
- Costa, A., Folch, A., and Macedonio, G.: Density-driven transport in the umbrella region of volcanic clouds: Implications for tephra dispersion models, Geophys. Res. Lett., 40, 1–5, https://doi.org/10.1002/grl.50942, corrected on 17 June 2019, 2013.
- Osores, M., Folch, A., Collini, E., Villarosa, G., Durant, A., Pujol, G., and Viramonte, J.: Validation of the FALL3D model for the 2008 Chaitén eruption using field, laboratory and satellite data, Andean Geology, 40, 262–276, https://doi.org/10.5027/andgeoV40n2-a05, 2013.
2012
- Bear-Crozier, A., Kartadinata, N., Heriwaseso, A., and Møller Nielsen, O.: Development of python-FALL3D: a modified procedure for modelling volcanic ash dispersal in the Asia-Pacific region, Natural Hazards, 64, 821–838, 2012.
- Bonasia, R., Costa, A., Folch, A., and Macedonio, G.: Numerical simulation of tephra transport and deposition of the 1982 El Chichón eruption and implications for hazard assessment, Journal of Volcanology and Geothermal Research, 231, 39–49, https://doi.org/10.1016/j.jvolgeores.2012.04.006, 2012.
- Costa, A., Folch, A., Macedonio, G., Giaccio, B., Isaia, R., and Smith, V. C.: Quantifying volcanic ash dispersal and impact of the Campanian Ignimbrite super-eruption, Geophysical Research Letters, 39, https://doi.org/10.1029/2012GL051605, 2012.
- Folch, A., Costa, A., and Basart, S.: Validation of the FALL3D ash dispersion model using observations of the 2010 Eyjafjallajökull volcanic ash clouds, Atmospheric Environment, 48, 165–183, https://doi.org/10.1016/j.atmosenv.2011.06.072, 2012.
- Scaini, C., Folch, A., and Navarro, M.: Tephra hazard assessment at Concepción Volcano, Nicaragua, Journal of Volcanology and Geothermal Research, 219, 41–51, https://doi.org/10.1016/j.jvolgeores.2012.01.007, 2012.
- Sulpizio, R., Folch, A., Costa, A., Scaini, C., and Dellino, P.: Hazard assessment of far-range volcanic ash dispersal from a violent Strombolian eruption at Somma-Vesuvius volcano, Naples, Italy: implications on civil aviation, Bulletin of Volcanology, 74, 2205–2218, https://doi.org/10.1007/s00445-012-0656-3, 2012.
2011
- Corradini, S., Merucci, L., and Folch, A.: Volcanic Ash Cloud Properties: Comparison Between MODIS Satellite Retrievals and FALL3D Transport Model, IEEE Geoscience and Remote Sensing Letters, 8, 248–252, https://doi.org/10.1109/LGRS.2010.2064156, 2011.
2010
- Costa, A., Folch, A., and Macedonio, G.: A model for wet aggregation of ash particles in volcanic plumes and clouds: I. Theoretical formulation, Journal of Geophysical Research, 115, https://doi.org/10.1029/2009JB007175, 2010.
- Folch, A., Costa, A., Durant, A., and Macedonio, G.: A model for wet aggregation of ash particles in volcanic plumes and clouds: II. Model application, Journal of Geophysical Research, 115, https://doi.org/10.1029/2009JB007176, 2010.
- Folch, A. and Sulpizio, R.: Evaluating the long-range volcanic ash hazard using supercomputing facilities. Application to Somma-Vesuvius (Italy), and consequences on civil aviation over the Central Mediterranean Area, Bulletin of Volcanology, https://doi.org/10.1007/s00445-010-0386-3, 2010.
- Scollo, S., Folch, A., Coltelli, M., and Realmuto, V. J.: Three-dimensional volcanic aerosol dispersal: A comparison between Multi-angle Imaging SpectroRadiometer (MISR) data and numerical simulations, Journal of Geophysical Research: Atmospheres, 115, https://doi.org/10.1029/2009JD013162, 2010.
2009
- Folch, A., Costa, A., and Macedonio, G.: FALL3D: A Computational Model for Transport and Deposition of Volcanic Ash, Comput. Geosci., 35, 1334–1342, https://doi.org/10.1016/j.cageo.2008.08.008, 2009.
2008
- Folch, A., Cavazzoni, C., Costa, A., and Macedonio, G.: An automatic procedure to forecast tephra fallout, J. Volcanol. Geotherm. Res., 177, 767–777, 2008.
- Folch, A., Jorba, O., Viramonte, J., Volcanic ash forecast – application to the May 2008 Chaiten eruption, Nat. Hazards Earth Syst. Sci., 8, 927–940, 2008.
- Macedonio, G., Costa, A., and Folch, A.: Ash fallout scenarios at Vesuvius: numerical simulations and implications for hazard assessment, Journal of Volcanology and Geothermal Research, 178, 366–377, https://doi.org/10.1016/j.jvolgeores.2008.08.014, 2008.
- Scollo, S., Folch, A., and Costa, A.: A parametric and comparative study of different tephra fallout models, Journal of Volcanology and Geothermal Research, 176, 199–211, https://doi.org/10.1016/j.jvolgeores.2008.04.002, 2008.
2006
- Costa, A., Macedonio, G., and Folch, A.: A three-dimensional Eulerian model for transport and deposition of volcanic ashes, Earth and Planetary Science Letters, 241, 634–647, https://doi.org/10.1016/j.epsl.2005.11.019, 2006.
2000
- Kurganov, A. and Tadmor, E.: New High-Resolution Central Schemes for Nonlinear Conservation Laws and Convection–Diffusion Equations, Journal of Computational Physics, 160, 241–282, https://doi.org/10.1006/jcph.2000.6459, 2000.
Acknowledgments
This work has been partially funded by the H2020 Center of Excellence for Exascale in Solid Earth (ChEESE) under Grant Agreement No. 823844.