
Software Environment

All software and numerical libraries available on the cluster can be found under /apps/. If you need something that is not there, please contact us to get it installed.

Compilation for the architecture

To generate code that is optimized for the target architecture and its supported features, such as the SSE, MMX and AVX instruction set extensions, you will have to use the corresponding compiler flags. To compile MPI applications, an MPI installation also needs to be loaded in your session, for example Intel MPI via module load impi/2017.4.

Intel Compilers

The latest Intel compilers provide the best possible optimizations for the Xeon Platinum architecture. By default, when starting a new session on the system, the basic modules for the Intel suite will be automatically loaded: the compilers (intel/2017.4), the Intel MPI software stack (impi/2017.4) and the Math Kernel Library MKL (mkl/2017.4), in their latest versions. We highly recommend linking against MKL where supported to achieve the best performance.

To separately load the Intel compilers, please use:

module load intel/2017.4

The corresponding optimization flags for icc are CFLAGS="-xCORE-AVX512 -mtune=skylake". As the login nodes have the exact same architecture as the compute nodes, you can also use the flag -xHost, which enables all optimizations available on the compile host.
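
For example, a minimal compile line using these flags, assuming a source file named hello.c (the file name is purely illustrative), could be:

icc -xCORE-AVX512 -mtune=skylake -O2 -o hello hello.c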

Intel Compiler Licences

For licensing reasons we recommend compiling on either login1 or the interactive partition. We currently have node-locked licences installed that allow unlimited compilations on login1 and on the machines available via the interactive queue (logins 4 and 5). To compile on the rest of the compute and login nodes, a limited number of floating licences is available. Should all of them be in use when you try to compile, you will experience a delay while the compiler starts and tries to check out a licence.

In that case, an error message like the one below will appear. Please switch to login1, login4 or login5 to compile without limitations or licensing issues.

ifort: error #10052: could not checkout FLEXlm license

Error: A license for Comp-CL is not available (-9,57).

License file(s) used were (in this order):
** 1. Trusted Storage
** 2. /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/bin/intel64/../../Licenses
** 3. /home/bsc18/bsc1888//Licenses
** 4. /opt/intel/licenses
** 5. /Users/Shared/Library/Application Support/Intel/Licenses
** 6. /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/bin/intel64/license.lic
** 7. /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/bin/intel64/login1_COM_L___1.lic
** 8. /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/bin/intel64/login4_COM_L___1.lic
** 9. /gpfs/apps/MN4/INTEL/2017.4/compilers_and_libraries_2017.4.196/linux/bin/intel64/login5_COM_L___1.lic
Please refer http://software.intel.com/sites/support/ for more information..

GCC

The GCC provided by the system is version 4.8.5. For better support of new and old hardware features, we provide different versions that can be loaded via the corresponding modules. For example, in MareNostrum you can find GCC 9.2.0:

module load gcc/9.2.0

The corresponding flags are CFLAGS="-march=skylake-avx512"
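
A minimal compile line using this flag, again assuming an illustrative source file named hello.c, might be:

gcc -march=skylake-avx512 -O2 -o hello hello.c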

C Compilers

In the cluster you can find these C/C++ compilers:

icc / icpc -> Intel C/C++ Compilers

man icc
man icpc

Note: In case you are planning on using the Intel compilers, we strongly recommend using the ones from our "intel" modules with a version higher than 2017.4.

gcc / g++ -> GNU Compilers for C/C++

man gcc
man g++

All invocations of the C or C++ compilers follow these suffix conventions for input files:

.C, .cc, .cpp, or .cxx -> C++ source file.
.c -> C source file
.i -> preprocessed C source file
.so -> shared object file
.o -> object file for ld command
.s -> assembler source file

By default, the preprocessor is run on both C and C++ source files.

These are the default sizes of the standard C/C++ data types on the machine:

Default data type sizes on the machine

Type              Length (bytes)
bool (C++ only)   1
char              1
wchar_t           4
short             2
int               4
long              8
float             4
double            8
long double       16
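
A quick way to check these sizes yourself is a small C program using sizeof; this is only an illustrative sketch, not part of the system software:

#include <stdio.h>
#include <stddef.h>

int main(void) {
    /* Print the size in bytes of each standard type on this machine */
    printf("char:        %zu\n", sizeof(char));
    printf("wchar_t:     %zu\n", sizeof(wchar_t));
    printf("short:       %zu\n", sizeof(short));
    printf("int:         %zu\n", sizeof(int));
    printf("long:        %zu\n", sizeof(long));
    printf("float:       %zu\n", sizeof(float));
    printf("double:      %zu\n", sizeof(double));
    printf("long double: %zu\n", sizeof(long double));
    return 0;
}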

Distributed Memory Parallelism

To compile MPI programs it is recommended to use the following handy wrappers: mpicc and mpicxx for C and C++ source code. You need to choose the parallel environment first: module load openmpi / module load impi / module load poe. These wrappers include all the necessary libraries to build MPI applications without having to specify all the details by hand.

mpicc a.c -o a.exe
mpicxx a.C -o a.exe
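
As an illustration, a minimal MPI "hello world" in C (here named a.c purely for the example) that the mpicc line above would build:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down MPI */
    return 0;
}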

Shared Memory Parallelism

OpenMP directives are fully supported by the Intel C and C++ compilers. To use it, the flag -qopenmp must be added to the compile line.

icc -qopenmp -o exename filename.c
icpc -qopenmp -o exename filename.C

You can also mix MPI + OpenMP code using -qopenmp with the MPI wrappers mentioned above.
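
A minimal sketch of such a hybrid compile line, assuming a source file named hybrid.c:

mpicc -qopenmp hybrid.c -o hybrid.exe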

Automatic Parallelization

The Intel C and C++ compilers are able to automatically parallelize simple loop constructs using the option "-parallel":

icc -parallel a.c
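
For illustration, a simple loop of the kind the auto-parallelizer can typically handle (a hypothetical example, not specific to this cluster):

/* a.c: the iterations are independent, so -parallel can split them across threads */
#include <stdio.h>
#define N 1000000

static double a[N], b[N];

int main(void) {
    int i;
    for (i = 0; i < N; i++)
        a[i] = 2.0 * b[i] + 1.0;   /* no loop-carried dependencies */
    printf("%f\n", a[0]);
    return 0;
}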

FORTRAN Compilers

In the cluster you can find these compilers:

ifort -> Intel Fortran Compilers

man ifort

gfortran -> GNU Compilers for FORTRAN

man gfortran

By default, the compilers expect all FORTRAN source files to have the extension ".f", and all FORTRAN source files that require preprocessing to have the extension ".F". The same applies to FORTRAN 90 source files with extensions ".f90" and ".F90".

Distributed Memory Parallelism

In order to use MPI, again you can use the wrappers mpif77 or mpif90, depending on the source code type. You can always run man mpif77 to see a detailed list of options to configure the wrappers, e.g. to change the default compiler.

mpif77 a.f -o a.exe

Shared Memory Parallelism

OpenMP directives are fully supported by the Intel Fortran compiler when the option "-qopenmp" is set:

ifort -qopenmp -o exename filename.f

Automatic Parallelization

The Intel Fortran compiler will attempt to automatically parallelize simple loop constructs using the option "-parallel":

ifort -parallel a.f

Modules Environment

The Environment Modules package (http://modules.sourceforge.net/) provides dynamic modification of a user's environment via modulefiles. Each modulefile contains the information needed to configure the shell for an application or a compilation. Modules can be loaded and unloaded dynamically, in a clean fashion. All popular shells are supported, including bash, ksh, zsh, sh, csh and tcsh, as well as some scripting languages such as Perl.

Installed software packages are divided into five categories:

  • Environment: modulefiles dedicated to prepare the environment, for example, get all necessary variables to use openmpi to compile or run programs
  • Tools: useful tools which can be used at any time (php, perl, ...)
  • Applications: High Performance Computing programs (GROMACS, ...)
  • Libraries: these are typically loaded at compile time; they load into the environment the correct compiler and linker flags (FFTW, LAPACK, ...)
  • Compilers: Compiler suites available for the system (intel, gcc, ...)

Modules tool usage

Modules can be invoked in two ways: by name alone or by name and version. Invoking them by name implies loading the default module version. This is usually the most recent version that has been tested to be stable (recommended) or the only version available.

module load intel

Invoking a module by version loads the specified version of the application. As of this writing, the previous command and the following one load the same module.

module load intel/2017.4

The most important commands for modules are these:

  • module list shows all the loaded modules
  • module avail shows all the modules the user is able to load
  • module purge removes all the loaded modules
  • module load <modulename> loads the necessary environment variables for the selected modulefile (PATH, MANPATH, LD_LIBRARY_PATH...)
  • module unload <modulename> removes all environment changes made by the module load command
  • module switch <oldmodule> <newmodule> unloads the first module (oldmodule) and loads the second module (newmodule)

You can run "module help" any time to check the command's usage and options or check the module(1) manpage for further information.
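
For example, a typical sequence using only commands and module names mentioned above might be:

module list                # see what is currently loaded
module purge               # start from a clean environment
module load gcc/9.2.0      # load GCC 9.2.0 instead of the default Intel suite
module list                # verify the change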

Module custom stack size

The stack size is 2GB by default, but there are modules that can change that value when they are loaded. That's to prevent errors that would happen otherwise (for example, while using python threads). Here's a list of the affected modules:

  • python/3-intel-2018.2 (64 MB)
  • python/2-intel-2018.2 (64 MB)
  • python/2.7.13 (10 MB)
  • python/2.7.13_ML (64 MB)
  • python/2.7.14 (10 MB)
  • python/3.6.1 (10 MB)
  • python/3.6.4_ML (64 MB)
  • paraview/5.4.0 (64 MB)
  • paraview/5.5.2 (64 MB)
  • vmd/1.9.3 (64 MB)
  • vmd/1.9.3-python (64 MB)
  • igv/2.3.94 (64 MB)

You can check your stack size at any time using the command:

ulimit -s

If by any chance this stack size doesn't fit your needs, you can add a command to your job script that sets a custom value after the module load commands. This way, you can have an appropriate stack size while using the compute nodes:

ulimit -Ss <new_size_in_KB>
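
For example, a job script fragment that loads one of the modules above and then sets a larger soft stack limit (the module and the 128 MB value are purely illustrative) could look like:

module load python/3-intel-2018.2
ulimit -Ss 131072    # soft stack limit of 128 MB (the value is given in KB)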