
Software Environment

This cluster has OpenHPC available for software management.

All the software and numerical libraries installed by Support on the cluster can be found under /apps/.

The software installed using OpenHPC can be found at /opt/ohpc/pub/.

If you need something that is not there please contact us to get it installed (see Getting Help).

OpenHPC

OpenHPC provides an integrated and tested collection of software components that can be used on the compute cluster.

To use the software installed with OpenHPC, the "openhpc" module needs to be loaded:

    % module load openhpc

OpenHPC uses a module hierarchy strategy: the "module avail" output only shows the modules that can be loaded given the modules already loaded. Once a compiler is loaded, the modules that depend on that compiler appear in the "module avail" output. After choosing an MPI implementation, the modules that depend on that compiler-MPI pairing become available as well. If a compiler is swapped, Lmod automatically unloads any modules that depend on the old compiler and reloads their counterparts built for the new compiler.
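For example, a session might look like the following (the module names match those used later in this page; the exact "module avail" output depends on what is installed):

    % module load openhpc
    % module avail                  # shows compilers and compiler-independent tools
    % module load gnu8/8.3.0
    % module avail                  # now also shows modules built with GCC 8, e.g. openmpi3
    % module load openmpi3/3.1.4
    % module avail                  # now also shows modules built for the GCC 8 + Open MPI 3 pairing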

Compilation for the architecture

To generate code that is optimized for the target architecture and its supported features, you have to use the corresponding compile flags. To compile MPI applications, an MPI installation also needs to be loaded in your session, for example Open MPI via module load openmpi3/3.1.4.
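As a minimal sketch with GCC, the -O and -march flags select the optimization level and target architecture; -march=native targets the CPU of the machine the compiler runs on, so use it when compiling on (or for) the compute nodes. The file names below are only illustrative:

    % module load gnu8/8.3.0
    % gcc -O2 -march=native a.c -o a.exe
    % module load openmpi3/3.1.4
    % mpicc -O2 -march=native a.c -o a.exe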

C Compilers

The GCC provided by the system is version 4.8.5. For better support of newer hardware and language features, other versions can be loaded via the provided modules. For example, on the Huawei cluster you can find GCC 8.3.0 and GCC 11.2:

    % module load gnu8/8.3.0
    % module load gcc/11.2
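After loading one of these modules you can check which compiler version is active (assuming the module places the compiler first in your PATH):

    % module load gcc/11.2
    % gcc --version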

Distributed Memory Parallelism

To compile MPI programs it is recommended to use the handy wrappers mpicc (C) and mpicxx (C++). You need to load the parallel environment first: module load openmpi3. These wrappers include all the necessary libraries to build MPI applications without having to specify all the details by hand.

    % mpicc a.c -o a.exe
    % mpicxx a.C -o a.exe
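With Open MPI (loaded via the openmpi3 module), the wrappers can also print the full underlying compile command, which is handy when debugging build problems; the --showme option is Open MPI specific:

    % mpicc --showme a.c -o a.exe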

Shared Memory Parallelism

OpenMP directives are fully supported by the GCC C and C++ compilers. To enable them, the flag -fopenmp must be added to the compile line:

    % gcc -fopenmp -o exename filename.c
    % g++ -fopenmp -o exename filename.C
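At run time, the number of OpenMP threads is controlled by the standard OMP_NUM_THREADS environment variable, for example:

    % gcc -fopenmp -o exename filename.c
    % OMP_NUM_THREADS=4 ./exename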

FORTRAN Compilers

On the cluster you can find these compilers:

gfortran -> GNU Fortran compiler

    % man gfortran

By default, the compilers expect all FORTRAN source files to have the extension ".f", and all FORTRAN source files that require preprocessing to have the extension ".F". The same applies to FORTRAN 90 source files with extensions ".f90" and ".F90".
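For example, both commands below compile a Fortran 90 file, but the ".F90" one is passed through the preprocessor first (the file names are illustrative):

    % gfortran -c prog.f90
    % gfortran -c prog.F90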

Distributed Memory Parallelism

In order to use MPI, you can again use the wrappers mpif77 or mpif90, depending on the source code type. You can always run man mpif77 to see a detailed list of options for configuring the wrappers, e.g. to change the default compiler.

    % mpif77 a.f -o a.exe
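If you need the wrapper to use a different underlying compiler, Open MPI's wrappers honour environment variables such as OMPI_FC (Fortran) and OMPI_CC (C); note that this is an Open MPI convention and may differ for other MPI implementations:

    % OMPI_FC=gfortran mpif90 a.f90 -o a.exe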

Shared Memory Parallelism

OpenMP directives are fully supported by the GCC Fortran compiler when the option "-fopenmp" is set:

    % gfortran -fopenmp -o exename filename.f90

Modules Environment

The Environment Modules package (http://modules.sourceforge.net/) provides dynamic modification of a user's environment via modulefiles. Each modulefile contains the information needed to configure the shell for an application or a compilation. Modules can be loaded and unloaded dynamically, in a clean fashion. All popular shells are supported, including bash, ksh, zsh, sh, csh and tcsh, as well as some scripting languages such as perl.

Installed software packages are divided into five categories:

  • Environment: modulefiles dedicated to preparing the environment, for example, setting all necessary variables to use openmpi to compile or run programs
  • Tools: useful tools which can be used at any time (php, perl, ...)
  • Applications: High Performance Computing programs (GROMACS, ...)
  • Libraries: typically loaded at compilation time, they load the correct compiler and linker flags into the environment (FFTW, LAPACK, ...)
  • Compilers: Compiler suites available for the system (intel, gcc, ...)

Modules tool usage

Modules can be invoked in two ways: by name alone or by name and version. Invoking them by name implies loading the default module version. This is usually the most recent version that has been tested to be stable (recommended) or the only version available.

    % module load gnu8

Invoking by version loads the specified version of the application. As of this writing, the previous command and the following one load the same module:

    % module load gnu8/8.3.0

The most important commands for modules are these (a short example session follows the list):

  • module list shows all the loaded modules
  • module avail shows all the modules the user is able to load
  • module purge removes all the loaded modules
  • module load <modulename> loads the necessary environment variables for the selected modulefile (PATH, MANPATH, LD_LIBRARY_PATH...)
  • module unload <modulename> removes all environment changes made by the module load command
  • module switch <oldmodule> <newmodule> unloads the first module (oldmodule) and loads the second module (newmodule)
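
For example, a short session combining these commands (using module names from earlier in this page) might look like this:

    % module load gnu8/8.3.0
    % module load openmpi3/3.1.4
    % module list
    % module switch gnu8 gcc/11.2
    % module purge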

You can run "module help" any time to check the command's usage and options or check the module(1) manpage for further information.

Custom stack size

The stack size is 10 MB by default. You can check your stack size at any time using the command:

    % ulimit -s

If this stack size doesn't fit your needs, you can add a command to your job script, after the module load commands, to set a custom value. This way, you have an appropriate stack size while using the compute nodes:

    % ulimit -Ss <new_size_in_KB>
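
For example, a job script fragment might look like the following sketch (scheduler directives are omitted, and the module names and the value are illustrative):

    #!/bin/bash
    # ... scheduler directives here ...
    module load openhpc
    module load gnu8/8.3.0 openmpi3/3.1.4
    ulimit -Ss 102400    # raise the soft stack limit to 100 MB (value in KB)
    ./a.exe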