
· 3 min read
Wenceslao Chiodi Calo

Running MPI programs in Singularity is fairly simple: the only prerequisite is that a version of MPI compatible with the host's installation is available inside the container. This is easy to arrange, since you can stage an MPI installation in a Singularity definition file. Singularity supports both OpenMPI and MPICH; in the following example, based on the official documentation, we will install OpenMPI 4.1.2 in a container:

Bootstrap: docker
From: ubuntu:18.04

%files
mpitest.c /opt

%environment
# Point to OPENMPI binaries, libraries and man pages
export OPENMPI_DIR=/opt/openmpi-4.1.2
export PATH="$OPENMPI_DIR/bin:$PATH"
export LD_LIBRARY_PATH="$OPENMPI_DIR/lib:$LD_LIBRARY_PATH"
export MANPATH=$OPENMPI_DIR/share/man:$MANPATH

%post
echo "Installing required packages..."
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get install -y wget git bash gcc gfortran g++ make

# Information about the version of OPENMPI to use
export OPENMPI_VERSION=4.1.2
export OPENMPI_MINOR=4.1
export OPENMPI_URL="https://download.open-mpi.org/release/open-mpi/v$OPENMPI_MINOR/openmpi-$OPENMPI_VERSION.tar.gz"
export OPENMPI_DIR=/opt/openmpi-$OPENMPI_VERSION

echo "Installing OPENMPI..."
mkdir -p /tmp/openmpi
mkdir -p /opt
# Download
cd /tmp/openmpi && wget -O openmpi-$OPENMPI_VERSION.tar.gz $OPENMPI_URL && tar xzf openmpi-$OPENMPI_VERSION.tar.gz
# Compile and install
cd /tmp/openmpi/openmpi-$OPENMPI_VERSION && ./configure --prefix=$OPENMPI_DIR && make install

# Set env variables so we can compile our application
export PATH=$OPENMPI_DIR/bin:$PATH
export LD_LIBRARY_PATH=$OPENMPI_DIR/lib:$LD_LIBRARY_PATH

echo "Compiling the MPI application..."
cd /opt && mpicc -o mpitest mpitest.c

As you can see, the image is pulled from Docker Hub and is based on the Ubuntu 18.04 distribution. The definition file has three separate sections. The %files section copies the specified file (here, mpitest.c) from the host system into the container's file system. The %environment section sets environment variables at runtime; in this case it adds the paths of the MPI installation to the PATH, LD_LIBRARY_PATH and MANPATH variables. Finally, the %post section is where most of the work happens, since it is where programs are usually installed.

The OpenMPI installation is a fairly straightforward build from source. Notice that several environment variables are declared in %post; they make the rest of the commands more generic, so that a different version can be installed in the future with minimal changes, and they are not exported at runtime (only the ones in %environment are). The rest of the installation consists of the usual steps: download and untar the source code into a temporary directory, run the configure script with the prefix set to the desired installation directory, and run make install. As a last step, %post compiles mpitest.c into /opt/mpitest.
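
The contents of mpitest.c are not included in this post, and any small MPI program will do. As a reference, a minimal test program along the following lines (a sketch, not necessarily the original file) is enough to verify that the setup works:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char hostname[MPI_MAX_PROCESSOR_NAME];

    /* Initialize the MPI environment */
    MPI_Init(&argc, &argv);

    /* Get the rank of this process and the total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Get the name of the node this process is running on */
    MPI_Get_processor_name(hostname, &len);

    printf("Hello from rank %d of %d on %s\n", rank, size, hostname);

    MPI_Finalize();
    return 0;
}

With four ranks you should see four lines of output, one per process, each reporting its rank and the node it ran on.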

username@laptop$ sudo singularity build mpi-image1.sif mpi-image1.def

The singularity build command must be executed with root privileges and may take a little while. Once the OpenMPI installation is done, the final step compiles the provided program, and voilà: the image has been built.
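
Before copying the image anywhere, you can optionally run a quick sanity check to confirm which MPI version ended up inside the container, for example:

username@laptop$ singularity exec mpi-image1.sif mpirun --version

This should report version 4.1.2, matching what was installed in the definition file.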

Now, to actually run this container on an HPC cluster, you first need to copy it to the desired cluster, in this case MareNostrum 4:

username@laptop$ scp mpi-image1.sif username@mn1.bsc.es:/desired/directory/

Now it's time to load the required modules. Any version of Singularity should do the job, so the latest one installed works just fine. As for OpenMPI, it should, as mentioned before, be a version compatible with the one installed in the container. Since OpenMPI 4.1.2 is available, it is the best choice:

username@login1$ module load openmpi/4.1.2 singularity/3.7.3

Finally, it's time to run the container with the MPI application. Singularity uses the hybrid approach for MPI executions, meaning you prepend mpirun (or the equivalent launcher) to the singularity command; the MPI installation on the host and the one inside the container then work in tandem to instantiate the corresponding processes. The final command should look something like this:

username@login1$ mpirun -np 4 singularity exec mpi-image1.sif /opt/mpitest

For more general information about Singularity don't forget to check the user guide.

· One min read
Ricard Zarco Badia

Today we present our new and improved documentation for HPC clusters (and others). You will find that navigation has improved, the design has been modernized, and in general it is a more pleasant experience than before. Feel free to try it, and send us a message with any suggestions! We hope you enjoy it.

Regards, the HPC User Support Team.