Running MPI programs in Singularity is fairly simple; the only prerequisite is that a version of MPI compatible with the host's is installed within the container. Doing this is easy, since you can stage an MPI installation in a Singularity definition file. Singularity supports both OpenMPI and MPICH; in the following example, based on the official documentation, we will install OpenMPI 4.1.2 in a container:
Bootstrap: docker
From: ubuntu:18.04

%files
    mpitest.c /opt

%environment
    # Point to OPENMPI binaries, libraries, man pages
    export OPENMPI_DIR=/opt/openmpi
    export PATH="$OPENMPI_DIR/bin:$PATH"
    export LD_LIBRARY_PATH="$OPENMPI_DIR/lib:$LD_LIBRARY_PATH"
    export MANPATH="$OPENMPI_DIR/share/man:$MANPATH"

%post
    echo "Installing required packages..."
    apt-get update && apt-get install -y wget git bash gcc gfortran g++ make
    # Information about the version of OPENMPI to use
    export OPENMPI_VERSION=4.1.2
    export OPENMPI_URL="https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-$OPENMPI_VERSION.tar.gz"
    export OPENMPI_DIR=/opt/openmpi
    echo "Installing OPENMPI..."
    mkdir -p /tmp/openmpi /opt
    cd /tmp/openmpi && wget -O openmpi-$OPENMPI_VERSION.tar.gz $OPENMPI_URL && tar xzf openmpi-$OPENMPI_VERSION.tar.gz
    # Compile and install
    cd /tmp/openmpi/openmpi-$OPENMPI_VERSION && ./configure --prefix=$OPENMPI_DIR && make install
    # Set env variables so we can compile our application
    export PATH=$OPENMPI_DIR/bin:$PATH
    export LD_LIBRARY_PATH=$OPENMPI_DIR/lib:$LD_LIBRARY_PATH
    echo "Compiling the MPI application..."
    cd /opt && mpicc -o mpitest mpitest.c
As you can see, the base image is taken from Docker Hub and uses the Ubuntu 18.04 distribution. This definition file has three separate sections. The first one is the %files section, which copies the specified file from the host system into the container's file system. Then comes the %environment section, which sets environment variables at runtime; in this case it adds the corresponding paths of the MPI installation to the PATH, LD_LIBRARY_PATH and MANPATH variables. Finally, there is the %post section, where most of the work is done, since it is where you usually install programs.
The OpenMPI installation is a fairly straightforward source-code installation. You may notice that several environment variables are declared here; these make the rest of the commands more generic, which facilitates a possible future installation of a different version, and they are not exported at runtime. The rest of the installation consists of the usual steps: download and untar the source code, run the configure script with the prefix set to the desired installation directory, and finally run make install.
username@laptop$ sudo singularity build mpi-image1.sif mpi-image1.def
The singularity build command must be executed with root privileges and may take a short while. Once the installation is done, the final step of the %post section compiles the provided program and voila! The image has been built.
Now, to actually run this container on an HPC cluster, you should copy it to the desired cluster, in this case MareNostrum 4:
username@laptop$ scp mpi-image1.sif email@example.com:/desired/directory/
Now it's time to load the required modules. Any version of Singularity should do the job, so the latest one installed works just fine. As for OpenMPI, it should, as mentioned before, be a version compatible with the one installed in the container. Since OpenMPI 4.1.2 is available, it is the best choice:
username@login1$ module load openmpi/4.1.2 singularity/3.7.3
Finally, it's time to run the container with the MPI application. Singularity uses the hybrid approach for MPI executions, meaning you should prepend mpirun (or the equivalent launcher) to the singularity command; the host MPI and the container MPI then work in tandem to instantiate the corresponding processes. The final command should look something like this:
username@login1$ mpirun -np 4 singularity exec mpi-image1.sif /opt/mpitest
For more general information about Singularity don't forget to check the user guide.