
Singularity User Guide

Introduction

Singularity is a container platform designed to be portable and reproducible. Its main appeal is that each image is contained in a single file, which can be built outside an HPC cluster and then used on one. It favors integration over isolation, offering several tools that ease communication between the container and the host system.

Singularity version 3.7.4 is the last stable release before the project was forked into Apptainer and SingularityCE. Both programs fulfill the same purpose as Singularity and are very similar in their capabilities. SingularityCE continued the original version numbering, so SingularityCE 3.8.0 is actually its first release, while Apptainer went its separate way and started over, making Apptainer 1.0.0 its first release.

At the BSC we have Singularity installed on most of the clusters. All installed versions are newer than 3.0.0 (the release where the .sif file type was first introduced, alongside a major code revamp), so if you want to install it on your local machine pick a recent version, since many of the features in this document only work with releases 3.0.0 and newer. Versions newer than the fork are SingularityCE as of now, but they work just fine with containers created with older, pre-fork releases.
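
You can check which version is installed, either on your local machine or inside a cluster module, with:

user@laptop$ singularity --version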

Containers

How to create a Singularity container

There are different ways of creating a container: you can build one from the container library or from Docker Hub, and alternatively you can create a sandbox (directory based) container or build one from a Singularity definition file.

Singularity uses the build tool to assemble containers, which may require root permissions. In addition, accessing image libraries requires internet access, so we encourage building these containers on your personal laptop before moving them to an HPC cluster.
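
Once built, the resulting image file can be copied to a cluster like any other file, for example with scp (the hostname and destination path below are just placeholders):

user@laptop$ scp image_name.sif user@cluster:/path/to/images/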

caution

Bear in mind that Singularity is architecture dependent, meaning that the machine used to create the container and the one running it should have the same architecture.
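
If in doubt, you can compare the architecture of both machines with uname -m; the outputs should match (e.g. x86_64 or aarch64):

user@laptop$ uname -m
user@cluster$ uname -m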

Build a container from the container library:

user@laptop$ singularity build image_name.sif library://container_name

Build a container from docker hub:

user@laptop$ singularity build image_name.sif docker://sylabsio/container_name

Build a container from an already existing container:

user@laptop$ singularity build --sandbox image_name image_name2.sif
or
user@laptop$ singularity build image_name.sif path_to_container/

The first example builds a writable, directory based container from an existing container image, while the second one does the opposite.

Build a container from a Singularity definition file:

This is the only build method that always requires sudo to execute.

user@laptop$ sudo singularity build image_name.sif definition_file.def

More information, along with the full build option list, can be found in the official documentation.

Convert from Docker to Singularity

info

If you're coming from Docker you should bear in mind that Singularity containers are files, created in the working directory where you run the build command.

From Docker Hub

If the Docker image you want already exists in Docker Hub you can build it as a Singularity image, as stated before:

user@laptop$ singularity build image_name.sif docker://sylabsio/container_name

Using a .tar and docker-archive

Otherwise, if you want to convert an already existing Docker container (or one created from a Dockerfile) to Singularity, you can do so as follows:

  1. Find the ID of the Docker image you wish to convert, in the host where you run Docker, using the docker images command.

  2. Create a .tar from the Docker image:

user@laptop$ docker save docker_img_ID -o image_name.tar

  3. Copy the tar file to an HPC cluster, or the host where you want to build the Singularity container. You can use SCP or any other available data transfer command.

  4. Create a Singularity container from the tar file:

user@cluster$ singularity build image_name.sif docker-archive://path/to/tar/file

This also works with sandbox containers, as seen below. Bear in mind that docker-archive:// should always be specified; here are a couple of examples:

# tar file is in the current directory and is called lolcow.tar
singularity build --sandbox lolcow docker-archive://lolcow.tar

# tar file is in /home/harold/Documents/ and is called lolcow.tar
singularity build --sandbox lolcow docker-archive:///home/harold/Documents/lolcow.tar

More information can be found at this link.

Using a docker2singularity Docker image

Alternatively, you can also run this command on your machine:

sudo docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /where/to/store/the/image:/output \
--privileged -t --rm \
quay.io/singularity/docker2singularity [--name image_name.sif] docker_img_ID

It may look a bit lengthy at first, but it's not really complicated: the idea is pulling a Docker image capable of transforming a Docker container into a Singularity one. The command binds two volumes, /var/run/docker.sock (the Unix socket the Docker daemon listens on by default) and a user-specified path to the directory where the Singularity image should be stored; it then grants privileges to the container, allocates a pseudo-TTY and marks the container to be removed after it finishes. Finally, the image to pull, docker2singularity, is specified alongside the parameters it receives, which in this case are the Docker image ID and the output image name.

caution

This command may not work on architectures other than x86 and its variants, so we encourage using the .tar method.

More information at Quay.io.

Singularity Definition Files

All Singularity definition files consist of two parts: the header and the sections. The header specifies the operating system used to build the container, while the sections execute commands at different stages of the build process, allowing you to move files into the container, install applications, or run tests to validate the resulting image.

Here you can find an example of a very simple definition file using some of the different options available:

BootStrap: library
From: ubuntu:14.04

%post
apt-get -y update
apt-get -y install netcat

%environment
export LISTEN_PORT=12345
export LC_ALL=C
export PATH=/usr/local:$PATH

%runscript
echo "Container was created"

%startscript
nc -lp $LISTEN_PORT

%labels
Author user@bsc.es
Version v0.0.1
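
As a quick illustration, assuming the definition file above is saved as example.def, building the image and running it executes the %runscript section:

user@laptop$ sudo singularity build example.sif example.def
user@laptop$ singularity run example.sif
Container was created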

You can check more detailed information in the official documentation.

Singularity Usage

Running Singularity on different clusters

Singularity images don't have their own kernel and instead use the host system's, so if an application requires a specific kernel it might not work on certain clusters. That said, Singularity is available on most of BSC's HPC clusters; you can check its availability with:

user@cluster$ module avail singularity
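
To use it, load the module (the exact module name and version depend on the cluster):

user@cluster$ module load singularity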

The singularity run command runs all the commands specified by the %runscript section in the definition file:

user@cluster$ singularity run image_name.sif

The singularity exec command runs any command or application available within the container:

user@cluster$ singularity exec image_name.sif /home/user/hello_world.sh

Finally, singularity shell allows you to run a shell inside the container:

user@cluster$ singularity shell image_name.sif
# Singularity> ./hello_world.sh

If you want your container to be more isolated from the base system, these three commands support the -c (--contain) and -C (--containall) options, which avoid using the base file system, or additionally contain the environment, PID and IPC namespaces.
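
For example, to get a shell that shares neither the host file system nor its environment, PID or IPC namespaces:

user@cluster$ singularity shell --containall image_name.sif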

See this link for a full list of supported options.

BSC-specific commands

Here at the BSC we offer a simple wrapper called bsc_singularity that allows users to list the images built by the support team and run exec/shell commands on them. It's included in all Singularity >3.0 modules.

user@cluster$ bsc_singularity ls
image1.sif
image2.sif
image3.simg

user@cluster$ bsc_singularity exec <options> <container> <command>
user@cluster$ bsc_singularity shell <options> <container>

Additionally, there's an option to print an information file with basic details about the container, although it may not be available for all of them.

user@cluster$ bsc_singularity info <container>

Running Singularity with MPI

MPI executions with Singularity should always be done by calling mpirun before singularity:

user@cluster$ mpirun -np 4 singularity exec image_name.sif ./mpi_app

Doing so the other way round may result in errors.

Do bear in mind that the MPI version installed within the container should be compatible with the version installed on the cluster. A more in-depth guide on how to build and run MPI applications with Singularity can be found here.
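
As a minimal sketch, assuming a Slurm-based cluster where the Singularity module needs to be loaded, an MPI job script could look like this:

#!/bin/bash
#SBATCH --job-name=singularity_mpi
#SBATCH --ntasks=4

module load singularity
mpirun -np 4 singularity exec image_name.sif ./mpi_app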

Contact the support team if any problem arises.

Running Singularity with GPUs

NVIDIA GPUs are supported out of the box through the --nv option:

user@cluster$ singularity exec --nv image_name.sif ./gpu_app

For AMD GPUs, an experimental option for ROCm is also supported:

user@amdlogin$ singularity exec --rocm image_name.sif ./gpu_app

Editing a container

If you are using a sandbox container you can edit its contents using the --writable option, which makes the container accessible as read/write instead of read-only. This is useful when you want to edit a file within the container or install new tools or applications.

These commands are best executed on your local machine, since editing images should preferably be done with root permissions:

user@laptop$ singularity shell --writable image_name
# Singularity> apt-get update && apt-get install sl
# Singularity> echo "echo Hello World!" > /bin/hello_world.sh
# Singularity> exit
user@laptop$ singularity exec image_name /bin/hello_world.sh
# > Hello World!

If your container is a .sif or similar file you can always convert it to a directory based container, edit it, and then convert it back.
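
A possible round trip looks like this (the sandbox directory name is arbitrary):

user@laptop$ singularity build --sandbox image_dir image_name.sif
user@laptop$ singularity shell --writable image_dir
# Singularity> <edit files, install packages, etc.>
# Singularity> exit
user@laptop$ singularity build image_name_edited.sif image_dir/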

Binding paths

You may want your container to have access to a directory outside of it. For that purpose Singularity has a --bind/-B option. All action commands (run, exec, shell and instance start) accept that option, using the format source[:destination[:options]], where source is the directory outside the container, destination is the directory within the container (same as source if left empty) and options may be specified if desired (ro or rw):

user@cluster$ singularity run --bind /dir/in/host image_name.sif
or
user@cluster$ singularity run --bind /dir/in/host:/dir/in/container image_name.sif
or
user@cluster$ singularity run --bind /dir/in/host:/dir/in/container:ro image_name.sif

It is also possible to specify binds in the SINGULARITY_BIND environment variable, which will bind the paths even when the container is run from a jobscript. This also avoids having to write the --bind option every time the container is executed:

user@cluster$ export SINGULARITY_BIND="/dir/in/host"
user@cluster$ singularity shell image_name.sif
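
Several bind specifications can be given in the variable, separated by commas, and each one accepts the same source[:destination[:options]] format as --bind:

user@cluster$ export SINGULARITY_BIND="/dir/in/host,/other/dir:/data:ro"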

When binding paths in sandbox containers with the --writable option it is highly encouraged to create the corresponding directories within the container beforehand, as otherwise it could result in unexpected behavior.
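
For instance, if you plan to bind a host directory to /dir/in/container, you can create that mount point inside the sandbox first:

user@laptop$ singularity exec --writable image_name mkdir -p /dir/in/container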