COMP Superscalar

Big Data / Distributed Computing / Programming Models

COMP Superscalar (COMPSs) is a framework that aims to ease the development and execution of parallel applications for distributed infrastructures such as clusters, clouds and containerized platforms.
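As a flavour of the programming model, the following minimal PyCOMPSs sketch marks a plain Python function as a task and synchronizes on its result. It uses the standard @task decorator and compss_wait_on call; it is an illustration, not a complete application.

from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def increment(value):
    # Runs asynchronously on an available worker resource.
    return value + 1

if __name__ == "__main__":
    partial = increment(1)            # returns a future object immediately
    result = compss_wait_on(partial)  # synchronizes with the main program
    print(result)                     # prints 2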

Software Author: 

Workflows and Distributed Computing Group

Contact:

Jorge Ejarque (jorge [dot] ejarque [at] bsc [dot] es)

Rosa M. Badia (rosa [dot] m [dot] badia [at] bsc [dot] es)

Support mailing list (support-compss [at] bsc [dot] es)

Software Cost: 

COMP Superscalar is distributed under the Apache License, Version 2.0


3.0 (Latest Version)

COMP Superscalar version 3.0 (Lavender). Release date: June 2022

Release Notes

New Features

  • CLI to unify the execution of applications in different environments.
  • Automatic creation of Data Provenance information from PyCOMPSs executions.
  • Transparent task-based checkpointing support.
  • Support for MPMD MPI applications as tasks.
  • Support for task epilog and prolog.
  • Generic support for reusable descriptions of external software executions inside a COMPSs task (@Software); see the sketch after this list.
  • Mypy compilation of the Python binding.
  • Integration with DLB DROM for improving affinity in OpenMP tasks.
  • RISC-V 64-bit support.
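As an illustration of the new @Software decorator, the sketch below assumes it is exposed as pycompss.api.software with a config_file argument pointing to a JSON description of the external program; the file name and the empty task body are hypothetical placeholders, so check the COMPSs 3.0 documentation for the exact schema.

# Hedged sketch: import path and config_file argument as documented for
# PyCOMPSs 3.0; "sim_config.json" is an illustrative placeholder.
from pycompss.api.software import software
from pycompss.api.task import task

@software(config_file="sim_config.json")  # JSON describing the external binary
@task()
def run_simulation():
    # Empty body: the runtime launches the software described in the
    # JSON file and manages it as a regular COMPSs task.
    pass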

Deprecated Features

  • Python 2 support.
  • AutoParallel module (requires Python 2).
  • SOAP Service tasks.

Improvements

  • wait_on and wait_on_file API homogenization (see the sketch after this list).
  • Improvements in the support for task nesting.
  • Improvements in pluggable schedulers.
  • Improvements in memory profiling reports.
  • Improvements in the tracing system: offline trace generation and support for working directory changes.
  • Configuration files for the Nord3v2 and LaPalma systems.
  • Several bug fixes.
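As a pointer to the homogenized synchronization API, the minimal sketch below shows compss_wait_on_file next to a file-producing task; both calls belong to the standard PyCOMPSs API, while the file name out.txt is an illustrative placeholder.

from pycompss.api.task import task
from pycompss.api.parameter import FILE_OUT
from pycompss.api.api import compss_wait_on_file

@task(fname=FILE_OUT)
def write_message(fname):
    # Executed as a task; the runtime transfers the output file.
    with open(fname, "w") as f:
        f.write("hello")

if __name__ == "__main__":
    write_message("out.txt")
    compss_wait_on_file("out.txt")  # blocks until the file is available locally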

Known Limitations

  • Issues when using tracing with Java 14+.
  • Collections are not supported in HTTP tasks.
  • OSX support is limited to Java and Python 2 without CPU affinity (requires executing with --cpu_affinity=disable). We have also detected issues when several Python 3 versions are installed on the same system. Tracing is not available.
  • Reduce operations can consume more disk space than a manually programmed n-ary reduction.
  • Objects used as task parameters must be serializable.
  • Tasks that invoke NumPy and MKL may experience issues if different tasks use different MKL thread counts. This is because MKL reuses its threads across calls and does not change the number of threads from one call to the next. The same can happen with other libraries implemented with OpenMP. To avoid these issues, use the DLB option in the cpu_affinity flag.
  • C++ objects declared as arguments in coarse-grain tasks must be passed to the task methods as object pointers for proper dependency management.
  • Master as worker is not supported for executions with a persistent worker in C++.
  • Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
  • Calls to delete files that were used as task inputs can force a significant synchronization of the main code.
  • Defining a parameter as OUT is only allowed for files and collection files.
  • There is an issue with hwloc and Docker that can affect Python MPI workers. Fixing it requires upgrading the hwloc version used by the MPI runtime.

For further information, please refer to the COMPSs Documentation

Check the Installation manual for details on how to install from the repository

Read this document before downloading the VM image: COMPSs VM Instructions

Docker image pull commands:

docker pull compss/compss:3.0
docker pull compss/compss-tutorial:3.0
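To try an image locally, a container can be started interactively; this assumes the image ships a standard shell, so adapt the command if the entrypoint differs:

docker run -it compss/compss:3.0 /bin/bash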

Old Versions

2.10, 2.9, 2.8, 2.7, 2.6, 2.5, 2.4, 2.3, 2.2, 2.1, 2.0