COMP Superscalar

Big Data Distributed Computing Programming Models

COMP Superscalar (COMPSs) is a framework that aims to ease the development and execution of parallel applications for distributed infrastructures such as clusters, clouds, and containerized platforms.
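
As a brief, hypothetical illustration of the programming model (not part of the official release notes), a minimal PyCOMPSs application could look as follows: functions decorated with @task become asynchronous tasks scheduled by the runtime, and compss_wait_on synchronizes their results back into the main program.

from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def increment(value):
    # Runs as an asynchronous task; the runtime tracks its data dependencies.
    return value + 1

if __name__ == "__main__":
    partials = [increment(i) for i in range(4)]  # spawn four independent tasks
    results = compss_wait_on(partials)           # synchronize with the main code
    print(results)                               # [1, 2, 3, 4]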

Software Author: 

Workflows and Distributed Computing Group

Contact:

Jorge Ejarque (jorge [dot] ejarque [at] bsc [dot] es)

Rosa M. Badia (rosa [dot] m [dot] badia [at] bsc [dot] es)

Support mailing list (support-compss [at] bsc [dot] es)

License: 

COMP Superscalar is distributed under the Apache License, version 2.0.

3.3 (Latest Version)

COMP Superscalar version 3.3 (Orchid). Release date: November 2023

Release Notes

New features

  • New Jupyter kernel and JupyterLab extension to manage PyCOMPSs in the Jupyter ecosystem (https://github.com/bsc-wdc/jupyter-extension).
  • Integration with the Energy Aware Runtime (EAR) to obtain energy profiles of Python-based applications (https://www.bsc.es/research-and-development/software-and-apps/software-list/ear-energy-management-framework-hpc).
  • Support for user-defined dynamic constraints based on task parameter values (see the sketch after this list).
  • GPU cache for PyTorch tensors.
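
For context on the dynamic-constraints feature above, the sketch below shows how constraints are attached to a task with the @constraint decorator, using the long-standing static form (a fixed value). The new dynamic form introduced in 3.3 reuses the same decorator but derives the value from the task parameter values; its exact syntax is described in the COMPSs documentation, so this example is only an orientation, not a demonstration of the new syntax.

from pycompss.api.constraint import constraint
from pycompss.api.task import task

# Static constraint: this task always requests 4 computing units.
# COMPSs 3.3 additionally allows constraint values that depend on the actual
# task parameter values (dynamic constraints); see the COMPSs documentation.
@constraint(computing_units="4")
@task(returns=1)
def heavy_sum(block):
    return sum(block)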

Improvements

  • The support for interactive Python and Jupyter notebooks has been extended to work in non-shared-disk environments.
  • Data transformations now support data conversion to directory types.
  • Workflow Provenance: new data persistence feature, new inputs and outputs terms to define data assets by hand, new sources term, improved common paths detection, and minimal YAML support.
  • Configuration files for Leonardo and Galileo HPC systems.
  • Several bug fixes.

Known Limitations

  • Dynamic constraints are limited to task parameters declared as IN that are not future objects (i.e., not generated by previous tasks).
  • Issues when using tracing with Java 14+. Java 17+ requires including the JVM flag "-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true".
  • Collections are not supported in HTTP tasks.
  • macOS support is limited to Java and Python, without CPU affinity (executions require --cpu_affinity=disable). Tracing is not available.
  • Reduce operations can consume more disk space than a manually programmed n-ary reduction.
  • Objects used as task parameters must be serializable.
  • Tasks that invoke NumPy and MKL may experience issues if different tasks use different MKL thread counts. This is because MKL reuses its threads across calls and does not change the number of threads from one call to another. The same can also happen with other libraries implemented with OpenMP. To avoid these issues, use the DLB option in the --cpu_affinity flag (see the example commands after this list).
  • C++ objects declared as arguments of coarse-grain tasks must be passed to the task methods as object pointers in order to have proper dependency management.
  • Master-as-worker is not working for C++ executions with the persistent worker.
  • Coherence and concurrent writing of parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
  • Calls to delete files that were used as input can produce a significant synchronization of the main code.
  • Declaring a parameter as OUT is only allowed for files and collections of files.
  • There is an issue with hwloc and Docker that can affect Python MPI workers. Fixing it requires upgrading the hwloc version used by the MPI runtime.
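
As a rough illustration of the affinity- and tracing-related workarounds mentioned above, the commands below sketch possible runcompss invocations. The flag values come from the notes above; the application name my_app.py is a placeholder, the dlb spelling of the affinity option and the use of --jvm_workers_opts for the Java 17+ flag are assumptions to be checked against the COMPSs documentation.

# Disable CPU affinity on macOS (Java/Python only, no tracing):
runcompss --cpu_affinity=disable my_app.py

# Let DLB manage CPU affinity when tasks use different MKL/OpenMP thread counts:
runcompss --cpu_affinity=dlb my_app.py

# Pass the JVM flag required for tracing with Java 17+ to the workers
# (assuming the --jvm_workers_opts option; check the documentation):
runcompss --tracing --jvm_workers_opts="-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true" my_app.py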

For further information, please refer to the COMPSs Documentation.

Check the Installation manual for details on how to install from the repository.

Docker image pull commands:

docker pull compss/compss:3.3
docker pull compss/compss-tutorial:3.3
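
A possible way to try the image after pulling it (generic Docker usage; the container contents and entrypoint are not detailed on this page):

# Start an interactive shell inside a disposable container based on the image.
docker run --rm -it compss/compss:3.3 /bin/bash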

Old Versions

3.2

3.1

3.0

2.10

2.9

2.8

2.7

2.6

2.5

2.4

2.3

2.2

2.1

2.0