BSC releases PyCOMPSs/COMPSs version 2.10 and dislib version 0.7.0

16 November 2021

The Barcelona Supercomputing Center offers COMPSs to the HPC community, a set of tools that helps developers efficiently program and execute their applications on distributed computational infrastructures. In particular, PyCOMPSs, the Python binding of COMPSs, is used as a means of convergence between HPC, AI and big data.

This COMPSs release includes support for REST services through HTTP request tasks, which enable a new type of workflow that combines traditional tasks and HTTP tasks. The mechanism can also be used with pre-deployed Function-as-a-Service (FaaS) tasks.

The Python binding (PyCOMPSs) comes with a new cache profiling feature that helps developers decide which parameters have a higher reuse rate and therefore benefit from being stored in the workers' cache.

The new dislib release includes, among other features, a blocked QR decomposition with three modes, parallel random forest regressors, and a matrix multiplication with in-situ transposed arguments.


The Workflows and Distributed Computing team at the Barcelona Supercomputing Center is proud to announce a new release, version 2.10 (codename Kumquat), of the programming environment COMPSs.

This version of COMPSs is the latest result of the team's work in recent years on the provision of a set of tools that helps developers program and execute their applications efficiently on distributed computational infrastructures such as clusters, clouds and container-managed platforms. COMPSs is a task-based programming model known for notably improving the performance of large-scale applications by automatically parallelizing their execution.

COMPSs has been available for the last years for the MareNostrum supercomputer and Spanish Supercomputing Network users, and it has been adopted in several research projects such as EUBra-BIGSEA, MUG, EGI, ASCETIC, TANGO, NEXTGenIO, I-BiDaaS, mF2C and CLASS. In these projects, COMPSs has been applied to implement use cases provided by different communities across diverse disciplines such as biomedicine, engineering, biodiversity, chemistry, astrophysics, finance, telecommunications, manufacturing and earth sciences. Currently, it is also under extension and adoption in applications in the projects AI-SPRINT, ExaQUte, LANDSUPPORT, the BioExcel CoE, PerMedCoE and ELASTIC, and in the Edge Twins HPC FET Innovation Launchpad project. It has also been applied in sample use cases of the ChEESE CoE. A special mention goes to the eFlows4HPC project, coordinated by the group and started in January 2021, which aims to develop a workflow software stack where one of the main components is the PyCOMPSs/COMPSs environment.

The new release includes support for REST services in the form of HTTP request tasks, both for Python and Java applications. The mechanism can also be used with pre-deployed Function-as-a-Service (FaaS) tasks. This support enhances COMPSs' previous web-service support, which was limited to SOAP web services and Java COMPSs applications. The syntax has been extended with annotations for HTTP tasks, together with the corresponding COMPSs runtime support. This functionality enables a new type of workflow that combines traditional tasks, whose code is provided by the application developer, with invocations to external HTTP tasks.
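As a rough illustration of what declaring an HTTP task looks like in Python, the sketch below uses the `@http` decorator introduced with this release; the decorator parameters shown (`service_name`, `request`, `resource`) follow the documented style but should be checked against the official manual, and the service name is hypothetical. A no-op fallback decorator lets the example run outside the COMPSs runtime, in which case the function body simply executes locally:

```python
# Sketch of an HTTP task declaration (PyCOMPSs >= 2.10 provides the real
# decorator in pycompss.api.http).  Outside the COMPSs runtime we fall back
# to a no-op decorator, so the body below runs as a plain local function;
# under the runtime, the call would be dispatched as an HTTP request to the
# pre-deployed service instead.
try:
    from pycompss.api.http import http  # real decorator, COMPSs >= 2.10
except ImportError:
    def http(**_kwargs):                # fallback: behave like a normal call
        def decorator(func):
            return func
        return decorator


@http(service_name="greet_service",          # hypothetical deployed service
      request="GET",
      resource="length/{{message}}")         # placeholder filled per call
def remote_length(message):
    # Local stand-in for what the remote endpoint would compute.
    return len(message)


print(remote_length("hello"))  # -> 5
```

The point of the mechanism is that `remote_length` can be mixed freely with ordinary `@task`-annotated functions in the same workflow, with the runtime handling data dependencies between them.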

The previous release, 2.9, provided support for a Python worker cache to overcome the serialization and deserialization overheads incurred by Python tasks. The current release extends this functionality with a profiling mechanism that supports the developers' decision about which task parameters should be stored in the cache and which should not. Since storing data in the cache comes with additional overhead, the profiler reports each parameter's reuse in terms of cache hits, enabling the developer to infer which ones will benefit from being cached.
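The trade-off the profiler exposes can be sketched in plain Python (this is a conceptual illustration, not the PyCOMPSs API): a parameter pays the deserialization cost once when first cached, and every subsequent access is a hit that avoids that cost, so only parameters with enough hits amortise the caching overhead:

```python
from collections import Counter

# Conceptual sketch of cache-hit profiling: count, per task parameter, how
# often a cached copy is reused.  Names and methods here are illustrative,
# not part of the PyCOMPSs interface.
class CacheProfiler:
    def __init__(self):
        self.hits = Counter()
        self.cache = {}

    def fetch(self, name, deserialize):
        """Return the parameter, counting a hit when it is already cached."""
        if name in self.cache:
            self.hits[name] += 1              # reuse: deserialization avoided
        else:
            self.cache[name] = deserialize()  # first use: pay the cost once
        return self.cache[name]

    def worth_caching(self, name, min_hits=1):
        """Heuristic: caching only pays off if the parameter is reused."""
        return self.hits[name] >= min_hits


profiler = CacheProfiler()
for _ in range(3):
    profiler.fetch("model_weights", lambda: [0.1, 0.2])  # fetched 3 times
profiler.fetch("one_shot_input", lambda: [42])           # fetched once

print(profiler.hits["model_weights"])            # 2 hits after 3 fetches
print(profiler.worth_caching("one_shot_input"))  # False: never reused
```

In PyCOMPSs the analogous report is produced by the runtime itself; the developer then annotates which parameters to keep in the workers' cache.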

Other enhancements include extended support for MPI tasks through the processes_per_node flag, which helps distribute processes across nodes; support for Java versions above 8; partial support for macOS systems; and PyArrow object serialization support.

COMPSs 2.10 comes with other minor new features, extensions and bug fixes. COMPSs had around 1,000 downloads last year and is used by around 20 groups in real applications. COMPSs has recently attracted interest from areas such as engineering, image recognition, genomics and seismology, where specific courses and dissemination actions have been performed.

The packages and the complete list of features are available on the Downloads page. A virtual appliance is also available to test the functionalities of COMPSs through a step-by-step tutorial that guides the user to develop and execute a set of example applications. Additionally, a user guide and papers published in relevant conferences and journals are available.

For more information on COMPSs please visit our webpage:

The group is also proud to announce the new release of dislib, version 0.7.0. The Distributed Computing Library (dislib) provides distributed algorithms ready to use as a library. So far, dislib focuses on machine learning algorithms, with an interface inspired by scikit-learn. The main objective of dislib is to facilitate the execution of big data analytics algorithms on distributed platforms, such as clusters, clouds, and supercomputers. Dislib is implemented on top of the PyCOMPSs programming model, the Python binding of COMPSs.

Dislib is based on a distributed data structure, the ds-array, which enables the parallel and distributed execution of the machine learning methods. The dislib library code is implemented as a PyCOMPSs application, where the different methods are annotated as PyCOMPSs tasks. At execution time, PyCOMPSs takes care of all the parallelization and data distribution aspects. The final dislib user code, however, is unaware of the parallelization and distribution aspects and is written as a simple Python script, with an interface very similar to that of scikit-learn. Dislib includes methods for clustering, classification, regression, decomposition, model selection and data management. A research contract with FUJITSU partially funded the dislib library, which was used to evaluate the A64FX processor. Currently, the dislib developments are co-funded at 50% by the European Regional Development Fund under the framework of the ERDF Operative Programme for Catalunya 2014-2020, by the H2020 AI-SPRINT project and by the EuroHPC eFlows4HPC project.
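The blocking idea behind the ds-array can be shown with plain NumPy (dislib's actual entry point is `ds.array(x, block_size=(rows, cols))` per its documentation; the helper below is a local stand-in): the array is split into a grid of blocks, each of which dislib would hand to a possibly remote worker as an independent task parameter:

```python
import numpy as np

# Conceptual illustration of the ds-array: a 2D array partitioned into a
# grid of blocks.  Here the blocks are plain NumPy views held locally; in
# dislib each block lives on a worker and methods operate on blocks as
# PyCOMPSs tasks.
def to_blocks(x, block_size):
    br, bc = block_size
    return [[x[i:i + br, j:j + bc]
             for j in range(0, x.shape[1], bc)]
            for i in range(0, x.shape[0], br)]


x = np.arange(36).reshape(6, 6)
blocks = to_blocks(x, (2, 3))      # 2-row by 3-column blocks

print(len(blocks), len(blocks[0]))  # 3 2  (a 3x2 grid of blocks)
print(blocks[0][0].shape)           # (2, 3)
```

Choosing the block size is the main tuning knob: larger blocks mean fewer, coarser tasks; smaller blocks expose more parallelism at the cost of more scheduling and transfer overhead.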

Since its recent creation, dislib has been applied in use cases in astrophysics (DBSCAN, with data from the GAIA mission) and in molecular dynamics workflows (Daura and PCA, BioExcel CoE). In the eFlows4HPC project, it is being applied in two use cases: urgent computing for natural hazards (random forest regressors) and digital twins for manufacturing (QR).

The new release 0.7.0 includes a parallel implementation of random forest regressors and a blocked QR decomposition with three modes: full, economic, and r. The QR implementation is based on Householder reflectors, with an approach inspired by Givens rotations that provides better parallelism. The new version also contains an implementation of a MinMax scaler for preprocessing the input data, new utility functions to pad matrices and remove last rows or columns, and a matrix multiplication with transposed arguments. Several performance improvements have also been made in other algorithms. In addition, all tasks implemented in dislib can from now on be configured with the number of cores to use.
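The three QR modes correspond to the standard dense QR variants, which NumPy can illustrate on a small matrix (dislib computes the same factors blockwise and in parallel on ds-arrays; NumPy's mode names `complete`/`reduced`/`r` are used here as local equivalents of full/economic/r):

```python
import numpy as np

# Shape of each QR mode on a tall 6x4 matrix, using NumPy as a local
# stand-in for dislib's blocked, parallel QR.
a = np.random.default_rng(0).standard_normal((6, 4))

q_full, r_full = np.linalg.qr(a, mode="complete")  # "full": Q 6x6, R 6x4
q_econ, r_econ = np.linalg.qr(a, mode="reduced")   # "economic": Q 6x4, R 4x4
r_only = np.linalg.qr(a, mode="r")                 # "r": only the 4x4 R factor

print(q_full.shape, r_full.shape)  # (6, 6) (6, 4)
print(q_econ.shape, r_econ.shape)  # (6, 4) (4, 4)
print(r_only.shape)                # (4, 4)
assert np.allclose(q_econ @ r_econ, a)  # both factorizations reconstruct a
```

The economic and r modes matter at scale: for a tall-skinny ds-array, they avoid materializing the large square Q factor, which is exactly the case in the manufacturing digital-twin use case mentioned above.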

Dislib 0.7.0 comes with other extensions and with a new user guide. The code is open source and available for download.


The Workflows and Distributed Computing team at the Barcelona Supercomputing Center aims to offer tools and mechanisms that enable the sharing, selection, and aggregation of a wide variety of geographically distributed computational resources in a transparent way. The research done in this team builds on the group's prior expertise, extending it toward the aspects of distributed computing that can benefit from it. The team at BSC has a strong focus on programming models and on resource management and scheduling in distributed computing environments.