Autonomic Systems and e-Business Platforms


The goal of our research group is to explore the future of computing by performing high-level research on today's eBusiness applications, which face critical IT challenges in areas such as Cognitive Computing, Big Data, Cloud Computing, Business Analytics, High Performance Computing (Supercomputing) and Sustainable Computing.


The group conducts research on autonomic and intelligent resource management policies based on self-management strategies as a way to improve computer middleware layers.

In these research fields the team produces top-tier research publications as well as software components and resource management policies that can be applied at the middleware level to improve middleware adaptability, efficiency and productivity.

The group targets execution platforms composed of high-productivity heterogeneous multi-core systems with accelerators and advanced storage architectures deployed in large-scale distributed environments. The group is also working on the design of novel algorithms to enable massive-scale data and video analytics in large-scale clusters running a variety of middleware.

Research Lines: 

In order to manage the broad scientific agenda of the group, the work is organized into four activities:

Data-centric Computing Activity:

The focus of this area is to accelerate the processing of data-driven workloads, including large-scale analytics as well as stream processing, in heterogeneous execution frameworks. The work focuses on the following research topics:

  • Exploring systems and software strategies for leveraging high-performance in-memory key/value databases to accelerate data-intensive tasks. The work adopts the Scalable Key/value Store (SKV) project as the key/value store.
  • Building clouds which, through programmability at multiple layers and the embracing of hardware heterogeneity, can host a variety of workloads and optimize resource configuration for those workloads. The core platform used for prototyping is OpenStack.
  • Developing mechanisms for the automated characterization of the cost-effectiveness of Big Data deployments, such as Hadoop, to explore how runtime performance, and therefore price, is critically affected by relatively simple software and hardware configuration choices. The group architected and maintains the Aloja portal.
  • Exploring novel architectures for emerging IoT stream processing platforms, which provide data stream composition, transformation and filtering in real time. The group architected and maintains the servIoTicy platform.
  • Building hardware prototypes (Minerva) as a group platform for running Big Data workloads, exploring how to accelerate computation while keeping the cost of the prototype low by leveraging commodity hardware on the back end and high-end components on the front end.
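The first topic above, using an in-memory key/value store to accelerate data-intensive tasks, can be sketched as follows. This is an illustrative stand-in, not the actual SKV API: the `InMemoryKV` class and its `get`/`put` methods are assumptions made for the example.

```python
# Sketch: an in-memory key/value store used as a cache in front of an
# expensive data-intensive computation. InMemoryKV is a hypothetical
# stand-in for a real store such as SKV.

class InMemoryKV:
    """Minimal dict-backed key/value store (illustrative only)."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

def expensive_aggregate(records):
    """Placeholder for a data-intensive task (e.g. a large aggregation)."""
    return sum(records)

def cached_aggregate(kv, key, records):
    """Serve from the key/value store when possible, compute otherwise."""
    hit = kv.get(key)
    if hit is not None:
        return hit
    result = expensive_aggregate(records)
    kv.put(key, result)
    return result

kv = InMemoryKV()
first = cached_aggregate(kv, "2015-01", [1, 2, 3])   # computed and stored
second = cached_aggregate(kv, "2015-01", [1, 2, 3])  # served from the store
```

A real deployment would replace the dict with a distributed store and add serialization and eviction, but the access pattern (check the store, compute on miss, write back) is the same.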

For more information you can contact the activity leader:  CARRERA, DAVID

Involved Team:


Energy-aware Computing Activity: 

The goal of this area is to develop management algorithms for virtualised Data Centres in a large-scale distributed ecosystem running heterogeneous workloads, optimizing their operation with respect to energy and ecological efficiency. The work in this area is grouped in the following main lines:

  • Models for the assessment and forecasting of energy and ecological efficiency in a virtualised Data Centre at different levels
  • Policies for the optimization of the scheduling and placement of Virtual Machines (VMs) in physical nodes considering the energy and ecological efficiency factors
  • Policies for the selection of a Data Centre for remote placement of Virtual Machines (VMs) in a Data Centre ecosystem considering the energy and ecological efficiency factors
  • Integration of the cooling and power supply subsystems in the energy management strategy of Data Centres
  • Integration of renewable energy sources in the energy management strategy of Data Centres       
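A common baseline for the VM placement line above is consolidation: packing VMs onto as few physical nodes as possible so idle nodes can be powered down. The sketch below is a generic first-fit-decreasing heuristic, not the group's actual policy, and the capacities and demands are invented numbers.

```python
# Sketch (not the group's actual policy): first-fit-decreasing placement
# that consolidates VMs onto as few physical nodes as possible, a classic
# baseline for energy-aware scheduling.

def place_vms(vm_demands, node_capacity):
    """Return a list of nodes, each a list of (vm_id, demand) tuples."""
    nodes = []
    # Place the largest VMs first; each VM goes to the first node it fits on.
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for node in nodes:
            if sum(d for _, d in node) + demand <= node_capacity:
                node.append((vm_id, demand))
                break
        else:
            nodes.append([(vm_id, demand)])  # power on a new node
    return nodes

# Illustrative demands (e.g. CPU cores) on nodes with capacity 6:
demands = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 2, "vm5": 1}
placement = place_vms(demands, node_capacity=6)
active_nodes = len(placement)  # fewer active nodes -> lower energy use
```

The policies researched in this activity go beyond this baseline by also weighing ecological factors, cooling, and the energy mix, but bin-packing style consolidation is the usual starting point.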

For more information you can contact the activity leader: GUITART, JORDI 

Involved Team:


Data-driven Scientific Computing Activity: 

The goal of this area is to design resource management strategies for Big Data applications, defining policies that enable distributed data stores to meet high-level performance goals. We focus on scientific applications, such as those from the life sciences domain, whose data generation and access patterns bound both precision and performance. In the coming years the main work of this research activity will be developed as part of the following main threads:

  • Proposing novel resource management strategies, such as a query-driven data model, which focuses on adapting the data model to the particular type of accesses performed by the applications. We also aim to consider the intrinsic characteristics of continuous data streaming with real-time requirements. This kind of environment also raises the challenge of defining an execution framework able to digest such input data streams.
  • Creating a set of plugin modules based on our research results to be added to state-of-the-art open source NoSQL platforms. After this integration, the comprehensive software package will be integrated into the BSC Big Data tools that the BSC Computer Sciences department will develop jointly with our research group and the Storage and Grid groups.
  • Hecuba: a project that aims to design and develop strategies that help programmers use data stores efficiently for Big Data applications. For example, we will provide programmers with a software layer that decouples data models from data layouts.
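The idea behind the last point, decoupling the data model from the data layout, can be illustrated with a small sketch. This is not Hecuba's actual API: the `ColumnBackedTable` class is an assumption built for the example, showing a dict-like programmer's view over a column-oriented layout such as a NoSQL backend might use.

```python
# Sketch (illustrative, not Hecuba's API): the programmer sees a plain
# dict-like object, while the class stores values column-wise.

class ColumnBackedTable:
    """Dict-like data model over a column-oriented layout."""
    def __init__(self, columns):
        # Layout: one {key: value} map per column.
        self._columns = {name: {} for name in columns}

    def __setitem__(self, key, row):
        # Data model: the programmer writes whole rows.
        for name, value in row.items():
            self._columns[name][key] = value

    def __getitem__(self, key):
        # Data model: the programmer reads whole rows.
        return {name: col[key] for name, col in self._columns.items()}

    def column(self, name):
        """Layout-aware access: scan one column without touching the rest."""
        return self._columns[name]

t = ColumnBackedTable(["temp", "pressure"])
t["sample1"] = {"temp": 21.5, "pressure": 1.0}
t["sample2"] = {"temp": 19.0, "pressure": 1.1}
row = t["sample1"]        # the programmer's view: a whole row
temps = t.column("temp")  # the layout's view: a single column
```

Because the layout is hidden behind the data model, the storage layer is free to choose row-oriented, column-oriented, or partitioned layouts to match the application's access pattern.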

For more information you can contact the activity leader: BECERRA, YOLANDA 

Involved Team:


Big Data Analytics Computing Activity: 

The work in this activity has two main areas:

  • The goal of the Data Analytics Algorithms area is to develop new algorithms for big data analytics in large-scale clusters running a variety of middleware, such as Spark. The work in this area is grouped in 2 main research lines: (1) Large-scale Bayesian Learning on Location-Based Social Networks, (2) New deep neural network algorithms.
  • The goal of the Multimedia Big Data Computing area is to design novel big data distributed computing systems to enable massive-scale image and video analytics. The work in this area is grouped in 3 main lines: (1) Large-scale visual concept detection and annotation, (2) High-performance and scalable indexing of massive-scale image and video collections, and (3) Multimedia big data analytics (recommendation, trend detection and latent user attribute inference). The coordinator of this area is associate professor RUBEN TOUS.
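The indexing line above can be illustrated with the basic structure behind concept-based retrieval: an inverted index from detected visual concepts to the videos that contain them. The sketch is generic (not the group's system), and the annotations are invented.

```python
# Sketch (illustrative, not the group's system): an inverted index mapping
# detected visual concepts to video identifiers, the core structure behind
# scalable annotation and retrieval.
from collections import defaultdict

def build_index(annotations):
    """annotations: {video_id: set of detected concept labels}."""
    index = defaultdict(set)
    for video_id, concepts in annotations.items():
        for concept in concepts:
            index[concept].add(video_id)
    return index

def query(index, concepts):
    """Return videos annotated with all of the requested concepts."""
    result_sets = [index.get(c, set()) for c in concepts]
    return set.intersection(*result_sets) if result_sets else set()

# Hypothetical concept-detection output for three videos:
annotations = {
    "v1": {"car", "street"},
    "v2": {"car", "beach"},
    "v3": {"street", "person"},
}
index = build_index(annotations)
hits = query(index, ["car", "street"])
```

At massive scale the index would be sharded across a cluster and the concept labels produced by detection models, but the lookup structure is the same.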

Involved Team:



Visiting researchers & Master/PhD students: 

If you are a researcher interested in spending a sabbatical period at BSC collaborating with our research group, or a student with funding or a grant looking for a group to host your PhD or master thesis, you can contact any of the area lead scientists or our research group manager, who will be delighted to help you.


Current involved projects:

  • COMPOSE (Collaborative Open Market to Place Objects at your Service) (2012-2015). COMPOSE is a FP7-ICT-2011.1.2 (ref.  317862) EU Funded Project, coordinated by IBM Haifa (IL) with the following partners: CREATE-NET (IT), Fraunhofer Institute FOKUS (DE), The Open University (UK), Barcelona Supercomputing Center (ES), INNOVA S.p.A (IT), University of Passau (DE), U-Hopper (IT), GEIE ERCIM (W3C) (FR), Fundació Privada Barcelona Digital Centre Tecnològic (Bdigital) (ES), Abertis Telecom (ES), and EVRYTHNG (CH). COMPOSE  aims at enabling new services that can seamlessly integrate real and virtual worlds through the convergence of the Internet of Services (IoS) with the Internet of Things (IoT). COMPOSE will achieve this through the provisioning of an open and scalable marketplace infrastructure, in which smart objects are associated to services that can be combined, managed, and integrated in a standardised way to easily and quickly build innovative applications.


  • BSC - IBM BGAS SoW (2013-2016) is a joint research project between researchers at Barcelona Supercomputing Center (BSC) and the  "Scalable Data Centric Computing" group at IBM Research - Watson Lab. This project aims at exploring systems and software strategies for leveraging in-memory key/value databases to accelerate data intensive tasks, with particular attention to the IBM BlueGene Active Storage (BGAS) architecture and the Scalable Key/value Store (SKV) as the key/value store.


  • BSC - IBM Heterogeneous Clouds SoW (2013-2016) is a joint research project between researchers at Barcelona Supercomputing Center (BSC) and the  "Middleware and Virtualization Management" group at IBM Research - Watson Lab. This is a project focused on building clouds which via their programmability at multiple layers and the embracing of hardware heterogeneity can host a variety workloads and can optimize resource configuration for these workloads. The project will explore the applicability of the so-called "Software Defined Environments (SDE)" to HPC workloads as it has been previously done with transactional and data analytics workloads.


  • ALOJA (2014-2016) is a project funded by Microsoft Research through the BSC-Microsoft Research Centre that aims to provide automated optimization of Hadoop's performance under different hardware deployment options and software parameters. We are also exploring new hardware architectures, both on-premises and in the cloud (either IaaS or PaaS), and the best configuration option for a given Hadoop job (or job type). Part of the project includes a public (vendor-neutral) Web platform with a repository of Hadoop benchmarks and data analysis tools. We currently have over 4500 Hadoop benchmark executions, both from our local clusters and from Azure (IaaS), on which we base our research.
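The cost-effectiveness comparison that ALOJA automates can be sketched very simply: for each candidate deployment, cost is the price per hour times the measured runtime, and configurations are ranked by that cost. The configuration names, prices, and runtimes below are invented for illustration.

```python
# Sketch of an ALOJA-style cost-effectiveness ranking. The benchmark
# figures are hypothetical; a real comparison uses measured runtimes.

def cost_effectiveness(benchmarks):
    """benchmarks: list of (config_name, price_per_hour, runtime_hours).

    Returns (config_name, total_cost) pairs, cheapest first.
    """
    ranked = sorted(benchmarks, key=lambda b: b[1] * b[2])
    return [(name, price * hours) for name, price, hours in ranked]

# Hypothetical benchmark results for the same Hadoop job:
runs = [
    ("on-premise-hdd", 0.50, 4.0),  # cheap nodes, slow disks
    ("azure-iaas-ssd", 1.20, 1.5),  # pricier nodes, faster run
    ("azure-paas",     2.00, 1.0),
]
ranking = cost_effectiveness(runs)
best_config, best_cost = ranking[0]
```

The point the example makes is the one in the text: a pricier deployment can still be the most cost-effective if the configuration shortens the run enough.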


  • EuroServer (FP7-ICT-2013-10 European Project, Grant Agreement no: 610456): Green Computing Node for European Micro-servers. Goal: Design and build a drastically improved energy- and cost-efficient solution suitable across both cloud data-centres and embedded application workloads. Our contribution: Optimise the local placement of Virtual Machines (VMs) within a physical node in a single Data Centre aiming for energy efficiency, by exploiting ARM low-power architectures.


  • RenewIT (FP7-SMARTCITIES-2013 European Project, Grant Agreement no: 608679): Advanced concepts and tools for renewable energy supply of IT Data Centres. Goal: Develop a simulation tool to evaluate the energy performance of different technical solutions that integrate renewable energy supply in IT Data Centres. Our contribution: Optimise both the local placement of Virtual Machines (VMs) in physical nodes and the selection of a Data Centre for remote placement of VMs aiming for ecological efficiency, by exploiting the usage of green energy and the interaction with energy supply and cooling systems.


  • ASCETiC (FP7-ICT-2013-10 European Project, Grant Agreement no: 610874): Adapting Service lifeCycle towards EfficienT Clouds. Goal: Definition and integration of explicit measures of energy and ecological requirements into the design and development process for software. Our contribution: Optimise both the local placement of Virtual Machines (VMs) in physical nodes and the selection of a Data Centre for remote placement of VMs aiming for energy efficiency, by focusing on the interaction and information exchange among Cloud layers during the whole service lifecycle for better optimization.


  • Severo Ochoa Distinction (January 2012 - January 2016): The Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS) has been accredited as a Severo Ochoa Centre of Excellence, the award with which the Spanish Ministry recognizes leading research centres in Spain and international reference organisations in their respective areas. The award will enable the execution of an ambitious research project which involves designing the hardware, software and applications to provide future solutions to the social challenges arising in health and climate change. Our contribution: providing the applications with resource management strategies according to their data management requirements and contributing to the design and development of an integrated software stack to support the execution of the applications.


  • Lightness (FP7 - Future Networks - 2012): Low latency and hIGH Throughput dynamic NEtwork infraStructures for high performance datacentre interconnects. Goal: design, implementation and experimental evaluation of a high-performance network infrastructure for data centres, where innovative photonic switching and transmission solutions are deployed. Our contribution: analysis of the network usage of Big Data applications and detection of how optical networks can benefit the performance of such applications.


  • BSC-CA: The main goal of this project is to provide methods, a decision support system, an open source IDE and a run-time environment for the high-level design, early prototyping, semi-automatic code generation, and automatic deployment of applications on multi-Clouds with guaranteed QoS. Our contribution: create an automatic text-analysing tool able to extract QoS information about different cloud providers using public information obtained from different websites, such as Stack Overflow.


Previous involved projects:

  • IBM SOW-Active Storage Fabrics (ASF) is a collection of components that surround a parallel in-memory database (PIMD). PIMD is a parallel client, parallel server, key/value object store. This research is part of the MareIncognito research framework between IBM and BSC.
  • OPTIMIS aims at optimizing cloud services using techniques that take advantage of an architectural framework and a development toolkit that take trust, risk, eco-efficiency, cost and legal issues into account. Our group contributes in the self-management of Cloud infrastructures using business information.
  • Barrelfish project, which is a new research operating system being built from scratch to explore how to structure an OS for future multi- and many-core systems. The design principles of Barrelfish are motivated by two closely related trends in hardware design: first, the rapidly growing number of cores, which leads to a scalability challenge, and second, the increasing diversity in computer hardware, requiring the OS to manage and exploit heterogeneous hardware resources.
  • VENUS-C is focused on developing and deploying a Cloud Computing service for research and industry communities in Europe by offering an industrial-quality service-oriented platform based on virtualization technologies. Our group contributes with tools that allow user scenarios to exploit the facilities of Cloud infrastructures.
  • NUBA project (Normalized Usage of Business-oriented Architectures) (2009-2012). NUBA is a strategic research  program (MITyC TSI-020301-2009-30) funded by the Avanza2  R&D Plan of the Spanish Ministry of Industry, Tourism and Trade and coordinated by  Telefonica I+D with 8 partners. The aim of NUBA is to advance the state-of-the-art in business models and technology for the real-time deployment  of federated Cloud platforms, integrating infrastructure from different providers, to execute elastic  business services with  the required QoS and minimizing the energy consumption.





Publications:

Carrera D, Guitart J, Torres J, Ayguadé E, Labarta J. Complete Instrumentation Requirements for Performance Analysis of Web based Technologies. IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS'03). 2003:166-175.

Guitart J, Torres J, Ayguadé E, Bull M. Performance Analysis of Parallel Java Applications on Shared-memory Systems. 30th International Conference on Parallel Processing (ICPP'01). 2001:357-364.

Guitart J, Martorell X, Torres J, Ayguadé E. Efficient Execution of Parallel Java Applications. 3rd Annual Workshop on Java for High Performance Computing. 2001:31-35.

Guitart J, Torres J, Ayguadé E, Oliver J, Labarta J. Java Instrumentation Suite: Accurate Analysis of Java Threaded Applications. 2nd Annual Workshop on Java for High Performance Computing. 2000:15-25.