IBM - BSC Collaboration

The collaboration between IBM and the Barcelona Supercomputing Center (BSC) dates back to 2000, and many milestones have been achieved since then: the installation of MareNostrum, the official creation of the Barcelona Supercomputing Center in 2005 and, most recently, the creation of the Technology Center for Supercomputing between BSC and IBM. The collaboration between researchers from IBM Research and BSC has resulted in a number of projects, joint developments and papers in international conferences and journals, and has contributed to clear advances in hardware and software solutions at both sites. This relationship will continue in the coming years as new initiatives and projects are developed.

  • 2000-2004: CEPBA-IBM Research Institute (CIRI)
  • 2004: Installation of the first MareNostrum, ranked 4th in the world Top500 list and first in the European Union
  • 2005: Official creation of the Barcelona Supercomputing Center and start of activities
  • 2005-2011: Research collaboration agreement - IBM Innovation Initiative at BSC
  • 2006: Upgrade of MareNostrum and creation of the Spanish Network for Supercomputing (RES)
  • 2007-2011: Research collaboration activities focused on the MareIncognito Project
  • 2012-2013: Installation of MareNostrum III
  • 2013-2015: Technology Center for Supercomputing, a BSC-IBM initiative

IBM - BSC: a successful collaboration since its creation

IBM and BSC have a 14-year collaboration history, with a strong partnership even before the creation of BSC. That history includes three main agreements. The first collaboration agreement ran from 2000 to 2004; since BSC was not created until 2005, the partnership at that time was between IBM and the European Center for Parallelism of Barcelona (CEPBA). CEPBA was a research and supercomputing center belonging to the Technical University of Catalonia (UPC) and became the core of the Barcelona Supercomputing Center once the latter was created. This collaboration agreement was named the CEPBA-IBM Research Institute (CIRI).

The mission was to contribute to the community through R&D in Information Technology, with four main objectives: performing research and development, supporting external R&D, technology transfer, and education. The main focus areas were Deep Computing (performance tools, parallel programming, grid, code optimization), Computer Architecture (vector and network processors), and Databases.

The second collaboration agreement ran from 2005 to 2011 and was established after the creation of BSC. The collaboration was named the “IBM Innovation Initiative at BSC” and extended for six years with the following goals:

  • Establish BSC-CNS as the first European center for open Linux application scaling
  • Collaborative work on basic research that is of mutual interest for BSC and IBM in the areas of HPC and computer architectures
  • Collaborative work on application scaling and benchmarking to maximize utilization of MareNostrum and expand its use in the commercial arena
  • BSC participation in order to more effectively drive future design and standards based on Power Architecture™ technologies
  • Enhance the collaboration to help accelerate technology advances
  • Transfer the education and skills output of the collaboration activities to facilitate technology transfer to BSC, IBM, and IBM’s customers

In 2007, research collaboration projects focused on enabling BSC to research the design and development of a new generation of petascale supercomputers. The code name for the project was MareIncognito.

2013 – 2015 Collaboration agreement

This new collaboration agreement, named the Technology Center for Supercomputing, was initiated in 2013 as a three-year agreement with the following main goals:

  • Conduct basic research of mutual interest to BSC and IBM in HPC, valuable in the future for IBM products, solutions and services
  • Educate and transfer HPC skills to academia, IBM, and IBM's customers through different kinds of activities supported by scholarships and grants
  • Finalize, disseminate and maintain a BSC HPC Service Catalog to develop the market for BSC's capabilities, aligned with the interests of IBM, in Spain and worldwide

The main collaboration goals focused on research into technologies to build the next generation of supercomputers, on new technologies for the Smarter Cities field and on biotechnology. Both institutions provided resources to the collaboration activities to fulfill those goals.

IBM – BSC Technology Center for Supercomputing projects (current projects)

There are presently a number of active collaboration projects between the two institutions. A description of each of them follows:

- High-performance In-memory Databases

The project aims to explore the use of high performance in-memory databases for the Blue Gene Active Storage (BGAS) architecture to accelerate existing workloads. Active storage, implemented in prototype form in the BGAS project, could dramatically change the way workflows in scientific applications manage data:

  • in the Bio-Informatics domain, eliminating the bottleneck created by the use of flat files to store data and making data pipelining easier
  • in the data analytics domain, introducing scalable active data repositories accessible locally or remotely.

In both cases, new algorithms and approaches to large scale data management problems will be studied and proposed based on the large all-to-all bandwidth (and corresponding bandwidth to local solid-state storage) available in BGAS prototypes combined with a very large in-memory storage capacity.

- Software Defined Environments for HPC Workloads

The project will explore the applicability of so-called "Software Defined Environments" (SDEs) to HPC workloads. The project extends work previously done for transactional and data analytics workloads to HPC workloads.

Workloads are described in a declarative way to capture their requirements, and resource abstractions and management technologies are being developed to enable such workloads to be deployed and optimized in an SDE cloud environment. The need for SDEs is justified by the fact that the use of specialized hardware (e.g. FPGAs and GPUs) and fast network interconnects is increasingly common in all kinds of workloads, whether or not they are traditionally considered HPC. A related angle of exploration is therefore how HPC technologies can impact modern workloads, and how HPC architectures can be made available to these workloads in the context of SDEs. Resource abstractions will be evaluated and developed to enable the optimized deployment of HPC workloads; these abstractions will be implemented within the OpenStack platform.
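To make the declarative idea concrete, here is a minimal sketch (all names and fields are hypothetical; the project's actual OpenStack-based abstractions are not described here) in which a workload's requirements are captured as data and matched against the node flavors a cloud offers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkloadSpec:
    """Declarative requirements of a workload (hypothetical schema)."""
    cores: int
    gpus: int = 0
    fpga: bool = False
    min_link_gbps: int = 10  # required interconnect bandwidth

@dataclass(frozen=True)
class NodeFlavor:
    """A resource abstraction a cloud could offer (hypothetical)."""
    name: str
    cores: int
    gpus: int
    fpga: bool
    link_gbps: int

def placeable(spec: WorkloadSpec, flavor: NodeFlavor) -> bool:
    """Check whether a flavor satisfies a declarative spec."""
    return (flavor.cores >= spec.cores
            and flavor.gpus >= spec.gpus
            and (flavor.fpga or not spec.fpga)
            and flavor.link_gbps >= spec.min_link_gbps)

flavors = [
    NodeFlavor("general", cores=16, gpus=0, fpga=False, link_gbps=10),
    NodeFlavor("hpc-gpu", cores=32, gpus=2, fpga=False, link_gbps=100),
]
hpc_job = WorkloadSpec(cores=24, gpus=1, min_link_gbps=40)
matches = [f.name for f in flavors if placeable(hpc_job, f)]
print(matches)  # ['hpc-gpu']
```

A real SDE would of course go further, optimizing placement rather than merely filtering, but the separation between a declarative spec and the resources that satisfy it is the core of the approach.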

- Cognitive deep learning with HPC tools

Deep learning technology has recently achieved human-like perception on visual inputs. The goal of this project is to allow other fields to benefit from these capabilities by extracting and exploring the internal representations of deep learning networks. The knowledge captured within deep learning networks is vast, and its analysis requires high-performance computing (HPC) tools and infrastructure. Processing thousands of images, extracting the millions of features defining each image, and mining their patterns are among the main tasks of this project, as is doing so efficiently and in parallel. A parallelization using the PyCOMPSs programming model will be produced within this project and evaluated on the MareNostrum supercomputer.
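PyCOMPSs expresses this by annotating Python functions as tasks that its runtime distributes across nodes. Purely as a stand-in using only the standard library (the feature extractor below is a hypothetical placeholder, not the project's actual network), the embarrassingly parallel map-then-mine pattern looks like:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image: list[float]) -> list[float]:
    """Stand-in for per-image feature extraction
    (a real pipeline would run a deep network here)."""
    mean = sum(image) / len(image)
    return [min(image), mean, max(image)]

def mine_patterns(features: list[list[float]]) -> float:
    """Stand-in reduction over the extracted features."""
    return sum(f[1] for f in features) / len(features)

images = [[float(i), float(i + 1), float(i + 2)] for i in range(8)]

# Each extraction is independent, so the map step parallelizes cleanly;
# PyCOMPSs would schedule such task invocations across cluster nodes.
with ThreadPoolExecutor() as pool:
    features = list(pool.map(extract_features, images))

print(mine_patterns(features))  # 4.5
```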

Additionally, various knowledge representation solutions will be explored, such as sparse vectors and large-scale graphs. Each of these representations allows different exploitation algorithms, which may produce different types of reasoning (e.g., vector arithmetic, graph mining algorithms).
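As a small illustration of the sparse-vector option (a generic sketch, not the project's actual representation), feature activations can be stored as index-to-value maps so that arithmetic touches only the non-zero entries:

```python
def sparse_add(u: dict[int, float], v: dict[int, float]) -> dict[int, float]:
    """Sum of two sparse vectors stored as {index: value} maps."""
    out = dict(u)
    for i, x in v.items():
        out[i] = out.get(i, 0.0) + x
    return {i: x for i, x in out.items() if x != 0.0}

def sparse_dot(u: dict[int, float], v: dict[int, float]) -> float:
    """Dot product; iterate over the smaller map for efficiency."""
    if len(u) > len(v):
        u, v = v, u
    return sum(x * v.get(i, 0.0) for i, x in u.items())

# Two hypothetical feature activations extracted from a network layer.
a = {3: 1.0, 10: 2.0}
b = {10: 0.5, 42: 4.0}
print(sparse_add(a, b))  # {3: 1.0, 10: 2.5, 42: 4.0}
print(sparse_dot(a, b))  # 1.0
```

The same data could instead be loaded into a graph, where edges between co-active features enable the graph-mining style of reasoning mentioned above.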

- OmpSs for Asynchronous Algorithms

The project aims at porting a set of parallel scientific applications, provided by the Computational Sciences group of the IBM Research Zurich Lab, to the OmpSs/Nanos++ programming model. The project targets the particular hardware features of the Blue Gene/Q platform (such as, but not limited to, RDMA and transactional memory) as well as software features of the lightweight compute-node operating system (CNK). The main goals are to reduce the overhead of the OmpSs runtime and eventually to incorporate these features fully, for example to implement prefetching or to overlap data transfers and computation. The applications are likely to benefit from the locality awareness of OmpSs and the irregular or asynchronous forms of parallelism it supports. These characteristics allow for additional asynchronicity in the execution of parallel tasks (compared to OpenMP) and lower bandwidth requirements. As a result, an application's tolerance for network or memory latency increases, which is an interesting property for the target platform.

The collaboration is envisaged as a co-design exercise between IBM and BSC, where the experiences and quantitative analysis gathered while developing the MPI+OmpSs applications drive the incorporation of the BG/Q's hardware features into OmpSs, and into the project as a whole. Ultimately, these efforts should deliver an improved version of OmpSs tailored for the BG/Q, OmpSs implementations of the aforementioned algorithms, a thorough analysis of their performance, and a comparison with the original runtime and applications.
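OmpSs itself is a pragma-based model for C/C++/Fortran. Purely as an illustration of the dataflow idea behind it (tasks declare their input dependencies and run asynchronously as soon as those are produced), here is a toy Python sketch; the scheduler class and all names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor, Future

class Dataflow:
    """Toy dataflow scheduler: a task starts immediately but blocks only
    on the futures producing its inputs, so independent tasks overlap.
    Illustrative only; OmpSs does this at the pragma level."""
    def __init__(self) -> None:
        self.pool = ThreadPoolExecutor()

    def task(self, fn, *deps: Future) -> Future:
        def run():
            args = [d.result() for d in deps]  # wait on declared inputs only
            return fn(*args)
        return self.pool.submit(run)

df = Dataflow()
a = df.task(lambda: 2)                 # independent producer
b = df.task(lambda: 3)                 # runs concurrently with a
c = df.task(lambda x, y: x + y, a, b)  # consumes a and b
d = df.task(lambda x: x * 10, c)       # chained dependency
print(d.result())  # 50
```

The appeal for a platform like BG/Q is exactly this decoupling: because tasks synchronize only on their data, communication and computation can overlap without a global barrier after every phase.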

- OmpSs for Power8/Nvidia platforms

The project aims at leveraging the OmpSs programming model, developed at BSC, for platforms based on the Power8 architecture with Nvidia GPUs to build extremely powerful nodes. Language extensions on top of OpenMP 4.0, compiler optimizations (demonstrated in the MACC compiler) and intelligent runtime systems (demonstrated in the Nanos runtime) will be combined to get the most out of these heterogeneous architectures. The proposed extensions and new features will be evaluated with relevant applications.

- Adaptive resource management for Power architectures

This research will focus on adaptive resource management for improvement of power-performance metrics associated with current and future POWER-series microprocessors. Both hardware-only and runtime-aided adaptive control systems will be pursued.

The collaboration will pursue the development of new adaptive algorithms to exploit prefetching enhancements in current and future POWER architectures, and generalized concepts in cross-layer co-optimization for improving power-performance metrics in future POWER systems. Proposals on new hardware requirements to support the development of a new generation of hardware-software co-managed adaptive systems will also be pursued in this collaboration.

- Urban Planning Tool

The project aims at the creation of a declarative semantic data model that captures the physical and information flows of urban systems, including land cover, mobility and services, public space, urban organization, management of metabolic fluxes (energy, water, waste and materials, food), biodiversity and social cohesion.

This new model, known as Urban Model and Organization (SUMO), aims to represent, not constrain, the domain, and to serve as a reference and common vocabulary for Smart Cities applications.

IBM – BSC Technology Center for Supercomputing projects (past projects)

The following projects were also carried out under the 2013 - 2015 collaboration agreement:

- Evaluation and Integration of IBM OMPT into BSC performance tools

In 2012 the OpenMP language committee created a working group focused on tools, composed of compiler, debugging, and performance tool developers. The aim of the working group is to define a unique, standard mechanism for capturing OpenMP runtime activity that can be used by performance analysis tools, somewhat equivalent to the PMPI layer of the MPI parallel programming model.

The target of this project was to study the applicability of the OpenMP proposal to Extrae, prototype it, and compare it with the current Extrae instrumentation approach. This work allowed IBM to improve the OpenMP performance tooling interface in its IBM XL compilers, and allowed BSC to prototype its performance monitoring tool against the new performance interface. The experience of implementing the interface and applying it to real cases, while the proposal was still at the design stage, helped to validate that the proposed interface was sufficient and would require no further changes.

Both IBM and BSC have as a long-term goal in this area the development of better performance tools for the large base of IBM customers. Partnering with BSC to target its tool at the new performance interface enabled IBM to grow the pool of performance tools ready for a broad base of users.

- Applicable research to Interconnection Networks

This project performed research on interconnection networks for high-performance computing (HPC) systems targeted at scientific/technical computing on the one hand and data-intensive workloads on the other. The project explored various research directions at the system level, focused on reducing inter-application contention between parallel applications on capacity supercomputers and on increasing the performance of large-scale parallel capability systems. The most relevant achievements are listed below:

  • Proposal of architectural advances, including topologies, routing, flow control, deadlock avoidance and related topics.
  • Evaluation of the impact of the OmpSs programming model on interconnection networks.
  • Design, development and testing of network and system software mechanisms to reduce inter-application contention on real machines based on IBM iDataPlex and InfiniBand technologies, exploring the use of real-time analysis of network utilization on production machines.

- Resilience compiler support

This project aimed at the development of compiler technologies to support the characterization of application resilience. New developments in the LLVM compiler were made to dynamically control fault injection mechanisms, to identify resilient regions and to enable resilience-aware code generation. This was a coordinated approach between the compiler and the runtime system to avoid unacceptable overheads and allow satisfactory dynamic behavior. The implementation was evaluated using microbenchmarks.