In 2014, the collaboration with the European Space Agency (ESA) was structured around four projects. 1) ESA NPI (architectural solutions for the timing predictability of next-generation multi-core processors), in which a number of memory controller and bus architectures were trialled to enable Worst Case Execution Time (WCET) analysis of time-critical space applications in a multi-core execution environment such as the New Generation Multi-Core Processor (NGMP), attaining accurate WCET estimates for critical tasks and high average performance for non-critical ones. 2) ESA PROARTIS for SPACE, integrating software randomisation techniques into real space setups (i.e. operating system, application and hardware) used by ESA and its system providers. 3) ESA HAIR, developing several timing models to be integrated as part of a virtual machine for the NGMP. 4) ESA PMCs (Multi-Core Architectures – Cache Structure Optimisation for better RT Performance), focusing on the analysis and proposal of new performance monitoring counter support for the NGMP, with the goal of better capturing how tasks interact and are delayed when accessing the NGMP's hardware shared resources.
The Botín Foundation is helping to establish a spin-off company, NOSTRUM DRUG DISCOVERY, to commercialise technologies developed by the Life Sciences department.
The company aims to develop a drug-design simulation platform to reduce the need for clinical trials of new drugs.
IBERDROLA and BSC-CNS are jointly developing a major R&D&I initiative known as the SEDAR Project (High Resolution Wind Simulation). SEDAR is an innovative project aimed at developing a new computer model to improve estimates of electrical energy production in wind farms before their construction. Current models are significantly limited by their calculation times and by the resolution of their physical models, and this project seeks to overcome these shortcomings through the use of supercomputing techniques. The software developments in SEDAR are based on the Alya software platform developed at BSC-CNS. Current work focuses on introducing greater complexity into the physical models simulated by Alya, with the objective of obtaining a robust short-term power production forecast tool.
During 2014, the 3-year collaboration with IBM established in 2013 was continued.
A number of Joint Study Agreements (JSAs) were executed with the Watson Research Laboratory: high-performance in-memory databases; software-defined environments for HPC workloads; adaptive resource management for power; OmpSs @ P8/GPU; resilience compiler support and performance API for OpenMP; and smart cities. Further JSAs were conducted with the Zurich Research Laboratory: the OmpSs programming model for asynchronous applications, and applied research on interconnection networks.
The main objective of the Intel-BSC Exascale Laboratory is to conduct research activities on the novel programming models and prediction tools that will be needed to exploit extraordinary levels of parallelism in future Intel architecture-based supercomputers consisting of millions of cores. During 2014 the collaboration mainly focused on performance analysis and prediction for HPC code targeting these future exascale systems; transparent support for heterogeneity in the OmpSs programming model; dynamic load balancing (DLB) in hybrid MPI/OmpSs applications; and fault tolerance transparently managed by highly scalable parallel run-time systems (OmpSs).
Established in 2014, the Microsoft-BSC Research Centre targets Big Data topics and, in particular, the development of performance models for large-scale data analytics frameworks, initially focusing on Hadoop ecosystems.
With this objective in mind, researchers at BSC-CNS have teamed up with computer scientists at Microsoft Corporation and Microsoft Research in Redmond (US) to develop automated optimisation of the performance of Hadoop infrastructure deployments. The goal is to explore upcoming hardware architectures for Big Data processing and to reduce the total cost of ownership (TCO) of running Hadoop clusters by creating the most comprehensive open public Hadoop benchmarking repository. The research compares not only software configuration parameters but also current and newly available hardware, including SSDs, InfiniBand networks and Cloud services, while evaluating the TCO of each possible setup along with its running time in order to offer a recommendation. This analysis serves as a reference guide for designing new Hadoop clusters, exploring parameter relationships and reducing the TCO of existing data processing infrastructures. Ultimately, the Centre will develop automated learning mechanisms for the cost-effective characterisation of Hadoop workloads.
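The TCO-versus-runtime comparison described above can be sketched as follows. This is a minimal illustration, not the Centre's actual tooling: the cluster setups, hourly prices and job runtimes are invented, and a real evaluation would amortise hardware purchase cost and cover many more configuration parameters.

```python
# Toy sketch of a TCO-vs-runtime recommendation: given candidate cluster
# setups with an hourly operating cost and a measured runtime for the same
# Hadoop job, pick the cheapest setup that still meets a deadline.
# All names and numbers below are hypothetical.

def total_cost(setup, hours_of_use):
    """TCO proxy for running the workload: hourly cost times hours used."""
    return setup["hourly_cost"] * hours_of_use

def recommend(setups, job_hours_by_setup, deadline_hours):
    """Return (name, cost) of the cheapest setup meeting the deadline, or None."""
    feasible = [
        (name, total_cost(setups[name], job_hours_by_setup[name]))
        for name in setups
        if job_hours_by_setup[name] <= deadline_hours
    ]
    if not feasible:
        return None
    return min(feasible, key=lambda pair: pair[1])

# Hypothetical benchmark results: the same job run on three setups.
setups = {
    "hdd-ethernet":   {"hourly_cost": 1.0},
    "ssd-infiniband": {"hourly_cost": 1.8},
    "cloud-vm":       {"hourly_cost": 0.6},
}
job_hours = {"hdd-ethernet": 10.0, "ssd-infiniband": 4.0, "cloud-vm": 12.0}

print(recommend(setups, job_hours, deadline_hours=11.0))
```

The interesting point the sketch captures is that the fastest hardware is not automatically the recommendation: a pricier setup wins only when its shorter runtime outweighs its higher hourly cost, which is exactly the trade-off a TCO-plus-runtime comparison is meant to expose.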
In other activities in 2014, centre researchers worked on low-power vector architectures and finalised research on Transactional Memory.
BSC-CNS, in association with the Universitat Politècnica de Catalunya (UPC), was awarded the title of CUDA Center of Excellence (CCoE) by NVIDIA in 2011. The title acknowledges the broad-based research success of BSC-CNS in leveraging NVIDIA CUDA technology and GPU computing. As part of the CCoE training activities, several courses were offered during 2014 in graduate and master's programmes at UPC, and as part of the PRACE Advanced Training Center (PATC). In addition, the renowned Programming and Tuning Massively Parallel Systems (PUMPS) Summer School has been held each year in Barcelona since 2010. During 2014 the research activities at the CCoE focused on the following areas: 1) use of low-power GPUs in platforms oriented to high performance computing; 2) optimisation of applications in different domains in conjunction with the CASE, Life and Earth Sciences departments; 3) facial recognition and security video surveillance with the UPC startup company HERTA Security; 4) development of software infrastructures to ease the development of multi-GPU systems, and of mechanisms and policies for scheduling multiprogrammed workloads; and 5) task-parallel simulation and visualisation of crowds on hybrid GPU/CPU platforms.
In 2010, following the success of the Kaleidoscope project, BSC-CNS and Repsol decided to create a joint research centre: the Repsol-BSC Research Center (RBRC). The aim of the Center is to tackle geophysical problems and a broad spectrum of other HPC challenges of interest to Repsol. RBRC is an interdisciplinary group of engineers and researchers from the geophysics, IT and telecommunications fields within the CASE Department.
The geophysical and computational developments at the RBRC have resulted in a unique software platform called Barcelona Subsurface Imaging Tools (BSIT). BSIT has enabled the development of a whole set of imaging applications which include state-of-the-art solutions for the most challenging problems in exploration geophysics. The platform includes different packages for processing seismic data: Forward Modelling, Reverse Time Migration and Full Waveform Inversion. In addition, the software supports different rheologies, including acoustic, acoustic with variable density, elastic and viscoelastic. Moreover, several levels of anisotropy are supported: VTI/HTI, orthorhombic, TTI and arbitrary anisotropy (for elastic and viscoelastic rheologies). In recent years new capabilities have been added to simulate electromagnetic wave problems, including modelling and inversion. In 2014, the AURORA project was launched with the aim of obtaining a 3D joint full waveform inversion of elastic and electromagnetic waves that can be applied to real problems.
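To give a flavour of the kind of kernel at the heart of forward modelling, the following is a minimal one-dimensional acoustic sketch using second-order finite differences in time and space. It is purely illustrative and not BSIT code: BSIT works in 3D with multiple rheologies and anisotropy, and the velocity, grid spacing and time step below are arbitrary choices.

```python
# 1-D constant-velocity acoustic wave propagation, second-order finite
# differences:  p_next = 2*p - p_prev + (c*dt/dx)^2 * laplacian(p).
# Fixed (zero) boundary values; a point-source initial condition.

def step(p_prev, p_cur, courant2):
    """Advance the pressure field one time step (ends held at zero)."""
    p_next = [0.0] * len(p_cur)
    for i in range(1, len(p_cur) - 1):
        lap = p_cur[i - 1] - 2.0 * p_cur[i] + p_cur[i + 1]
        p_next[i] = 2.0 * p_cur[i] - p_prev[i] + courant2 * lap
    return p_next

n = 201
c, dx, dt = 1500.0, 5.0, 0.002   # velocity (m/s), grid spacing (m), time step (s)
courant2 = (c * dt / dx) ** 2    # squared Courant number; must be <= 1 for stability
p_prev = [0.0] * n
p_cur = [0.0] * n
p_cur[n // 2] = 1.0              # impulsive point source in the middle of the grid

for _ in range(100):
    p_prev, p_cur = p_cur, step(p_prev, p_cur, courant2)
```

Production codes like BSIT apply the same time-stepping idea to far larger 3D grids with higher-order stencils and absorbing boundaries, which is what makes these workloads natural HPC targets.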
In 2014, the collaboration with Samsung Co., Ltd. focused on memory systems for high-performance computing. The collaboration targets three areas: 1) the analysis of application memory requirements in terms of capacity and bandwidth, analysing the impact of main memory latency on overall performance; 2) the study of DRAM errors in production HPC workloads running on the MareNostrum supercomputer, where, in addition to detecting DRAM errors, the system logs and correlates a number of statistics of interest such as the error type, timestamp, physical position of the errors, and the DIMM manufacturer; and 3) the analysis of the suitability of STT-MRAM as main memory for HPC systems, simulating HPC systems with STT-MRAM main memory and with conventional DRAM, and comparing their performance on a set of production HPC applications.
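The error-logging side of this study can be pictured with a small sketch: each detected error is recorded with its type, timestamp, location and DIMM manufacturer, and the log is then aggregated to look for patterns. The records below are invented for illustration; the actual MareNostrum monitoring infrastructure is of course much richer.

```python
# Toy DRAM-error bookkeeping: log each error with the statistics of interest,
# then aggregate to see whether errors cluster by manufacturer or by type.
# All records are hypothetical.

from collections import Counter

error_log = [
    {"type": "corrected",   "timestamp": 1000, "node": "n01", "dimm": "vendorA"},
    {"type": "corrected",   "timestamp": 1005, "node": "n01", "dimm": "vendorA"},
    {"type": "corrected",   "timestamp": 2300, "node": "n07", "dimm": "vendorB"},
    {"type": "uncorrected", "timestamp": 2301, "node": "n07", "dimm": "vendorB"},
]

# Simple correlations: error counts per DIMM manufacturer and per error type.
by_vendor = Counter(e["dimm"] for e in error_log)
by_type = Counter(e["type"] for e in error_log)
print(by_vendor, by_type)
```

Even this trivial aggregation shows the value of logging position and manufacturer alongside each error: repeated corrected errors on the same DIMM are often the precursor signal that such studies try to correlate with later uncorrected failures.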
During 2014, the Programming Models Group continued its collaboration with Xilinx on easing the programmability of the Xilinx Zynq platform. Using the OmpSs infrastructure ported to Zynq during the previous period, the Group evaluated the Cholesky, Covariance and Matrix Multiplication benchmarks. The results were jointly published at the FPGA conference. The Group also developed a performance estimator to overcome the long FPGA synthesis times. The estimator is based on traces obtained from serial executions of the applications, annotated with OmpSs tasks. The tool performs a design-space exploration by mapping the tasks onto the FPGA or the SMP cores, and uses simulation to estimate which mapping will deliver the best performance. Following the tool's indications, the user can then select the proper tasks to be synthesised for the FPGA when generating the final application binary.
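The design-space exploration the estimator performs can be sketched in miniature as follows. This is an assumption-laden toy, not the Group's tool: the task types, instance counts and per-device times are invented, and the cost model crudely assumes the FPGA and the SMP cores work fully in parallel with no task dependencies, whereas the real estimator replays trace-derived dependencies in simulation.

```python
# Toy design-space exploration: each task type has a time per instance on the
# SMP cores (e.g. from a serial trace) and an estimated time on the FPGA.
# Enumerate every type-to-device mapping and estimate its makespan.

from itertools import product

# (task type, number of instances, time/instance on SMP, time/instance on FPGA)
task_types = [
    ("cholesky",   4, 10.0, 3.0),
    ("covariance", 8,  2.0, 4.0),
    ("matmul",    16,  5.0, 1.5),
]

def makespan(mapping):
    """Crude estimate: SMP and FPGA work overlap fully; no dependencies."""
    smp = sum(n * t_smp for (name, n, t_smp, _), dev
              in zip(task_types, mapping) if dev == "smp")
    fpga = sum(n * t_fpga for (name, n, _, t_fpga), dev
               in zip(task_types, mapping) if dev == "fpga")
    return max(smp, fpga)

# Exhaustive exploration over all 2^3 mappings; pick the best one.
best = min(product(["smp", "fpga"], repeat=len(task_types)), key=makespan)
print(dict(zip((t[0] for t in task_types), best)), makespan(best))
```

Note that the best mapping is not "everything on the FPGA": offloading only the task types where the FPGA clearly wins keeps both devices busy, which is precisely the kind of non-obvious outcome that makes estimating before synthesis worthwhile.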