Computer Sciences

Overview: 
The scientific mission of the Computer Sciences Department is to influence the way future computing machines are built, programmed and used, bridging what computer technology offers and application requirements, in strong collaboration with companies leading the field.
 
The combination of broad coverage of all facets of computer systems design and programming with in-depth expertise in each area is rare amongst supercomputing centres. This strength has attracted leading computing companies to invest heavily in collaborative systems design R&D projects with the BSC-CNS, despite the relative youth of the Centre.

COMPUTER SCIENCES DEPARTMENT DIRECTOR

COMPUTER SCIENCES DEPARTMENT ASSOCIATE DIRECTOR

Objectives: 

The main objective of the department is to advance the hardware and software technologies available to build and efficiently use supercomputing infrastructures, bridging the gap between computer architecture and application requirements. The department proposes novel architectures for processors, memory hierarchies and their interconnection; programming models and their efficient implementation; tools for understanding and predicting performance; and execution environments and resource managers at every level of the system, from multicore architectures and accelerators to shared- and distributed-memory architectures, distributed systems and the Cloud.

A second objective of the department is to conduct research in collaboration with computing system providers. To this end, the BSC-CNS currently has active collaboration agreements with IBM Research, Microsoft Research, Intel and NVIDIA in areas related to computer architecture, tools for performance analysis, programming models and execution environments.

Finally, the Computer Sciences Department maintains a very close interaction with the other departments of the BSC-CNS. The department brings the technologies it develops to the application departments and groups of the BSC-CNS and promotes their efficient use. This synergy, together with the available experience in the optimization of applications (both numeric and non-numeric), significantly reduces the huge simulation times normally required. In addition, new applications are investigated for future Exascale architectures with several orders of magnitude more processors. In the other direction, the application groups at the BSC-CNS express needs that drive research at the hardware level for future supercomputers (processors, memory and their interconnection, balancing performance, cost and power consumption), at the base-software level (tools, compilers and programming models that ease the programming and optimization of applications), and at the level of the basic algorithms that are the building blocks of applications.

Research Lines: 

The Computer Sciences Department is structured in 10 research teams. Although each team has its own specialized lines of research, the teams collaborate on larger projects (EU-funded or with companies) that require vertical integration. This vertical interaction is considered critical to the quality and success of the research, as feedback between the different teams enables application programmers to influence the direction of future systems architecture, while better knowledge of architectures improves the design and implementation of novel programming models, execution environments and applications.

  • COMPUTER ARCHITECTURE / OPERATING SYSTEM INTERFACE
    Francisco J. Cazorla

    The Computer Architecture and Operating System (CAOS) group focuses on real-time embedded systems and high-performance computing at both the architecture and the operating system level, analyzing the interaction between hardware and software. The group's research topics include design for low power and low temperature; high-performance, power-efficient multicore/multithreaded processors; performance- and power-aware OS schedulers; load balancing for HPC applications; and predictability and time analyzability of real-time systems.

  • HETEROGENEOUS ARCHITECTURES
    Àlex Ramírez

    The team explores different design alternatives for processor and system architectures for future generations of supercomputer systems. At the lowest level, the exploitation of different levels of parallelism in the processor (instruction-level, data-level or thread-level parallelism) is investigated, taking performance/cost trade-offs into account. At the multicore and multiprocessor level, issues related to the heterogeneity of the cores, memory organization and communication protocols are considered.

  • COMPUTER ARCHITECTURE FOR PARALLEL PARADIGMS
    Osman Unsal and Adrian Cristal

    The team conducts research on architectural support for novel programming models and execution environments for future multicore architectures. The group constitutes the core of the BSC-Microsoft Research Centre, which focuses its research on lowering the programmability wall raised by new multicore architectures; research areas include Transactional Memory, hardware support for programming-language runtimes, synchronization, low-power vector processors, and the use of Transactional Memory in other research domains, such as reliability.

  • PERFORMANCE TOOLS
    Judit Giménez

    The team works on the design of tools, methodologies and procedures to instrument, analyze and predict the behaviour of parallel applications on parallel systems. Its main goal is to provide technology to understand the issues that determine the actual performance of a parallel application or that contribute to its bottlenecks. This is extremely important both in novel homogeneous and heterogeneous multicore architectures and in highly scalable cluster systems. Flexibility, simplicity and the appropriate combination of qualitative and quantitative information are central considerations in the design of these tools. Scalability and the ability to handle high volumes of performance data are also required to support long-running applications that use hundreds or thousands of processors.
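As a flavour of the kind of instrumentation such tools build on (a minimal sketch only, not the team's actual tooling), the following decorator records an entry/exit timestamp pair for every call, producing a trace that can be analyzed post-mortem:

```python
import time
from functools import wraps

TRACE = []  # list of (function name, start time, end time) event records

def traced(fn):
    """Record entry/exit timestamps for each call (greatly simplified)."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append((fn.__name__, start, time.perf_counter()))
    return wrapper

@traced
def compute(n):
    return sum(i * i for i in range(n))

@traced
def main():
    return [compute(10_000) for _ in range(3)]

main()

# Post-mortem analysis: aggregate the time spent in each function.
totals = {}
for name, start, end in TRACE:
    totals[name] = totals.get(name, 0.0) + (end - start)
print(sorted(totals))  # functions observed in the trace
print(len(TRACE))      # number of recorded events (3 compute calls + 1 main)
```

Real tracing tools additionally record per-thread and per-process streams, hardware counters and communication events, and keep the probe overhead low enough not to perturb the measured application.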
     

  • PROGRAMMING MODELS
    Xavier Martorell

    The team explores new programming models and their efficient implementation for current and future architectures, ranging from multicore SMP architectures with support for accelerators (GPUs, FPGAs, ...) to clusters of SMPs and exascale systems. This exploration is supported by the development of powerful compiler (Mercurium) and runtime (NANOS++) prototypes. The team also explores the usability of these programming models in different application scenarios, proposing extensions to the standards (e.g. OpenMP) to accommodate the requirements of novel applications for supercomputer systems.
     

  • GRID COMPUTING AND CLUSTERS
    Rosa Mª Badia

    The team researches new programming and execution models, and resource management, for distributed computing. It explores solutions to simplify application development, enable dynamic exploitation of parallelism at runtime and perform combined scheduling decisions at different levels. The team develops the COMPSs and StarSs programming models.
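The core idea behind such task-based models can be sketched in a few lines (an illustration only, not the actual COMPSs or StarSs API): tasks are submitted asynchronously, a task's arguments may be the pending results of earlier tasks, and the runtime therefore discovers the dependency graph and the available parallelism implicitly:

```python
from concurrent.futures import Future, ThreadPoolExecutor

# Minimal sketch of a dataflow task runtime (illustrative only): arguments
# that are futures of earlier tasks are waited on before the task runs, so
# the dependency graph is built implicitly at submission time.
pool = ThreadPoolExecutor(max_workers=4)

def task(fn, *deps):
    """Submit fn asynchronously, resolving any argument that is a future."""
    def run():
        args = [d.result() if isinstance(d, Future) else d for d in deps]
        return fn(*args)
    return pool.submit(run)

# A tiny task graph: two independent squarings, then a sum depending on both.
a = task(lambda x: x * x, 3)        # may run in parallel with b
b = task(lambda x: x * x, 4)
c = task(lambda x, y: x + y, a, b)  # runs once a and b have completed

result = c.result()
print(result)  # 25
pool.shutdown()
```

A real runtime adds much more on top of this sketch: data transfers between distributed nodes, renaming to remove false dependencies, and scheduling policies that span several levels of the system.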
     

  • AUTONOMIC SYSTEMS AND EBUSINESS PLATFORMS
    Jordi Torres

    The team performs high-level research on eBusiness applications and platforms running on high-productivity multiprocessor architectures, distributed environments and new architectural proposals, with the objective of becoming a research group of excellence in Autonomic Computing.
     

  • STORAGE SYSTEMS
    Toni Cortés

    The team focuses on finding appropriate solutions for the scalability of parallel file systems in large installations (in which very large volumes of data need to be generated and accessed), and on file systems for the grid that solve the problems currently found (data location, replication and striping) and make these environments more efficient.

 
  • UNCONVENTIONAL COMPUTER ARCHITECTURE AND NETWORKS
    Mario Nemirovsky

    The team conducts research on massively multithreaded architectures focused on network applications. Networks and their applications are a fundamental part of the Internet, from its core to its edge, and also play a critical role in today's data centres and High Performance Systems (HPS). In both directions, the group concentrates on the study of these systems and the definition of new network architectures.

 

    Vassil Alexandrov

    The group was created in 2011; its research focuses on novel mathematical methods and algorithms for extreme-scale computing, especially solving problems with uncertainty on large-scale computing systems. The main expertise is in the areas of Computational Science, scalable algorithms for advanced computer architectures, and Monte Carlo methods and algorithms. In particular, scalable Monte Carlo algorithms are developed for Linear Algebra, Computational Finance, Environmental Models, Computational Biology, etc. In addition, the research focuses on scalable, fault-tolerant and resilient algorithms for extreme-scale (peta- and exascale) architectures.
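As an illustration of why Monte Carlo methods suit extreme-scale machines (a minimal sketch with made-up values, not the group's production codes), the classic von Neumann-Ulam scheme estimates components of the solution of x = Hx + b by independent random walks, which are embarrassingly parallel and naturally fault-tolerant: a lost walk only costs one sample.

```python
import random

# Solve x = H x + b (spectral radius of H < 1) via the von Neumann-Ulam
# random-walk scheme: x_i = sum_k (H^k b)_i, estimated by walks starting
# at state i that are re-weighted at every transition. Values illustrative.
H = [[0.1, 0.2],
     [0.2, 0.1]]
b = [1.0, 1.0]
n = len(b)
STOP = 0.5  # probability of terminating the walk at each step

def walk(i):
    """One unbiased estimate of x_i."""
    state, weight, score = i, 1.0, 0.0
    while True:
        score += weight * b[state]
        if random.random() < STOP:
            return score
        nxt = random.randrange(n)  # uniform transition probability 1/n
        weight *= H[state][nxt] / ((1.0 - STOP) / n)
        state = nxt

random.seed(0)
N = 100_000
estimate = sum(walk(0) for _ in range(N)) / N
print(estimate)  # exact component is 1.1 / 0.77 = 1.4286...
```

Each walk is independent, so the N samples can be distributed over arbitrarily many processors with no communication until the final reduction, and the error shrinks as 1/sqrt(N) regardless of problem dimension.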

 

 

PUBLICATIONS AND COMMUNICATIONS

2014

Macías M, Guitart J. Trust-aware Operation of Providers in Cloud Markets. Lecture Notes in Computer Science (LNCS), 14th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS'14) (Short Paper). 2014;8460:31-37.
Macías M, Guitart J. A Risk-based Model for Service Level Agreement Differentiation in Cloud Market Providers. Lecture Notes in Computer Science (LNCS), 14th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS'14). 2014;8460:1-15.
Fitó O, Guitart J. Business-driven Management of Infrastructure-level Risks in Cloud Providers. Future Generation Computer Systems. 2014;32:41-53.
Filgueras A, Gil E, Jiménez-González D, Alvarez C, Martorell X, Langer J, Noguera J, Vissers K. OmpSs@Zynq All-Programmable SoC Ecosystem. 22nd ACM/SIGDA International Symposium on Field-Programmable Gate Arrays [Internet]. 2014. Available from: http://www.eecg.utoronto.ca/FPGA2014/
Casas M, Bronevetsky G. Active Measurement of Memory Resource Consumption. 28th IEEE International Parallel & Distributed Processing Symposium (IPDPS). 2014.
Lezzi D, Lordan F, Rafanell R, Badia RM. Execution of scientific workflows on federated multi-cloud infrastructures. Euro-Par 2013: Parallel Processing Workshops [Internet]. 2014;8374:136-145. Available from: http://hpc.ac.upc.edu/PDFs/dir28/file004262.pdf
