Computer Architecture for Parallel Paradigms

Overview: 

For processor manufacturers, the traditional approach of increasing performance by exploiting Instruction-Level Parallelism (ILP) has hit the power wall, so they are shifting to the less complex approach of exploiting Thread-Level Parallelism (TLP). By including more processing cores on a chip, total processor throughput is increased through TLP and parallel computing. However, substantial challenges lie ahead in providing proper hardware and architectural support for the system stack and the parallel programming ecosystem of the future. The group develops hardware support to fully utilize future many-cores and to make them easier to program and debug.

The group is involved in running the BSC-Microsoft Research Centre and collaborates with Microsoft researchers.

Objectives: 

We believe that in the era of many-core chips, the software community (OS, compiler, programming model, applications) must be in the driver's seat. In line with this new reality, the overall objective of the group is to conduct research in top-down computer architecture by designing hardware for software: making many-core processors easier to program. More specifically, we conduct research on:
•    Transactional Memory (TM), a technology that promises to make shared-memory programming easier. The team proposes hardware support for accelerating Software Transactional Memory (STM), designs scalable Hardware Transactional Memory (HTM) implementations, produces TM applications and benchmarks, investigates TM use in system libraries, proposes power-aware TM heuristics and develops TM debuggers.
•    Hardware support for easier-to-use and fair locking implementations.
•    Hardware support for managed-language runtimes such as those for Haskell or C#.
•    Developing power and complexity aware architectures for small form-factor high-performance computing systems.
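The optimistic read-validate-commit cycle at the heart of STM can be sketched in a few lines. This is a toy, single-variable illustration of the idea only, not the group's implementation; the `TVar` and `atomically` names are borrowed loosely from Haskell's STM vocabulary for readability.

```python
import threading

class TVar:
    """A transactional variable: a value paired with a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # taken only at commit time

def atomically(tvar, update):
    """Optimistically apply update(old) -> new; retry on conflict.

    Like a real STM, the update function may be re-executed if another
    transaction commits first, so it should be side-effect free.
    """
    while True:
        # Read phase: snapshot value and version without locking.
        seen_version = tvar.version
        new_value = update(tvar.value)
        # Commit phase: validate the snapshot, then publish atomically.
        with tvar._lock:
            if tvar.version == seen_version:
                tvar.value = new_value
                tvar.version += 1
                return new_value
        # Validation failed: a concurrent commit won the race; retry.

counter = TVar(0)

def worker():
    for _ in range(1000):
        atomically(counter, lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000: no increment is lost despite the races
```

The contrast with lock-based programming is that the programmer states only *what* must appear atomic (the `update` function); conflict detection and retry are handled by the runtime, which is what hardware acceleration of STM aims to speed up.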

Projects/Areas: 

The Computer Architecture for Parallel Paradigms Group currently coordinates the FP7 ParaDIME Project, which employs radical software-hardware techniques for dramatic energy savings. It participates in the energy-related ICT-Energy Coordination and Support Action and the Big Data-related AXLE Project, in addition to contributing to the PRACE and Mont-Blanc Projects. The group also forms part of the BSC-Microsoft Research Centre. Finally, it coordinated VELOX, an FP7 research project on Transactional Memory which successfully concluded in 2010.

PEOPLE

PUBLICATIONS AND COMMUNICATIONS

2013

King, M., Khan, A., Agarwal, A., Arcas, O. & Arvind. Generating Infrastructure for FPGA-Accelerated Applications. 23rd International Conference on Field Programmable Logic and Applications, 1–6 (2013).
Seyedi, A., Yalcin, G., Unsal, O. & Cristal, A. Circuit Design of a Novel Adaptable and Reliable L1 Data Cache. 23rd ACM Great Lakes Symposium on VLSI (GLSVLSI) (2013).
Smiljkovic, V., Nowack, M., Miletic, N., Harris, T., Unsal, O., Cristal, A. & Valero, M. TM-dietlibc: A TM-aware Real-world System Library. The 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2013) (2013).
Yalcin, G., Unsal, O. & Cristal, A. FaulTM: Error Detection and Recovery Using Hardware Transactional Memory. ACM/IEEE Design, Automation, and Test in Europe (DATE) (2013).