The fifth edition of the Programming and Tuning Massively Parallel Systems summer school (PUMPS) is aimed at enriching the skills of researchers, graduate students and teachers with cutting-edge techniques and hands-on experience in developing applications for many-core processors with massively parallel computing resources, such as GPU accelerators.
Summer School Co-Directors: Mateo Valero (BSC and UPC) and Wen-mei Hwu (University of Illinois at Urbana-Champaign)
Organized by:
Barcelona Supercomputing Center (BSC)
University of Illinois at Urbana-Champaign (University of Illinois)
Universitat Politècnica de Catalunya (UPC)
HiPEAC Network of Excellence (HiPEAC)
PUMPS is part of this year's PRACE Advanced Training Centre programme
Date: Monday, 7 July, 2014 - 09:00 to Friday, 11 July, 2014 - 18:00
Objectives:
- Instructors Wen-mei Hwu (University of Illinois) and David B. Kirk (NVIDIA), co-authors of “Programming Massively Parallel Processors: A Hands-on Approach”, will provide students with knowledge and hands-on experience in developing application software for many-core processors, such as general-purpose graphics processing units (GPUs).
By the end of the summer school, participants will:
- Be able to design algorithms that are suitable for accelerators.
- Understand the most important architectural performance considerations for developing parallel applications.
- Be exposed to computational thinking skills for accelerating applications in science and engineering.
- Be able to apply computing accelerators in pursuit of science and engineering breakthroughs.
Topics:
The following is a list of some of the topics that will be covered during the course; the full, updated programme is linked below. A short illustrative CUDA example is also included at the end of this section.
- CUDA Parallel Execution Model
- CUDA Performance Considerations
- CUDA Algorithmic Optimization Strategies
- Data Locality Issues
- Dealing with Sparse and Dynamic data
- Efficiency in Large Data Traversal
- Reducing Output Interference
- Debugging and Profiling CUDA Code
- GMAC Runtime
- Multi-GPU Execution
- Introduction to OmpSs
- OmpSs: Leveraging GPU/CUDA Programming
- Hands-on Labs: CUDA Optimizations and OmpSs Programming
The full programme is available here.
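As a flavour of the material covered under the CUDA Parallel Execution Model topic, below is a minimal, illustrative vector-addition sketch (not part of the official course materials; it assumes a standard CUDA toolkit and nvcc). Each thread computes one output element from its block and thread indices, and the host launches enough blocks to cover the whole array; error checking is omitted for brevity.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b.
// The global index is derived from the block and thread coordinates,
// which is the core idea of the CUDA parallel execution model.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // guard against the last, partially filled block
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and initialise host data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch a 1D grid with enough blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The sketch can be compiled with, for example, nvcc vecadd.cu -o vecadd. The hands-on labs in the course go well beyond this, covering the optimization, debugging and multi-GPU topics listed above.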