Projects
High Performance Computing is becoming a fundamental tool for the progress of science and engineering and, by extension, for economic competitiveness. The growing complexity of parallel computers is leading to a situation where code owners and users are not aware of the detailed issues affecting the performance of their applications. The result is often an inefficient use of the...
The project involved the development of mathematical models and their implementation as software for high performance computing clusters. The physical problem studied covered two related topics: particle deposition and solute absorption in respiratory airways, and tumour metastasis in arterioles and capillaries. The aim was to couple micro-scale phenomena to large 3D...
The most common interpretation of Moore's Law is that the number of components on a chip, and with it computer performance, doubles every two years. This empirical law has held from its first statement in 1965 until today. In the early 2000s, when clock frequencies stagnated at ~3 GHz and instruction-level parallelism reached the phase of...
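As a rough numerical illustration of the two-year doubling law (a back-of-envelope sketch, not part of the project itself; the 1965 starting count is an assumed placeholder, not a historical figure):

```python
# Back-of-envelope Moore's Law projection: component count doubles every two years.
# The 1965 baseline of 64 components is an assumed placeholder for illustration.

def moore_components(year: int, base_year: int = 1965, base_count: int = 64) -> float:
    """Components per chip under a strict two-year doubling law."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1965, 1985, 2005):
    print(f"{year}: ~{moore_components(year):,.0f} components")
# Each 20-year span multiplies the count by 2**10 = 1024.
```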
With top systems reaching the PFlop barrier, the next challenge is to understand how applications must be implemented to be ready for the ExaFlop target. Multicore chips are already here but will grow over the next decade to several hundred cores. Hundreds of thousands of nodes based on such chips will constitute the future exascale systems.
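To make that scale concrete, here is a hypothetical sizing exercise for a 1 ExaFlop/s (10^18 flop/s) machine; the per-node figures are illustrative assumptions, not specifications of any real system:

```python
# Hypothetical exascale sizing: how many nodes reach 1 ExaFlop/s (1e18 flop/s)?
# Cores per node and per-core rate below are illustrative assumptions only.
EXAFLOP = 1e18          # target aggregate performance, flop/s
CORES_PER_NODE = 1_000  # "several hundred cores" per future chip (assumed)
FLOPS_PER_CORE = 1e10   # 10 GFlop/s sustained per core (assumed)

node_flops = CORES_PER_NODE * FLOPS_PER_CORE  # 1e13 flop/s per node
nodes_needed = EXAFLOP / node_flops           # 100,000 nodes
print(f"~{nodes_needed:,.0f} nodes of {node_flops:.0e} flop/s each")
```

Under these assumptions the machine needs on the order of 10^5 nodes, consistent with the "hundreds of thousands of nodes" figure quoted above.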
Design complexity and power density halted the trend towards faster single-core processors. The current trend is to double the core count roughly every 18 months, which would lead to chips with 100+ cores within 10-15 years. Developing parallel applications that harness such multicores is the key challenge for scalable computing systems.
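A quick check of the arithmetic behind that projection (the 4-core starting point is an assumption chosen for illustration):

```python
# Core counts under an 18-month doubling cadence, from an assumed 4-core chip.
def cores_after(years: float, start_cores: int = 4, doubling_years: float = 1.5) -> float:
    """Cores per chip after `years` of doubling every `doubling_years`."""
    return start_cores * 2 ** (years / doubling_years)

for horizon in (10, 15):
    print(f"after {horizon} years: ~{cores_after(horizon):,.0f} cores")
# ~406 cores after 10 years and ~4,096 after 15 -- comfortably past 100.
```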
The ENCORE project...