Projects

Data centres form the central brains and storage of the Information Society and are a key resource for innovation and leadership. The key challenge has recently moved from just delivering the required performance to also reducing energy consumption and the cost of ownership. Together, these create an inflection point that provides a big opportunity for Europe, which...

The most common interpretation of Moore's Law is that the number of components on a chip, and accordingly computer performance, doubles every two years. This empirical law has held from its first statement in 1965 until today. At the end of the 20th century, when clock frequencies stagnated at ~3 GHz and instruction-level parallelism reached the phase of...
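
As an illustrative aside (not drawn from the project summary above), the "doubling every two years" reading can be written as count(t) = count(0) * 2^(t/2), with t in years. A minimal Python sketch, using the Intel 4004's 2,300 transistors purely as an example starting point:

# Sketch of the doubling rule described above: with a fixed doubling
# period of two years, component count grows as count_0 * 2**(t / 2).
def moores_law_projection(initial_count, years, doubling_period=2.0):
    """Project component count after `years`, assuming a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period)

# Example: 2,300 transistors (Intel 4004, 1971) projected 30 years ahead
# gives 2,300 * 2**15, roughly 75 million transistors.
print(moores_law_projection(2_300, 30))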

AXLE focused on automatic scaling of complex analytics, while addressing the full requirements of real data sets. Real data sources have many difficult characteristics. Sources often start small and can grow extremely large as businesses/initiatives succeed, so the ability to grow seamlessly and automatically is at least as important as managing large data volumes once you know...

The increasing power and energy consumption of modern computing devices is perhaps the largest threat to technology miniaturisation and the associated gains in performance and productivity. On the one hand, we expect technology scaling to face the problem of “dark silicon” (only segments of a chip can function concurrently due to power restrictions) in the near future...

The use of High Performance Computing (HPC) is commonly recognized as a key strategic element, in both research and industry, for improving the understanding of complex phenomena. The constant growth of generated data (Big Data) and of the computing capabilities of extreme systems leads to a new generation of computers composed of millions of heterogeneous cores which will provide...

New safety standards, such as ISO 26262, present a challenge for companies producing safety-relevant embedded systems. Safety verification today is often ad-hoc and manual; it is done differently for digital and analogue, hardware and software.

The VeTeSS project developed standardized tools and methods for verification of the robustness of...

The grand challenge of Exascale computing, a critical pillar for global scientific progress, requires co-designed architectures, system software and applications. Massive worldwide collaboration of leading centres, already underway, is crucial to achieve pragmatic, effective solutions. Existing funding programs do not support this complex multidisciplinary effort. Severo...

DEEP developed a novel, Exascale-enabling supercomputing platform along with the optimisation of a set of grand-challenge codes simulating applications highly relevant for Europe's science, industry and society.

The DEEP System realised a Cluster Booster Architecture that can cope with the limitations imposed by Amdahl's Law. It served as...
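
As a hedged illustration of the Amdahl's Law limit mentioned above (not taken from the project description itself): if a fraction p of a code can be parallelised, the speedup on n processors is bounded by 1 / ((1 - p) + p / n), which saturates at 1 / (1 - p) no matter how many processors are added.

# Sketch of the Amdahl's Law bound: speedup on n processors with
# parallel fraction p is 1 / ((1 - p) + p / n), capped at 1 / (1 - p).
def amdahl_speedup(p, n):
    """Upper bound on speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Example: with 95% of the work parallelised, 1,000 processors yield
# at most about 19.6x, close to the 1 / (1 - 0.95) = 20x ceiling.
print(amdahl_speedup(0.95, 1_000))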

There is a continued need for higher compute performance: scientific grand challenges, engineering, geophysics, bioinformatics, etc. However, energy is increasingly becoming one of the most expensive resources and the dominant cost item for running a large supercomputing facility. In fact, the total energy cost of a few years of operation can almost equal the cost of the...

The main goal of EUBrazilOpenBio was to deploy an e-Infrastructure of open-access resources (data, tools, services) to make significant strides towards supporting the needs and requirements of the biodiversity scientific community. This data e-Infrastructure resulted from the federation and integration of substantial individual existing data, cloud, and grid EU and Brazilian...
