BSC to facilitate the safety certification of critical autonomous AI-based systems for more competitive EU industry

14 February 2023

Coordinated by BSC, SAFEXPLAIN aims to provide scientific and technical solutions to European industry that enable fully autonomous critical systems for cars, trains, or satellites.

The EU-funded SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) project seeks to lay the foundation for Critical Autonomous AI-based Systems (CAIS) applications that are smarter and safer by ensuring that they adhere to functional safety requirements in environments that demand quick, real-time responses and increasingly run on the edge. This three-year project, coordinated by the Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS), brings together a six-partner consortium representing academia and industry.

AI technology offers the potential to improve the competitiveness of European companies, and the AI market itself is expected to reach $191 billion by 2024 in response to companies’ growing demand for mature autonomous and intelligent systems. CAIS are becoming especially ubiquitous in industries such as rail, automotive and space, where the digitization of CAIS offers huge benefits to society, including safer roads, skies and airports through the prevention of 90% of collisions per year and the reduction of up to 80% of the CO2 footprint of different types of vehicles.

Deep Learning (DL) technology that supports AI is key to most future advanced software functions in CAIS. However, there is a fundamental gap between Functional Safety (FUSA) requirements and the nature of DL solutions. The lack of transparency (mainly explainability and traceability), together with the data-dependent and stochastic nature of DL software, clashes with the need for clear, verifiable, pass/fail test-based software solutions for CAIS. SAFEXPLAIN tackles this challenge by providing a novel and flexible approach to the certification – and hence adoption – of DL-based solutions in CAIS.

Jaume Abella, SAFEXPLAIN coordinator, highlights that “this project aims to rethink FUSA certification processes and DL software design to set the groundwork for how to certify DL-based fully autonomous systems of any type, beyond the very specific and non-generalizable cases existing today.”

In addition to its role as project coordinator, BSC’s Computer Architecture Operating Systems group will apply its expertise to meet the emerging requirements of FUSA-aware solutions in terms of software and hardware support for DL components and libraries. The group will be responsible for selecting and developing the DL execution platform and appropriate analysis methods.

BSC will promote platform observability and use performance monitors to provide evidence of the correctness of performance analysis results. This work will address the challenges of 1) providing platform-level predictability with a focus on mixed-criticality execution for DL-related software, 2) performing efficient deployments of DL-based mixed-criticality applications on the platform, and 3) devising effective timing analysis methods to obtain real-time guarantees. This mixed-criticality paradigm will be key to allowing each application to be certified according to its associated integrity level instead of certifying all software to the highest level.

BSC will also oversee the refinement and integration of tasks to guarantee a smooth integration process with the three case studies. These case studies, from the automotive, railway and space domains, will illustrate the benefits of SAFEXPLAIN technology, as each domain has its own stringent safety requirements set by its respective safety standards. The project will tailor automotive and railway certification systems and space qualification approaches to enable the use of new FUSA-aware DL solutions.


SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) is a Horizon Europe Research and Innovation Action financed under grant agreement 101069595. The project began on 1 October 2022 and will end in September 2025. It is formed by an interdisciplinary consortium of six partners coordinated by the Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS). The consortium comprises three research centres – RISE (Sweden; AI expertise), IKERLAN (Spain; FUSA and railway expertise) and BSC (Spain; platform expertise) – and three CAIS industries: NAVINFO (Netherlands; automotive), AIKO (Italy; space), and EXIDA DEV (Italy; FUSA and automotive).


Figure 1: SAFEXPLAIN consortium kick-off meeting in Barcelona

Figure 2: Overview of SAFEXPLAIN's vision