Proceedings of the 33rd European Safety and Reliability Conference (ESREL 2023)
3 – 8 September 2023, Southampton, UK

A Comprehensive Framework for Ensuring the Trustworthiness of AI Systems

Stefan Brunner, Carmen Mei-Ling Frischknecht-Gruber, Monika Reif and Joanna Weng

Institute of Applied Mathematics and Physics, Zurich University of Applied Sciences, Switzerland


Legislators and authorities are working to establish a high level of trust in AI applications as these become more prevalent in daily life. As AI systems evolve and enter critical domains such as healthcare and transportation, trust becomes essential and must be considered from multiple angles. AI systems must ensure fairness and impartiality in their decision-making to align with ethical standards. Autonomy and control are needed so that a system remains aligned with societal values while operating efficiently and effectively. Transparency makes decision-making processes understandable, while reliability is paramount under diverse conditions, including errors, bias, and malicious attacks. Safety is of utmost importance in critical AI applications to prevent harm and adverse outcomes. This paper proposes a framework that follows a risk-based approach to establish qualitative requirements and quantitative metrics for the entire application; these measures are then used to evaluate the AI system. To meet the requirements, various means (such as processes, methods, and documentation) are established at the system level and then detailed and supplemented for the individual dimensions to achieve sufficient trust in the AI system. The results of the measures are evaluated both individually and across dimensions to assess the extent to which the AI system meets the trustworthiness requirements.

Keywords: Artificial intelligence, Trustworthiness of AI systems, AI standards, AI safety.
