Proceedings of the
35th European Safety and Reliability Conference (ESREL2025) and
the 33rd Society for Risk Analysis Europe Conference (SRA-E 2025)
15 – 19 June 2025, Stavanger, Norway
AI Security Assurance: Developing a Framework for Secure and Resilient AI
1Department of Risk and Security, Institute for Energy Technology, Norway.
2Department of Information Security and Communication Technology, Norwegian University of Science and Technology, Norway.
ABSTRACT
The rapid advancement of Artificial Intelligence (AI) technologies has delivered transformative benefits across industries but has also introduced significant security risks. Security assurance of AI systems is critical, particularly as these systems are increasingly integrated into critical infrastructure, healthcare, financial services, and autonomous systems. This paper discusses the challenges, risks, and opportunities related to AI systems across the lifecycle, including data preprocessing, model training, and deployment. It presents a conceptual framework for AI security assurance that evaluates the overall security level based on security requirements, threats, vulnerabilities, and ethical considerations. The framework leverages established security standards, regulations, and legislation to identify security requirements and provides a structured approach to identifying and addressing AI-specific risks. The paper aims to provide insights into AI-related security risks and to highlight the importance of incorporating security assurance measures throughout the AI system lifecycle.
Keywords: Artificial intelligence, AI security assurance, Security risks, Trustworthiness, Risk management.