Proceedings of the
35th European Safety and Reliability Conference (ESREL2025) and
the 33rd Society for Risk Analysis Europe Conference (SRA-E 2025)
15 – 19 June 2025, Stavanger, Norway

Safe and Unsafe Information: Managing Risks in the Era of Generative Artificial Intelligence

Paolo Spagnoletti1 and Richard Baskerville2

1Department of Information Systems, University of Agder, Norway.

2Computer Information Systems, Georgia State University, USA.

ABSTRACT

The transformative impact of digitalization on organizations has significantly increased the availability of organizational information to the public. This shift amplifies the responsibility of organizations to ensure the safety of their digital products and services, as unsafe information can harm society or the environment. Generative artificial intelligence (GenAI) introduces unique risks by enabling the effortless production of ungrounded and potentially harmful content, such as hallucinations, which can propagate misinformation when used uncritically. These challenges necessitate a departure from traditional corporate social responsibility (CSR) frameworks towards more robust risk management strategies. This paper develops a taxonomy of safe versus unsafe information from GenAI along three dimensions: correct, open, and benignant for safe information; and incorrect, protected, and dangerous for unsafe information. Drawing on empirical data from Italian organizations, we validate the taxonomy, verify its alignment with established risk taxonomies, and derive practical recommendations for mitigating these risks. These include implementing rigorous data validation pipelines, restricting inputs to trusted and verified sources, and employing robust processing and oversight mechanisms. By embedding these strategies into governance frameworks, organizations can mitigate the risks of unsafe information while ensuring that GenAI contributes positively to societal and environmental well-being.

Keywords: Data governance, Cybersecurity, Large Language Models, Botshit, Fake news.
