Proceedings of the
35th European Safety and Reliability Conference (ESREL2025) and
the 33rd Society for Risk Analysis Europe Conference (SRA-E 2025)
15 – 19 June 2025, Stavanger, Norway
Large Language Models for Extracting Failed Components from Consumer Electronics Feedback
Laboratoire de Génie Industriel, CentraleSupélec, Université Paris-Saclay, France.
ABSTRACT
Large Language Models (LLMs) have demonstrated strong natural language understanding capabilities over the past few years, opening new opportunities for reliability analysis based on text data. Meanwhile, customer review data offer valuable insights into system failures, but the unstructured nature of natural language makes failure information extraction challenging. In this study, we address the problem of failed component extraction from customer reviews of tablet computers, aiming to detect failures at the component level in order to assess both system and component reliability. We propose a novel approach using LLMs for this task and frame it as a multi-label classification problem. Our method combines the design of a prompting strategy with the use of pre-trained lightweight LLMs to automatically extract the desired information. We conduct a comparative evaluation of state-of-the-art non-proprietary LLMs on this task. To support this work, we introduce a newly annotated dataset of 1,215 customer reviews, of which 356 mention at least one failure, annotated specifically for component failure detection. This fine-grained failure detection framework aims to enable more accurate reliability assessments by pinpointing individual component failures within the broader system context. Our preliminary results show the potential of LLMs to leverage unstructured textual data for component-level reliability analysis. Code and data are available at: https://github.com/jmpion/FaCET-ESREL2025
Keywords: Large language models, Failed components, System reliability, Customer feedback, Natural language processing, Multi-label classification, Consumer electronics.
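To make the described setup concrete, the following is a minimal sketch, in Python with the Hugging Face transformers library, of prompting a lightweight non-proprietary LLM to extract failed components from a review as a multi-label classification over a fixed component set. The component label set, the prompt wording, and the model choice (Qwen/Qwen2.5-0.5B-Instruct) are illustrative assumptions, not the paper's actual taxonomy or pipeline.

```python
# Sketch only: prompting-based multi-label extraction of failed components.
# The label set, prompt, and model below are hypothetical stand-ins for the
# paper's actual taxonomy, prompting strategy, and evaluated LLMs.
from transformers import pipeline

# Hypothetical component taxonomy for tablet computers.
COMPONENTS = ["screen", "battery", "charging port", "speaker", "camera", "software"]

# Any lightweight open-weight instruction-tuned model could be substituted here.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def extract_failed_components(review: str) -> list[str]:
    """Return the subset of COMPONENTS the LLM flags as failed in the review."""
    prompt = (
        "You are a reliability analyst. From the customer review below, list the "
        f"failed components, choosing only from: {', '.join(COMPONENTS)}. "
        "Answer with a comma-separated list, or 'none'.\n"
        f"Review: {review}\nFailed components:"
    )
    # Greedy decoding; generated_text includes the prompt, so strip it off.
    output = generator(prompt, max_new_tokens=32, do_sample=False)[0]["generated_text"]
    answer = output[len(prompt):].lower()
    # Multi-label decision: each component is an independent binary label.
    return [c for c in COMPONENTS if c in answer]

print(extract_failed_components(
    "The tablet worked fine for a week, then the battery stopped holding a charge."
))
```

Framing the output as a subset of a fixed label vocabulary, rather than free-form text, is what makes the task a multi-label classification problem and allows standard per-component evaluation metrics against the annotated reviews.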