An Approach to Establishing Operator Trust Through an Explainable Diagnostic Model
Abstract
If an abnormal state caused by a problem with a specific component within a nuclear power plant is not addressed, it can result in a reactor shutdown and significant economic losses. Operators identify such issues and implement the appropriate abnormal operating procedures to prevent accidents. Recently, artificial intelligence models have been studied as a way to help operators diagnose these abnormal states more efficiently. However, because operators remain the final decision-makers, they must be given additional information that enables independent judgment even when the model's predictions are incorrect. This study uses explainable artificial intelligence to provide operators with information about the trustworthiness of diagnosis results. When the neural network model diagnoses an abnormal state, the relevance of each input parameter to that diagnosis is calculated. The trustworthiness of the diagnosis is then evaluated by a second classification model that takes this parameter relevance as input. This two-step process establishes operators' overall trust in the model. By emphasizing transparency and giving operators detailed insight into the causes behind diagnosis results, our study aims not only to increase operators' trust in the model but also to contribute to stable nuclear power plant operation.
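The two-step pipeline described above can be illustrated with a minimal sketch. All names (DiagnosisNet, TrustNet, parameter_relevance) and dimensions are hypothetical, and input-times-gradient attribution is used here only as a simple stand-in for whatever relevance method the paper actually employs:

```python
import torch
import torch.nn as nn

# Hypothetical diagnosis network: maps plant parameters to abnormal-state classes.
class DiagnosisNet(nn.Module):
    def __init__(self, n_params=40, n_states=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 64), nn.ReLU(),
            nn.Linear(64, n_states),
        )

    def forward(self, x):
        return self.net(x)

def parameter_relevance(model, x):
    """Input-times-gradient relevance: how much each plant parameter
    contributed to the predicted abnormal state (a simple proxy for
    dedicated relevance-propagation methods)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    predicted = logits.argmax(dim=1)
    logits.gather(1, predicted.unsqueeze(1)).sum().backward()
    return (x * x.grad).detach()

# Second-step classifier: judges from the relevance pattern whether
# the diagnosis looks trustworthy (output in [0, 1]).
class TrustNet(nn.Module):
    def __init__(self, n_params=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, r):
        return self.net(r)

diagnoser, trust_model = DiagnosisNet(), TrustNet()
x = torch.randn(1, 40)                        # one snapshot of plant parameters
state = diagnoser(x).argmax(dim=1)            # diagnosed abnormal state
relevance = parameter_relevance(diagnoser, x) # per-parameter relevance
trust = trust_model(relevance)                # estimated trustworthiness
print(f"diagnosed state {state.item()}, trust {trust.item():.2f}")
```

The key design choice is that the trust classifier never sees the raw plant parameters, only the relevance pattern: the assumption is that incorrect diagnoses tend to produce atypical relevance distributions, which a second model can learn to flag for the operator.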