CiRLExplainer: Causality-Inspired Explainer for Graph Neural Networks via Reinforcement Learning.


Authors: Wenya Hu, Quan Qian, Jia Wu

Language: English

Classification: 133.594 Types or schools of astrology originating in or associated with a

Publication information: United States : IEEE Transactions on Neural Networks and Learning Systems, 2025

Physical description:

Collection: NCBI

ID: 706989

In this article, we propose a new graph neural network (GNN) explainability model, CiRLExplainer, which elucidates GNN predictions from a causal attribution perspective. Initially, a causal graph is constructed to analyze the causal relationships between the graph structure and GNN predicted values, identifying node attributes as confounding factors between the two. Subsequently, a backdoor adjustment strategy is employed to circumvent these confounders. Additionally, since the edges within the graph structure are not independent, reinforcement learning is incorporated. Through a sequential selection process, each step evaluates the combined effects of an edge and the previous structure to generate an explanatory subgraph. Specifically, a policy network predicts the probability of each candidate edge being selected and adds a new edge through sampling. The causal effect of this action is quantified as a reward, reflecting the interactivity among edges. By maximizing the policy gradient during training, the reward stream of the edge sequence is optimized. The CiRLExplainer is versatile and can be applied to any GNN model. A series of experiments was conducted, including accuracy (ACC) analysis of the explanation results, visualization of the explanatory subgraph, and ablation studies considering node attributes as confounding factors. The experimental results demonstrate that our model not only outperforms current state-of-the-art explanation techniques, but also provides precise semantic explanations from a causal perspective. Additionally, the experiments validate the rationale for considering node attributes as confounding factors, thereby enhancing the explanatory power and ACC of the model. Notably, across different datasets, our explainer achieved improvements over the best baseline models in the ACC-area under the curve (AUC) metrics by 5.89%, 5.69%, and 4.87%, respectively.
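The sequential, reward-driven edge selection described in the abstract can be sketched as a REINFORCE-style loop: a softmax policy over candidate-edge logits samples one edge at a time, the resulting subgraph is scored, and the policy gradient is scaled by that score. This is a minimal pure-Python illustration under stated assumptions — the policy is a flat logit vector rather than a learned network, and `reward_fn` is a caller-supplied stand-in for the paper's causal-effect reward; it is not the authors' implementation.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_step(logits, reward_fn, k, lr=0.1, rng=None):
    """One episode of sequential edge selection with a REINFORCE update.

    Samples k distinct candidate-edge indices, one per step, from a softmax
    policy over `logits` (mirroring the step-by-step subgraph construction),
    scores the chosen subgraph with `reward_fn`, and applies the policy
    gradient  reward * d(log pi)/d(logits)  to the logits in place.
    """
    rng = rng or random
    remaining = list(range(len(logits)))
    grad = [0.0] * len(logits)   # accumulated d(log pi)/d(logit_j)
    chosen = []
    for _ in range(k):
        probs = softmax([logits[i] for i in remaining])
        pick = rng.choices(range(len(remaining)), weights=probs)[0]
        # For a softmax policy: d log p(pick) / d logit_j = 1[j == pick] - p_j
        for pos, j in enumerate(remaining):
            grad[j] += (1.0 if pos == pick else 0.0) - probs[pos]
        chosen.append(remaining.pop(pick))
    reward = reward_fn(chosen)   # stand-in for the causal effect of the subgraph
    for j in range(len(logits)):
        logits[j] += lr * reward * grad[j]
    return chosen, reward
```

As a toy usage, a reward that favors subgraphs containing edge 0 steadily raises that edge's logit over repeated episodes, so the policy learns to include it — the same mechanism by which the explainer concentrates probability mass on causally important edges.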
