In this article, we propose a new graph neural network (GNN) explainability model, CiRLExplainer, which explains GNN predictions from a causal attribution perspective. First, a causal graph is constructed to analyze the causal relationships between the graph structure and the GNN predictions, identifying node attributes as confounding factors between the two. A backdoor adjustment strategy is then employed to remove the influence of these confounders. In addition, because the edges within the graph structure are not independent, reinforcement learning is incorporated: an explanatory subgraph is generated through a sequential selection process in which each step evaluates the joint effect of a candidate edge and the previously selected structure. Specifically, a policy network predicts the probability of each candidate edge being selected and adds a new edge by sampling; the causal effect of this action is quantified as a reward, reflecting the interactions among edges. Training with policy gradients maximizes the cumulative reward of the edge sequence. CiRLExplainer is model-agnostic and can be applied to any GNN. A series of experiments was conducted, including accuracy (ACC) analysis of the explanation results, visualization of the explanatory subgraphs, and ablation studies on treating node attributes as confounding factors. The experimental results demonstrate that our model not only outperforms current state-of-the-art explanation techniques but also provides precise semantic explanations from a causal perspective. The experiments also validate the rationale for treating node attributes as confounding factors, which enhances the explanatory power and ACC of the model. Notably, across the three datasets, our explainer improves on the best baseline models in the ACC-area under the curve (AUC) metric by 5.89%, 5.69%, and 4.87%, respectively.
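
As an illustrative sketch only, and not the authors' implementation, the following PyTorch code shows how a policy network could score candidate edges, grow an explanatory subgraph by sequential sampling, and be trained with a REINFORCE-style policy gradient so that the cumulative reward of the edge sequence is maximized. The names EdgePolicy, causal_effect_reward, and run_episode are hypothetical, and the reward function is only a placeholder for the backdoor-adjusted causal effect described above.

```python
# Minimal sketch (assumed names, not the paper's code): a policy network scores
# candidate edges, a subgraph is grown one edge per step by sampling, and the
# policy is trained with a REINFORCE-style gradient so the cumulative reward of
# the edge sequence is maximized.
import torch
import torch.nn as nn


class EdgePolicy(nn.Module):
    """Scores candidate edges given their features and a summary of the
    partially built explanatory subgraph (hypothetical architecture)."""

    def __init__(self, edge_dim: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(edge_dim * 2, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, edge_feats: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # edge_feats: [num_candidates, edge_dim]; state: [edge_dim]
        state = state.expand(edge_feats.size(0), -1)
        logits = self.mlp(torch.cat([edge_feats, state], dim=-1)).squeeze(-1)
        return logits  # unnormalized scores over candidate edges


def causal_effect_reward(selected: list, new_edge: int) -> float:
    """Placeholder for the backdoor-adjusted causal effect of adding new_edge
    to the current subgraph; a real implementation would query the GNN."""
    return torch.rand(1).item()  # stand-in value for illustration only


def run_episode(policy: EdgePolicy, edge_feats: torch.Tensor, budget: int):
    """Sequentially sample `budget` edges; return chosen edges, log-probs, rewards.
    For simplicity, previously chosen edges are not masked out."""
    selected, log_probs, rewards = [], [], []
    state = torch.zeros(edge_feats.size(1))
    for _ in range(budget):
        dist = torch.distributions.Categorical(logits=policy(edge_feats, state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        rewards.append(causal_effect_reward(selected, action.item()))
        selected.append(action.item())
        state = state + edge_feats[action]  # crude summary of the chosen structure
    return selected, torch.stack(log_probs), torch.tensor(rewards)


# REINFORCE update: maximize the expected cumulative reward of the edge sequence.
edge_feats = torch.randn(20, 8)            # 20 candidate edges, 8-dim features
policy = EdgePolicy(edge_dim=8)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for _ in range(50):
    _, log_probs, rewards = run_episode(policy, edge_feats, budget=5)
    # reward-to-go for each step of the edge sequence
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    loss = -(log_probs * returns).sum()    # policy-gradient loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the per-step reward stands in for the causal effect of the newly added edge given the structure selected so far, which is how the sequential selection can account for interactions among edges rather than scoring each edge in isolation.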