Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which significantly hinders the development of deep learning technologies in high-security domains. A key challenge is that current defense methods often lack universality: they are effective only against certain types of adversarial attacks. This study addresses this challenge by analyzing adversarial examples through the changes they induce in model attention and by classifying attack algorithms into attention-shifting and attention-attenuation categories. Our main contribution is two defense modules: the Feature Pyramid-based Attention Space-guided (FPAS) module, which counters attention-shifting attacks, and the Attention-based Non-Local (ANL) module, which mitigates attention-attenuation attacks. Both modules enhance the model's defense capability with minimal intrusion into the original model. By integrating FPAS and ANL into the Wide-ResNet model within a boosting framework, we demonstrate their synergistic defense capability. Even when adversarial examples are embedded with patches, our models achieve significant improvements over the baseline, raising the average defense rate by 5.47% and 7.74%, respectively. Extensive experiments confirm that this universal defense strategy offers comprehensive protection against adversarial attacks at a lower implementation cost than current mainstream defense methods, and that it can be integrated with existing defense strategies to further enhance adversarial robustness.
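
The integration described above can be pictured as attaching a lightweight defense module to a Wide-ResNet backbone and then combining the FPAS- and ANL-equipped models in a boosting-style ensemble. The following is only a minimal conceptual sketch under stated assumptions: the `backbone.features`/`backbone.classifier` split, the placeholder defense modules, and the fixed ensemble weights are illustrative and are not the paper's actual implementation.

```python
import torch.nn as nn


class DefendedWideResNet(nn.Module):
    """Wide-ResNet backbone with a lightweight defense module inserted
    after an intermediate stage (minimal intrusion into the original model).

    Assumption: `backbone` exposes a `features` extractor and a
    `classifier` head; the paper does not specify this interface."""

    def __init__(self, backbone: nn.Module, defense: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.defense = defense  # e.g. an FPAS- or ANL-style attention module

    def forward(self, x):
        feats = self.backbone.features(x)       # assumed feature extractor
        feats = self.defense(feats)             # re-weight / restore attention
        return self.backbone.classifier(feats)  # assumed classification head


class BoostedDefense(nn.Module):
    """Boosting-style combination of the FPAS- and ANL-equipped models,
    sketched here as a weighted sum of their logits (weights are illustrative)."""

    def __init__(self, model_fpas: nn.Module, model_anl: nn.Module,
                 weights=(0.5, 0.5)):
        super().__init__()
        self.model_fpas = model_fpas
        self.model_anl = model_anl
        self.weights = weights

    def forward(self, x):
        w1, w2 = self.weights
        return w1 * self.model_fpas(x) + w2 * self.model_anl(x)
```

In this reading, each defended model contributes its own attention-correcting behavior, and the ensemble combination is what yields the synergistic defense reported in the abstract.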