Monocular 3D object detection is rapidly emerging as a key research direction in autonomous driving owing to its resource efficiency and ease of implementation. However, existing methods remain limited in cross-dimensional feature attention and in modeling multi-order contextual information, which constrains their detection performance in complex scenes. To address these issues, we propose MonoAMP, an adaptive multi-order perceptual aggregation algorithm for monocular 3D object detection. First, we introduce triplet attention to strengthen cross-dimensional attention interactions among features. Second, we design an adaptive multi-order perceptual aggregation module that dynamically captures multi-order contextual information and adaptively aggregates it to enhance target perception. Finally, we propose an uncertainty-guided adaptive depth ensemble strategy that models the uncertainty distribution of depth estimation and effectively fuses multiple depth predictions. Experiments on the KITTI dataset show that MonoAMP achieves 16.80% AP3D and 24.47% APBEV at the moderate difficulty level, and ablation studies show a 3.78% improvement in detection accuracy over the baseline. Compared with other state-of-the-art methods, MonoAMP demonstrates superior detection capability, especially in complex scenarios.
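To make the depth-ensemble idea concrete, the sketch below illustrates one common way such a fusion can be realized: each depth candidate carries a predicted uncertainty, and the candidates are averaged with softmax weights derived from the negative uncertainty. The function name, tensor shapes, and the specific weighting rule are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Minimal sketch of uncertainty-weighted fusion of several depth candidates.
# The inverse-uncertainty softmax weighting and all names/shapes here are
# assumptions for illustration only.
import torch

def fuse_depths(depth_candidates: torch.Tensor,
                log_sigma: torch.Tensor) -> torch.Tensor:
    """Fuse K depth predictions per object using predicted uncertainties.

    depth_candidates: (N, K) depth estimates for N objects from K sources
                      (e.g., direct regression and geometry-based estimates).
    log_sigma:        (N, K) predicted log standard deviations (aleatoric
                      uncertainty) for each candidate.
    Returns:          (N,) fused depth per object.
    """
    # Higher uncertainty -> smaller weight; softmax normalizes across candidates.
    weights = torch.softmax(-log_sigma, dim=1)        # (N, K)
    fused = (weights * depth_candidates).sum(dim=1)   # (N,)
    return fused

# Example: 2 objects, 3 depth candidates each.
depths = torch.tensor([[22.1, 23.0, 21.5],
                       [45.2, 44.0, 46.8]])
log_sig = torch.tensor([[0.1, 0.8, 0.3],
                        [0.5, 0.2, 1.0]])
print(fuse_depths(depths, log_sig))
```

Under this weighting, candidates the network deems less certain contribute less to the final depth, which is the intuition behind uncertainty-guided ensembling.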