Foundation models such as the Segment Anything Model (SAM) have demonstrated impressive generalization across diverse image segmentation tasks. However, they struggle in medical imaging, primarily because they lack domain-specific knowledge and annotated data are scarce. Existing methods for adapting SAM typically rely on expert-driven prompt design and extensive fine-tuning, which limits their effectiveness in medical imaging, particularly for rare and complex anatomical structures. To overcome these challenges, we propose MASG-SAM, a framework for efficient few-shot medical image segmentation. MASG-SAM integrates three key components: the Hierarchical Attention Enhancement (HAE), Boundary Feature Enhancement (BFE), and Dynamic Semantic Fusion (DSF) modules. The HAE module optimizes attention distribution across hierarchical feature maps, enhancing feature diversity and reducing feature drift, thereby improving segmentation of both global and local structures in complex medical images. The BFE module introduces a boundary-sensitive mechanism that strengthens edge detection, enabling precise delineation of overlapping or hard-to-separate anatomical structures. Finally, the DSF module leverages Contrastive Language-Image Pretraining (CLIP) to inject domain-specific medical semantic knowledge; by adaptively refining feature fusion during training, it combines semantic guidance with spatial adjustment, progressively improving segmentation accuracy, particularly in data-scarce scenarios. Experiments on four publicly available medical datasets show that MASG-SAM outperforms state-of-the-art methods, achieving high segmentation accuracy with minimal labeled data, and substantially improves the adaptability and accuracy of SAM on complex medical imaging tasks. The code for MASG-SAM will be made publicly available at https://github.com/ggllllll/MASG-SAM.git.
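As a rough illustration of the kind of semantic-guided fusion the DSF module describes, a class-level text embedding (such as one produced by CLIP) can modulate spatial image features through a learned gate. This is a minimal numerical sketch under assumed shapes and a sigmoid-gated blending scheme; the function and parameter names are hypothetical and do not reflect the authors' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_fusion(img_feat, txt_emb, proj, gate):
    """Blend spatial image features with a text-derived channel modulation.

    img_feat: (B, C, H, W) spatial features (e.g. from SAM's image encoder)
    txt_emb:  (B, D) class-level text embedding (e.g. from CLIP's text encoder)
    proj:     (D, C) learned projection from text space to image channels
    gate:     scalar logit controlling how much semantic guidance is mixed in
    """
    sem = txt_emb @ proj                      # (B, C) semantic channel weights
    alpha = 1.0 / (1.0 + np.exp(-gate))       # sigmoid gate in [0, 1]
    sem = sem[:, :, None, None]               # broadcast over H and W
    return (1.0 - alpha) * img_feat + alpha * img_feat * sem

img = rng.standard_normal((2, 256, 64, 64))   # stand-in image features
txt = rng.standard_normal((2, 512))           # stand-in CLIP text embedding
proj = rng.standard_normal((512, 256)) * 0.01
out = semantic_fusion(img, txt, proj, gate=-4.0)  # low gate: mostly image features
print(out.shape)  # (2, 256, 64, 64)
```

In a trainable version the gate and projection would be learned parameters, letting the network start from nearly pure image features and progressively admit more semantic guidance as training proceeds, matching the "adaptively refining feature fusion" behavior the abstract attributes to DSF.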