BACKGROUND: Breast tumor segmentation is a critical aspect of magnetic resonance imaging (MRI)-based breast disease diagnosis. Numerous networks and algorithms, including U-Net and its enhancements, have been proposed for breast tumor segmentation. However, existing methods have shortcomings, including the insufficient extraction of multi-scale contextual information, which makes it difficult to adapt to tumors of different sizes and to distinguish tumor boundaries from the surrounding tissue. Additionally, the feature extraction process lacks specificity and is prone to interference from irrelevant information outside the tumor region. This study aimed to address these challenges and achieve the accurate, automated segmentation of breast tumors in MRI scans. METHODS: A new three-dimensional (3D) breast tumor segmentation network, named the multi-scale hybrid attention U-shaped network (MHAU-Net), was designed. The network used four sets of atrous convolutions with different dilation rates to extract multi-scale contextual information. Global pooling and single-channel convolution structures were employed to construct channel and spatial attention blocks. Subsequently, the network integrated the four sets of atrous convolutions with the spatial and channel attention blocks to extract hybrid attention features. Compared to existing MRI segmentation networks for breast tumors, the MHAU-Net demonstrated superior performance in extracting informative features and adapting to tumors of diverse sizes and shapes. RESULTS: To evaluate the proposed approach, we curated a large-scale breast MRI dataset comprising 906 3D images. A comparative analysis with seven commonly used segmentation networks revealed the superior performance of our method. Our network achieved a Dice similarity coefficient (DSC) and intersection over union (IoU) of 84.1%±2.1% and 74.2%±3.4%, respectively, representing improvements of 6.0% and 7.1% over the baseline 3D U-Net.
Additionally, our method achieved DSC values of 85.7%±1.6%, 84.3%±2.8%, 86.7%±1.7%, and 86.3%±1.5% for single, small, large, and mass tumors, respectively. CONCLUSIONS: Our results highlight the superior overall performance of the proposed method and show its ability to adapt to various types of tumor images. This study establishes a solid foundation for further exploring the application value of deep learning in breast cancer diagnosis.
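The multi-scale hybrid attention design described in METHODS can be illustrated with a minimal sketch. The abstract does not specify the dilation rates, kernel sizes, or attention formulas, so everything below is an illustrative assumption: four parallel dilated (atrous) convolution branches, a channel block built from global average pooling, and a spatial block built from a single-channel (1×1-style) convolution, simplified to 2D NumPy for clarity rather than the paper's 3D network.

```python
import numpy as np


def dilated_conv2d(x, kernel, dilation):
    """Naive 'same'-padded 2D atrous convolution; x: (H, W), kernel: (k, k)."""
    k = kernel.shape[0]
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample the input on a grid spaced by the dilation rate.
            acc = 0.0
            for u in range(k):
                for v in range(k):
                    acc += kernel[u, v] * xp[i + u * dilation, j + v * dilation]
            out[i, j] = acc
    return out


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def channel_attention(feats):
    """Channel block: global average pooling per channel -> sigmoid gate."""
    gap = feats.mean(axis=(1, 2))          # (C,) global descriptor
    gate = sigmoid(gap)                    # per-channel weight in (0, 1)
    return feats * gate[:, None, None]


def spatial_attention(feats, w):
    """Spatial block: collapse channels to one map (single-channel conv) -> gate."""
    m = sigmoid(np.tensordot(w, feats, axes=1))  # (H, W) attention map
    return feats * m[None, :, :]


def hybrid_attention_block(x, kernel, rates=(1, 2, 4, 8)):
    """Hypothetical block: parallel atrous branches refined by channel and
    spatial attention, then fused by summation (rates are assumed)."""
    branches = np.stack([dilated_conv2d(x, kernel, r) for r in rates])
    branches = channel_attention(branches)
    branches = spatial_attention(branches, w=np.ones(len(rates)) / len(rates))
    return branches.sum(axis=0)


rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
kernel = np.full((3, 3), 1.0 / 9.0)      # simple averaging kernel
y = hybrid_attention_block(x, kernel)
print(y.shape)
```

Each branch sees the same 3×3 kernel footprint but a wider receptive field as the dilation rate grows, which is the mechanism that lets a single block cover tumors of different sizes; the attention gates then reweight branches (channel block) and locations (spatial block) before fusion.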