Medical image segmentation is a critical task in medical image analysis and processing. Although the UNet architecture is widely acknowledged for its effectiveness in medical image segmentation, it does not fully exploit its inherent strengths or use contextual information efficiently. In response, this research introduces an architecture named Deep Atrous Attention UNet (DAA-UNet), which incorporates an attention module and an Atrous Spatial Pyramid Pooling (ASPP) module into UNet. The primary objective is to improve both the efficiency and the accuracy of medical image segmentation, with a specific focus on chest X-ray (CXR) images. DAA-UNet combines the core strengths of UNet, ASPP, and attention mechanisms: the attention block improves segmentation by prioritising the encoder features passed to the decoder layers, while ASPP captures contextual information at multiple scales. We evaluate the proposed model on a tuberculosis dataset. The validation results show an average accuracy of 97.15%, an average Intersection over Union (IoU) of 92.37%, and an average Dice Coefficient (DC) of 93.25%. In both qualitative and quantitative assessments of lung segmentation, the proposed model outperforms UNet and other selected reference architectures.
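
To make the two named building blocks concrete, the following is a minimal PyTorch-style sketch of an attention gate applied to a skip connection and an ASPP block; the class names, channel widths, and dilation rates (1, 6, 12, 18) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Re-weights encoder (skip) features using a coarser decoder signal,
    so the decoder attends to the most relevant spatial regions."""

    def __init__(self, enc_channels, dec_channels, inter_channels):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        # Bring the decoder (gating) signal to the encoder's spatial resolution.
        dec_up = F.interpolate(dec_feat, size=enc_feat.shape[2:],
                               mode="bilinear", align_corners=False)
        # Additive attention: project, combine, and squash to a [0, 1] mask.
        attn = torch.sigmoid(self.psi(F.relu(self.w_enc(enc_feat) + self.w_dec(dec_up))))
        return enc_feat * attn  # gated skip connection


class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions
    capture context at several receptive-field sizes."""

    def __init__(self, in_channels, out_channels, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x):
        # Concatenate all dilation branches and fuse them with a 1x1 convolution.
        return self.project(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))
```

In a DAA-UNet-style network, an ASPP block of this kind would typically sit at the UNet bottleneck, while an attention gate would be applied to each skip connection before its output is concatenated with the corresponding decoder features.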