Segmenting skin lesions from dermatoscopic images is crucial for the quantitative analysis of skin cancer. However, automatic skin lesion segmentation remains challenging due to unclear boundaries, imaging artifacts, and occlusions such as hair and veins. Transformers have demonstrated superior capability in capturing long-range dependencies through self-attention and are gradually replacing CNNs in this domain. However, a primary limitation of transformers is their weakness in capturing local detail, which is essential for delineating unclear boundaries and strongly affects segmentation accuracy. To address this issue, we propose a novel boundary-aware dual-decoder transformer that employs a single-encoder, dual-decoder framework for both skin lesion segmentation and dilated boundary segmentation. Within this model, we introduce a shifted window cross-attention block to build the dual-decoder structure and apply multi-task distillation to enable efficient exchange of inter-task information. Additionally, we propose a multi-scale aggregation strategy that refines the extracted features to improve the final predictions. To further enhance boundary details, we incorporate a dilated boundary loss function, which expands the single-pixel boundary mask into a planar region. We also introduce a task-wise consistency loss to encourage consistent predictions across tasks. Our method is evaluated on three datasets: ISIC2018, ISIC2017, and PH².
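To make the dilated boundary supervision concrete, the sketch below shows one plausible way to build the planar boundary target and compute the loss. This is a minimal illustration assuming PyTorch and binary lesion masks; the 3×3 morphological kernel, the number of dilation iterations, the erosion-residue boundary extraction, and the binary cross-entropy formulation are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def boundary_from_mask(mask: torch.Tensor) -> torch.Tensor:
    """Extract an approximately single-pixel boundary from a binary
    lesion mask of shape (B, 1, H, W): a pixel is on the boundary if it
    is foreground but erosion removes it (morphological erosion residue).
    """
    # Erosion of a {0,1} mask via max pooling on the negated mask.
    eroded = -F.max_pool2d(-mask, kernel_size=3, stride=1, padding=1)
    return (mask - eroded).clamp(0, 1)

def dilate(mask: torch.Tensor, iterations: int = 2) -> torch.Tensor:
    """Morphological dilation via max pooling; each iteration grows the
    contour by one pixel, expanding the thin boundary into a planar band."""
    for _ in range(iterations):
        mask = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
    return mask

def dilated_boundary_loss(boundary_logits: torch.Tensor,
                          lesion_mask: torch.Tensor,
                          iterations: int = 2) -> torch.Tensor:
    """Hypothetical form of the dilated boundary loss: BCE between the
    boundary decoder's logits and the dilated boundary target derived
    from the ground-truth lesion mask."""
    target = dilate(boundary_from_mask(lesion_mask.float()), iterations)
    return F.binary_cross_entropy_with_logits(boundary_logits, target)
```

Expanding the one-pixel contour into a band gives the boundary decoder a denser supervision signal than a thin contour, which is otherwise easily lost under downsampling in the encoder.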