BDFormer: Boundary-aware dual-decoder transformer for skin lesion segmentation.

Authors: Zexuan Ji, Xiao Ma, Yuxuan Ye

Language: English

Published in: Artificial Intelligence in Medicine, Netherlands, 2025

Collection: NCBI

ID: 474898

Segmenting skin lesions from dermatoscopic images is crucial for the quantitative analysis of skin cancer. However, automatic segmentation remains challenging due to unclear boundaries, artifacts, and obstructions such as hair and veins, all of which complicate the segmentation process. Transformers capture long-range dependencies through self-attention and are gradually replacing CNNs in this domain, but a primary limitation is their inability to effectively capture local details, which is crucial for handling unclear boundaries and significantly affects segmentation accuracy. To address this issue, we propose a novel boundary-aware dual-decoder transformer that employs a single-encoder, dual-decoder framework for both skin lesion segmentation and dilated boundary segmentation. Within this model, we introduce a shifted-window cross-attention block to build the dual-decoder structure and apply multi-task distillation to enable efficient exchange of inter-task information. Additionally, we propose a multi-scale aggregation strategy to refine the extracted features, ensuring optimal predictions. To further enhance boundary details, we incorporate a dilated boundary loss function, which expands the single-pixel boundary mask into planar information. We also introduce a task-wise consistency loss to promote consistency across tasks. Our method is evaluated on three datasets: ISIC2018, ISIC2017, and PH2.
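The dilated boundary target described in the abstract can be illustrated with a short sketch: extract the single-pixel boundary of a lesion mask, then morphologically dilate it into a planar band. The function name and the band width are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def dilated_boundary_mask(lesion_mask: np.ndarray, width: int = 2) -> np.ndarray:
    """Expand the single-pixel lesion boundary into a planar band.

    Hypothetical sketch of the dilated-boundary supervision target:
    the boundary is the mask minus its erosion, then dilated `width` times.
    """
    m = lesion_mask.astype(bool)
    boundary = m & ~binary_erosion(m)              # single-pixel contour
    return binary_dilation(boundary, iterations=width)

# Toy example: a square "lesion" on a 16x16 grid
mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True
band = dilated_boundary_mask(mask, width=2)
```

The band can then serve as the target of the boundary decoder, giving the boundary loss a thicker, easier-to-learn supervision signal than a one-pixel contour.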