Deformable Graph Transformer.


Authors: Jung-Woo Ha, Jisu Jeong, Jaewoo Kang, Hyunwoo J Kim, Kyung-Min Kim, Hyeonjin Park, Jinyoung Park, Seongjun Yun

Language: English

Classification: 627.12 Rivers and streams

Publication information: United States: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025

Physical description:

Collection: NCBI

ID: 707158

Transformer-based models have recently shown success in representation learning on graph-structured data beyond natural language processing and computer vision. However, this success has been limited to small-scale graphs due to the drawbacks of full dot-product attention on graphs, such as quadratic complexity in the number of nodes and message aggregation from a large number of irrelevant nodes. To address these issues, we propose the Deformable Graph Transformer (DGT), which performs sparse attention over dynamically selected relevant nodes, efficiently handling large-scale graphs with linear complexity in the number of nodes. Specifically, our framework first constructs multiple node sequences with various criteria to consider both structural and semantic proximity. Then, combined with our learnable Katz Positional Encoding, sparse attention is applied to these node sequences to learn node representations at a significantly reduced computational cost. Extensive experiments demonstrate that DGT achieves superior performance on 7 graph benchmark datasets with 2.5 to 449 times less computation than transformer-based graph models with full attention.
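To make the abstract's two main ingredients concrete, below is a minimal PyTorch sketch of (1) a truncated Katz proximity matrix, the kind of quantity a learnable Katz Positional Encoding could be built from, and (2) dot-product attention restricted to K selected nodes per query, which is what yields linear rather than quadratic cost in the number of nodes. All names (katz_positional_encoding, sparse_node_attention, beta, num_hops) and the node-selection rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Katz-style proximity and sparse per-query attention.
# Names and the top-k selection rule are hypothetical; the paper's
# actual DGT architecture may differ.
import torch
import torch.nn.functional as F

def katz_positional_encoding(adj: torch.Tensor, num_hops: int = 4,
                             beta: float = 0.1) -> torch.Tensor:
    """Truncated Katz index: sum_{k=1..num_hops} beta^k * A^k.

    Returns an [N, N] proximity matrix; projecting its rows with a
    learnable layer could give per-node positional features.
    """
    katz = torch.zeros_like(adj)
    power = torch.eye(adj.size(0))
    for k in range(1, num_hops + 1):
        power = power @ adj               # A^k
        katz = katz + (beta ** k) * power
    return katz

def sparse_node_attention(q, k, v, selected_idx):
    """Attention where each query attends only to K selected nodes.

    q, k, v: [N, d]; selected_idx: [N, K] long tensor of node indices.
    Cost is O(N*K*d) instead of O(N^2*d) for full attention.
    """
    d = q.size(-1)
    k_sel = k[selected_idx]                               # [N, K, d]
    v_sel = v[selected_idx]                               # [N, K, d]
    scores = (q.unsqueeze(1) * k_sel).sum(-1) / d ** 0.5  # [N, K]
    weights = F.softmax(scores, dim=-1)                   # [N, K]
    return (weights.unsqueeze(-1) * v_sel).sum(1)         # [N, d]

# Usage: 5 nodes, each query attends to its 2 most Katz-proximate nodes.
N, d, K = 5, 8, 2
adj = (torch.rand(N, N) < 0.4).float()
pe = katz_positional_encoding(adj)
q = k = v = torch.randn(N, d)
idx = torch.topk(pe, K, dim=-1).indices    # select nodes by proximity
out = sparse_node_attention(q, k, v, idx)  # [N, d]
```

In the paper itself the attended nodes are selected dynamically by the model rather than by a fixed top-k rule; the fixed rule here only serves to keep the sketch self-contained.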