Cancer segmentation in whole-slide images is a fundamental step in estimating tumor burden, which is crucial for cancer assessment. However, challenges such as vague boundaries and small regions dissociated from viable tumor areas make it a complex task. Motivated by the usefulness of multi-scale features in various vision tasks, we present a structure-aware, scale-adaptive feature selection method for efficient and accurate cancer segmentation. Built on a segmentation network with a popular encoder-decoder architecture, a scale-adaptive module is proposed to select more robust features that better represent vague, non-rigid boundaries. Furthermore, a structural similarity metric is introduced to enhance tissue structure awareness and improve small-region segmentation. In addition, advanced designs, including several attention mechanisms and selective-kernel convolutions, are incorporated into the baseline network for comparison. Extensive experimental results demonstrate that the proposed structure-aware, scale-adaptive network achieves outstanding performance on liver cancer segmentation compared with the top submissions to the PAIP 2019 challenge. Further evaluation on colorectal cancer segmentation shows that the scale-adaptive module improves the baseline network and outperforms the other advanced attention designs, particularly when the trade-off between efficiency and accuracy is considered. The source code is publicly available at https://github.com/IMOP-lab/Scale-Adaptive-Net.