Meeting strict medical data privacy standards while delivering efficient and accurate breast cancer segmentation is a critical challenge. This paper addresses this challenge by proposing a lightweight solution capable of running directly in the user's browser, ensuring that medical data never leave the user's computer. Our proposed solution consists of a two-stage model: a pre-trained YOLOv5-nano variant handles mass detection, while a lightweight neural network of only 20k parameters, with an inference time of 21 ms per image, handles segmentation. This efficiency in inference speed and memory consumption was achieved by combining well-known techniques, namely the SegNet architecture and depthwise separable convolutions. The detection model achieves an mAP@50 of 50.3% on the CBIS-DDSM dataset and 68.2% on the INbreast dataset. Despite its size, our segmentation model delivers high performance on the CBIS-DDSM (81.0% IoU, 89.4% Dice) and INbreast (77.3% IoU, 87.0% Dice) datasets.
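To make the parameter savings behind such a small segmentation network concrete, the sketch below illustrates a depthwise separable convolution block of the kind that can replace standard convolutions in a SegNet-style encoder-decoder. The PyTorch framing, block name, and channel sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution, using far fewer
    parameters than a standard 3x3 convolution with the same channels."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Illustrative parameter comparison for one 32 -> 64 channel block:
#   standard 3x3 convolution:   3*3*32*64 = 18,432 weights
#   depthwise separable block:  3*3*32 + 32*64 = 2,336 weights (~8x fewer)
```

Stacking blocks like this in place of standard convolutions is what makes a total budget on the order of 20k parameters plausible for an encoder-decoder segmentation network.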