Occluded person re-identification (ReID) poses the significant challenge of matching occluded pedestrians to their holistic counterparts across diverse camera views and scenarios. Robust representation learning is crucial in this context, given the unique difficulties introduced by occlusions. First, occlusions often result in missing or distorted appearance information, making accurate feature extraction difficult. Second, most existing methods learn representations from isolated images, overlooking the relational information available within image batches. To address these challenges, we propose a