Harassment in Social Virtual Reality (SVR) is a growing concern. The current SVR landscape features inconsistent access to non-standardised safety features, with minimal empirical evidence of their real-world effectiveness, usage and impact. We examine the use and effectiveness of safety tools across 12 popular SVR platforms by surveying 100 users about their experiences of different types of harassment and their use of features such as muting, blocking, personal spaces and safety gestures. While harassment remains common, including hate speech, virtual stalking and physical harassment, many users find safety features insufficient or inconsistently applied. Reactive tools such as muting and blocking are widely used, largely because users are familiar with them from other platforms. Safety tools are also used proactively to curate individual virtual experiences, protecting users from harassment but inadvertently fragmenting social spaces. We advocate standardising proactive, rather than reactive, anti-harassment tools across platforms, and present insights to guide future safety feature development.