Vision-language pre-training (VLP) models have achieved remarkable success in medical imaging, yet they remain vulnerable to adversarial examples. Although adversarial attacks are harmful, they are valuable for revealing the weaknesses of VLP models and for enhancing their robustness. However, existing methods under-utilize both the differences between modalities and the features the modalities share, so the effectiveness and transferability of their adversarial examples remain unsatisfactory. To address this issue and enhance attack effectiveness and transferability, we propose the multimodal feature heterogeneous attack (MFHA) framework. To strengthen adversarial capability, we propose a feature heterogenization method based on triplet contrastive learning, which combines data augmentation with cross-modal global contrastive learning, intra-modal contrastive learning, and cross-modal global-local mutual information contrastive learning; this heterogenizes the features that are consistent across modalities into distinct features, thereby improving adversarial strength. To improve transferability, we propose a multi-domain feature perturbation method based on cross-modal variance aggregation, which uses text-guided image attacks to perturb consistent features in both the spatial and frequency domains while incorporating accumulated gradient momentum. Extensive experiments demonstrate MFHA's clear advantage in transferable attack capability, with an average improvement of 16.05%, as well as strong attack performance against multimodal large language models such as MiniGPT4 and LLaVA. Our code is open-sourced on GitHub: https://github.com/doyoudooo/MFHA .
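To make the transferability idea concrete, the sketch below illustrates in plain PyTorch one step of a text-guided image attack in the spirit described above: the objective pushes the adversarial image embedding away from its matched text embedding (heterogenizing the cross-modal consistent features), the loss is applied to both the spatial image and a low-pass frequency-domain view (multi-domain perturbation), gradients from a few random neighbours are averaged as a simplified stand-in for cross-modal variance aggregation, and previous gradients are accumulated as momentum. All names (`attack_step`, `low_pass_mask`, the toy encoder) and hyperparameters are illustrative assumptions, not the released MFHA implementation.

```python
# Illustrative sketch only: the toy encoder and helper names are assumptions,
# not the released MFHA code.
import torch
import torch.nn.functional as F


def low_pass_mask(size, radius, device):
    """Binary mask keeping low-frequency components of a centred spectrum."""
    h, w = size
    yy, xx = torch.meshgrid(torch.arange(h, device=device),
                            torch.arange(w, device=device), indexing="ij")
    dist = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float().sqrt()
    return (dist <= radius).float()


def cross_modal_loss(adv, img_encoder, txt_embed):
    """Cosine similarity between the adversarial image embedding and the
    matched text embedding; the attack drives this down."""
    emb = F.normalize(img_encoder(adv), dim=-1)
    return F.cosine_similarity(emb, txt_embed, dim=-1).mean()


def attack_step(image, delta, momentum, img_encoder, txt_embed,
                eps=8 / 255, alpha=2 / 255, mu=1.0, n_neighbors=2,
                beta=1.5, radius=8):
    """One momentum step of a text-guided, multi-domain image attack."""
    delta = delta.detach().requires_grad_(True)
    adv = (image + delta).clamp(0, 1)

    # Frequency-domain view: low-pass the adversarial image so the attack
    # also disrupts features that survive frequency filtering.
    spec = torch.fft.fftshift(torch.fft.fft2(adv), dim=(-2, -1))
    mask = low_pass_mask(adv.shape[-2:], radius, adv.device)
    adv_low = torch.fft.ifft2(
        torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real.clamp(0, 1)

    loss = (cross_modal_loss(adv, img_encoder, txt_embed)
            + cross_modal_loss(adv_low, img_encoder, txt_embed))
    grad = torch.autograd.grad(loss, delta)[0]

    # Simplified stand-in for variance aggregation: average gradients from a
    # few random neighbours of the current perturbation.
    agg = grad.clone()
    for _ in range(n_neighbors):
        noise = (torch.rand_like(delta) * 2 - 1) * beta * eps
        d_n = (delta.detach() + noise).requires_grad_(True)
        loss_n = cross_modal_loss((image + d_n).clamp(0, 1),
                                  img_encoder, txt_embed)
        agg += torch.autograd.grad(loss_n, d_n)[0]
    grad = agg / (n_neighbors + 1)

    # Accumulate previous gradients as momentum, then take a signed step that
    # decreases cross-modal similarity, staying inside the L-inf ball.
    momentum = mu * momentum + grad / grad.abs().mean(
        dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
    delta = (delta - alpha * momentum.sign()).clamp(-eps, eps)
    delta = (image + delta).clamp(0, 1) - image  # keep adv image valid
    return delta.detach(), momentum.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-ins for a VLP image encoder and a matched text embedding;
    # in practice these would come from a CLIP/ALBEF-style model.
    img_encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                      torch.nn.Linear(3 * 32 * 32, 64))
    txt_embed = F.normalize(torch.randn(1, 64), dim=-1)
    image, delta = torch.rand(1, 3, 32, 32), torch.zeros(1, 3, 32, 32)
    momentum = torch.zeros_like(image)
    for _ in range(10):
        delta, momentum = attack_step(image, delta, momentum,
                                      img_encoder, txt_embed)
    print("similarity after attack:",
          cross_modal_loss((image + delta).clamp(0, 1),
                           img_encoder, txt_embed).item())
```

The full framework additionally applies the triplet contrastive heterogenization losses and data augmentation described above; see the GitHub repository for the authors' implementation.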