Immune thrombocytopenia (ITP) is a typically self-limiting, immune-mediated bleeding disorder in children. Approximately 20% of children with ITP progress to chronic disease, leading to reduced quality of life and increased treatment burden. Accurate prediction of chronicity would enable clinicians to devise personalized treatment plans at an early stage. However, owing to the self-limiting nature of ITP and the scarcity of available pediatric patients, the data suffer from two prominent issues: small sample size and class imbalance, both of which hinder the effective training of deep learning models. To handle these issues concurrently, we propose a novel method that integrates contrastive learning with the Transformer. First, we adopt the FT-Transformer as our backbone, which allows the model to flexibly process heterogeneous tabular data. Second, we amplify and balance the original data via random masking and oversampling, respectively. Finally, we build contrastive pairs from the latent representations generated by the FT-Transformer encoder, so that the augmented and oversampled synthetic data can be fully exploited. Experimental results on real-world pediatric ITP data show that our proposal outperforms state-of-the-art methods and demonstrate its significant advantages in handling data scarcity and class imbalance.
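To make the augmentation and contrastive steps concrete, the sketch below shows one plausible realization of random feature masking and an InfoNCE-style contrastive loss over encoder embeddings. The function names, masking probability, temperature, and the `encoder` object are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def random_mask(x, mask_prob=0.15):
    """Create an augmented view of a tabular batch by zeroing out a random
    subset of numerical features. mask_prob is an illustrative value."""
    mask = torch.rand_like(x) < mask_prob
    return x.masked_fill(mask, 0.0)

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss over two views' latent representations, one way to
    build contrastive pairs from encoder outputs (the paper's exact
    formulation may differ)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2N, d)
    sim = z @ z.t() / temperature               # pairwise cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # drop self-similarity
    # Positive of sample i is its other view at index i + N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage with an FT-Transformer encoder `encoder`:
# x_num = ...                       # (N, num_features) numerical tabular batch
# z1 = encoder(x_num)               # latent representation of the original view
# z2 = encoder(random_mask(x_num))  # latent representation of the masked view
# loss = contrastive_loss(z1, z2)
```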