In modern telehealth and healthcare information systems, medical image analysis is essential for understanding the context of images and their complex structure across large, inconsistent-quality, and distributed datasets. Achieving the desired results with deep learning faces several challenges, including data size, labeling, class balancing, training, and feature extraction. These challenges make AI models complex and expensive to build and difficult to interpret, turning them into black boxes that can exhibit hysteresis and produce irrelevant, illegal, or unethical output in some cases. In this article, lingual ultrasound is studied to extract the tongue contour, to understand language behavior and language signature, and to utilize them as biofeedback for different applications. The article introduces a design strategy that works effectively with a well-managed, dynamically sized dataset. It comprises a hybrid architecture that uses UNet, a Vision Transformer (ViT), and a contrastive loss in the latent space to build a foundation model cumulatively. The process starts by building a reference representation in the embedding space, validated by human experts, against which any new input is checked before being added to the training data. UNet and ViT encoders extract the feature representations of the input, and the contrastive loss then compares the new feature embedding with the reference in the embedding space. A UNet-based decoder reconstructs the image to its original size. Before the final results are released, a quality-control step assesses the segmented contour; if the contour is rejected, the algorithm requests a human expert to annotate it manually. The results show improved accuracy over traditional techniques because the training data contains only high-quality and relevant features.
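The abstract describes the pipeline only at a high level. The PyTorch sketch below is one plausible reading of that pipeline, not the authors' implementation: the class and function names (HybridSegmenter, contrastive_check), the layer widths, patch size, embedding dimension, and the 0.8 acceptance threshold are all illustrative assumptions. It shows a UNet encoder/decoder for contour segmentation, a parallel ViT branch that produces an image-level embedding, and a cosine-similarity check of that embedding against an expert-built reference; samples that fall below the threshold are flagged for manual annotation. The training-time contrastive loss itself (e.g., an InfoNCE-style objective) is not shown.

```python
# A minimal sketch, assuming illustrative hyperparameters; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic UNet building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class HybridSegmenter(nn.Module):
    """UNet encoder/decoder with a parallel ViT branch whose global embedding
    is compared against an expert-validated reference representation."""

    def __init__(self, in_ch=1, base=32, embed_dim=128, patch=16, img_size=128):
        super().__init__()
        # UNet encoder (two scales, kept short for illustration)
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        # ViT branch: patch embedding + transformer encoder
        self.patch_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.vit = nn.TransformerEncoder(layer, num_layers=2)
        # UNet decoder reconstructing a full-resolution contour mask
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        # UNet encoder path
        s1 = self.enc1(x)                      # (B, base, H, W)
        s2 = self.enc2(self.pool(s1))          # (B, 2*base, H/2, W/2)
        # ViT branch: mean-pooled patch tokens as the image-level embedding
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos
        z = F.normalize(self.vit(tokens).mean(dim=1), dim=-1)
        # Decoder with skip connection, then 1-channel contour logits
        d1 = self.dec1(torch.cat([self.up(s2), s1], dim=1))
        return self.head(d1), z


def contrastive_check(z, z_ref, threshold=0.8):
    """Cosine similarity against the expert-built reference embedding.
    Samples below the (illustrative) threshold are routed to a human expert."""
    sim = F.cosine_similarity(z, z_ref.expand_as(z), dim=-1)
    return sim, sim >= threshold


if __name__ == "__main__":
    model = HybridSegmenter()
    z_ref = F.normalize(torch.randn(1, 128), dim=-1)  # placeholder reference embedding
    frames = torch.randn(2, 1, 128, 128)              # dummy ultrasound frames
    masks, z = model(frames)
    sim, accepted = contrastive_check(z, z_ref)
    for i, ok in enumerate(accepted):
        if not ok:
            print(f"frame {i}: similarity {sim[i].item():.2f} below threshold -> request expert annotation")
```

In this reading, the same similarity gate serves both roles described in the abstract: screening new inputs against the reference before they enter the training set, and acting as the final quality-control step that triggers manual annotation when a segmented contour is rejected.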