Recent advances in food image recognition have underscored its importance for dietary monitoring, which promotes a healthy lifestyle and aids in the prevention of diseases such as diabetes and obesity. While mainstream food recognition methods excel in scenarios with large-scale annotated datasets, they falter in few-shot regimes where labeled data is scarce. This paper addresses this challenge by introducing a variational generative method, the Multivariate Knowledge-guided Variational AutoEncoder (MK-VAE), for few-shot food recognition. MK-VAE leverages handcrafted features and semantic embeddings as multivariate prior knowledge to strengthen feature learning and feature generation in their respective phases. Specifically, we design a lightweight and flexible feature distillation module that distills handcrafted features into the feature learning network, enhancing its ability to capture salient visual information from few-shot samples. In the feature generation phase, we employ a variational autoencoder to model the difference distribution of food data and explicitly enrich the latent representation with category-level semantic embeddings, pulling features of the same category closer together while pushing features of different categories apart. Experimental results demonstrate that MK-VAE significantly outperforms state-of-the-art few-shot food recognition methods in both 5-way 1-shot and 5-way 5-shot settings on three widely used benchmark datasets: Food-101, VIREO Food-172, and UECFood-256.
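The abstract's idea of a semantically guided latent space, where same-category features are pulled together and different-category features pushed apart, can be illustrated with a minimal NumPy sketch. All function names, the convex fusion of latent and semantic embedding, and the margin-based pull/push loss below are hypothetical simplifications for illustration, not the paper's actual formulation:

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def semantic_guided_latent(z, sem, alpha=0.5):
    # Fuse a latent vector with a category-level semantic embedding.
    # (Assumption: simple convex combination; MK-VAE's fusion may differ.)
    return (1.0 - alpha) * z + alpha * sem

def pull_push_loss(z, labels, margin=1.0):
    # Pull same-class latents together (squared distance) and push
    # different-class latents apart (squared hinge on a margin) --
    # a contrastive-style proxy for the objective described in the text.
    loss, n = 0.0, 0
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d = np.linalg.norm(z[i] - z[j])
            if labels[i] == labels[j]:
                loss += d ** 2          # pull homogeneous features closer
            else:
                loss += max(0.0, margin - d) ** 2  # push inhomogeneous apart
            n += 1
    return loss / n
```

With well-separated classes the loss vanishes, while nearby cross-class latents incur a penalty, which is the behavior the abstract attributes to the semantic guidance.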