Systematic reviews in food safety research are vital but hindered by the amount of human labor they require. The objective of this study was to evaluate the effectiveness of semi-automated active learning models, as an alternative to fully manual screening, at screening articles by title and abstract for subsequent full-text review and inclusion in a systematic review of food safety literature. We used a dataset of 3,738 articles previously screened manually in a systematic scoping review of studies on digital food safety tools, of which 214 articles were selected (labeled) via title-abstract screening for further full-text review. On this dataset, we compared three models, (i) Naive Bayes/Term Frequency-Inverse Document Frequency (TF-IDF), (ii) Logistic Regression/Doc2Vec, and (iii) Logistic Regression/TF-IDF, under two scenarios: (1) screening an unlabeled dataset and (2) screening a labeled benchmark dataset. We show that active learning screening offers a significant improvement over manual (random) screening for all three models. In the first scenario, given a stopping criterion of screening 5% of total records consecutively without labeling any article as relevant, the three models achieved recalls (mean ± standard deviation) of 99.2±0.8%, 97.9±2.7%, and 98.8±0.4%, respectively, while viewing only 62.6±3.2%, 58.9±2.9%, and 57.6±3.2% of total records. In general, there was a tradeoff between recall and the number of articles that had to be screened. In the second scenario, all models performed similarly overall, including similar Work Saved over Sampling (WSS) values at the 90% and 95% recall criteria, although the models using the TF-IDF feature extractor typically outperformed the model using Doc2Vec at finding relevant articles early in screening. Moreover, all models outperformed random screening at every recall level. This study demonstrates the promise of incorporating active learning models to facilitate literature synthesis in digital food safety.
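
For readers unfamiliar with this screening paradigm, the sketch below illustrates one way such an active learning loop with the 5%-consecutive-irrelevant stopping rule could look in Python, using scikit-learn's TfidfVectorizer and MultinomialNB for the Naive Bayes/TF-IDF variant. It is an illustrative sketch under assumed inputs, not the study's actual implementation: the names screen and oracle_label and the seed index sets are hypothetical, with oracle_label(i) standing in for the human screener's 0/1 relevance decision on record i.

    # Minimal sketch (assumed names; not the authors' code) of certainty-based
    # active learning for title-abstract screening with a stopping rule that
    # fires after 5% of total records are screened consecutively without a
    # relevant label.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def screen(texts, oracle_label, seed_relevant, seed_irrelevant, stop_frac=0.05):
        # Feature extraction: TF-IDF over title+abstract strings.
        X = TfidfVectorizer(stop_words="english").fit_transform(texts)
        n = len(texts)
        # Seed labels must cover both classes so the classifier can be trained.
        labeled = {i: 1 for i in seed_relevant}
        labeled.update({i: 0 for i in seed_irrelevant})
        consecutive_irrelevant, stop_after = 0, int(stop_frac * n)
        while len(labeled) < n and consecutive_irrelevant < stop_after:
            idx = sorted(labeled)
            clf = MultinomialNB().fit(X[idx], [labeled[i] for i in idx])
            # Query the unlabeled record the model considers most likely relevant.
            unlabeled = [i for i in range(n) if i not in labeled]
            scores = clf.predict_proba(X[unlabeled])[:, 1]
            pick = unlabeled[int(np.argmax(scores))]
            labeled[pick] = oracle_label(pick)
            consecutive_irrelevant = 0 if labeled[pick] else consecutive_irrelevant + 1
        return labeled  # screening decisions made before the stopping rule fired

Under this kind of loop, recall is measured as the fraction of the truly relevant records (here, the 214 labeled articles) found before stopping, and the fraction of records viewed quantifies the screening effort saved.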