Quantitative structure-retention relationships (QSRRs) have been a popular modeling approach in ion chromatography for predicting retention times from molecular structures. They are often coupled with solvent strength models to extend predictions to other isocratic chromatographic conditions. While this approach has achieved reasonable success, inconsistencies in the solvent strength model can propagate into the QSRR models and amplify their errors. In this work, we incorporate information on the isocratic conditions directly into the QSRR model to reduce error propagation and build global models. Four machine learning approaches that can account for both global and local sources of variability in chromatographic retention were evaluated and compared: random forest regression, gradient boosting regression (GBR), extreme gradient boosting (xgBoost), and adaptive boosting (AdaBoost). A partial least-squares model was built as a baseline for comparison. GBR and xgBoost showed the best predictive ability among the evaluated models, with root-mean-square errors (RMSEs) of isocratic retention of 0.025 (+0.009, -0.006) and 0.025 (+0.008, -0.006), respectively. The developed QSRR models were then incorporated into an isocratic-to-gradient model to predict gradient retention. The GBR and xgBoost QSRR models outperformed the other models, with RMSEs of gradient retention of 0.358 (+0.199, -0.107) and 0.385 (+0.387, -0.139) min, respectively. This approach demonstrates the benefit of incorporating the eluent composition directly into retention prediction models, with the potential to extend to other chromatographic techniques.
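The sketch below illustrates, in broad strokes, the kind of model comparison the abstract describes: molecular descriptors augmented with the isocratic eluent composition as an extra feature, and the five regressors compared by cross-validated RMSE. It is a minimal illustration only; the synthetic data, the feature layout (eluent concentration appended as a single column), and all hyperparameter choices are assumptions for demonstration and do not reproduce the paper's actual dataset or pipeline.

```python
# Illustrative comparison of global QSRR regressors (scikit-learn + xgboost).
# All data here are synthetic placeholders, not the study's descriptors or retention data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor)
from sklearn.cross_decomposition import PLSRegression
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

# Hypothetical feature matrix: molecular descriptors with the eluent
# concentration of each isocratic run appended as one extra column.
n_samples, n_descriptors = 200, 30
descriptors = rng.normal(size=(n_samples, n_descriptors))
eluent_conc = rng.uniform(5.0, 60.0, size=(n_samples, 1))  # e.g. mM, illustrative
X = np.hstack([descriptors, eluent_conc])
y = rng.normal(size=n_samples)                             # stand-in for isocratic retention

models = {
    "PLS (baseline)": PLSRegression(n_components=5),
    "Random forest": RandomForestRegressor(random_state=0),
    "GBR": GradientBoostingRegressor(random_state=0),
    "xgBoost": XGBRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
}

# Cross-validated RMSE of isocratic retention for each candidate model.
for name, model in models.items():
    scores = cross_val_score(model, X, y,
                             scoring="neg_root_mean_squared_error", cv=5)
    print(f"{name:15s} RMSE = {-scores.mean():.3f} +/- {scores.std():.3f}")
```

In this layout, swapping the error-prone two-step workflow (QSRR plus a separate solvent strength model) for a single model that sees the eluent composition as an input is simply a matter of which columns enter `X`; the subsequent isocratic-to-gradient step described in the abstract is not shown here.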