BACKGROUND: Post-induction hypotension (PIH) increases the risk of postoperative complications, including myocardial injury, acute kidney injury, delirium, stroke, prolonged hospitalization, and death. Machine learning is an effective tool for analyzing large volumes of perioperative data and identifying risk factors for such complications. This study aimed to identify risk factors for PIH and to develop predictive models to support anesthesia management.

METHODS: A dataset of 5406 patients was analyzed using machine learning methods. Logistic regression, random forest, XGBoost, and neural network models were compared. Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC), calibration curves, and decision curve analysis (DCA).

RESULTS: The logistic regression model achieved an AUROC of 0.74 (95% CI: 0.71-0.77), outperforming the random forest (AUROC: 0.71), XGBoost (AUROC: 0.72), and neural network (AUROC: 0.72) models. Logistic regression also demonstrated the best calibration, as reflected by Brier scores and calibration curves, followed by XGBoost, random forest, and the neural network. Decision curve analysis indicated that the logistic regression model provided the greatest clinical utility among all models. Baseline blood pressure, age, sex, type of surgery, platelet count, and certain anesthesia induction drugs were identified as important predictors.

CONCLUSIONS: This study provides a valuable tool for personalized preoperative risk assessment and individualized anesthesia management, enabling early intervention and improved patient outcomes. Integrating machine learning models into electronic medical record systems could facilitate real-time risk assessment and prediction.
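To make the evaluation metrics concrete, the sketch below computes the two discrimination and calibration measures reported above, AUROC (via the Mann-Whitney pairwise formulation) and the Brier score, on a small hypothetical set of predicted PIH probabilities. The labels and probabilities are invented for illustration and do not come from the study's dataset; in practice these metrics would be computed on the held-out test set for each of the four models.

```python
def auroc(y_true, y_score):
    """AUROC as the probability that a randomly chosen positive case
    receives a higher predicted score than a randomly chosen negative
    case (Mann-Whitney U formulation; ties count as half a win)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def brier(y_true, y_prob):
    """Brier score: mean squared difference between predicted
    probability and observed outcome (lower = better calibrated)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Hypothetical example: 1 = post-induction hypotension occurred,
# probabilities imagined as logistic regression outputs.
y = [0, 0, 1, 0, 1, 1, 0, 1]
p = [0.1, 0.5, 0.7, 0.2, 0.8, 0.3, 0.4, 0.9]

print(f"AUROC: {auroc(y, p):.3f}")  # discrimination
print(f"Brier: {brier(y, p):.3f}")  # calibration
```

The pairwise AUROC computation is O(n_pos * n_neg) and is meant only to show what the statistic measures; for a cohort of 5406 patients, a rank-based implementation (e.g. scikit-learn's `roc_auc_score`) would be used instead.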