Policy learning for many outcomes of interest: Combining optimal policy trees with multi-objective Bayesian optimisation


Authors: Nicholas Biddle, Patrick Rehill

Language: English

Classification (DDC): 339.5 Macroeconomic policy

Publication information: 2022

Physical description:

Collection: Metadata

ID: 196216

Comment: 24 pages, 7 figures

Abstract: Methods for learning optimal policies use causal machine learning models to create human-interpretable rules for making choices around the allocation of different policy interventions. However, in realistic policy-making contexts, decision-makers often care about trade-offs between outcomes, not just single-mindedly maximising utility for one outcome. This paper proposes an approach termed Multi-Objective Policy Learning (MOPoL) which combines optimal decision trees for policy learning with a multi-objective Bayesian optimisation approach to explore the trade-off between multiple outcomes. It does this by building a Pareto frontier of non-dominated models for different hyperparameter settings which govern outcome weighting. The key insight is that a low-cost greedy tree can be an accurate proxy for the very computationally costly optimal tree for the purposes of making decisions, which means models can be repeatedly fitted to learn a Pareto frontier. The method is applied to a real-world case study of non-price rationing of anti-malarial medication in Kenya.
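To make the mechanics of the abstract concrete, the sketch below illustrates the MOPoL loop on simulated data. It is not the authors' implementation: a shallow sklearn CART stands in for the greedy policy tree, known simulated treatment effects stand in for causal-ML estimates, and a plain grid over the outcome weight replaces the paper's multi-objective Bayesian optimisation. All names and data in the sketch are hypothetical.

```python
"""Minimal MOPoL-style sketch, assuming simulated data.

Swapped-in pieces (for self-containment): sklearn's CART as the low-cost
greedy tree proxy, and a grid over the scalarisation weight in place of
multi-objective Bayesian optimisation."""
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Simulated problem: two covariates, two outcomes whose treatment effects
# pull in different directions, so a genuine trade-off exists.
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
tau1 = X[:, 0]   # per-unit effect of treatment on outcome 1
tau2 = -X[:, 1]  # per-unit effect of treatment on outcome 2

def fit_greedy_policy(X, reward, max_depth=2):
    """Greedy proxy for the optimal policy tree: fit a shallow regression
    tree to the per-unit reward of treating, treat where it predicts > 0."""
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, reward)
    return lambda X_new: tree.predict(X_new) > 0

def evaluate(policy):
    """Mean effect of the learned allocation rule on each outcome."""
    treat = policy(X)
    return tau1[treat].sum() / n, tau2[treat].sum() / n

# One candidate policy per weight setting: scalarise the two outcomes,
# fit the cheap greedy tree, then score it on both objectives.
candidates = []
for w in np.linspace(0, 1, 21):
    policy = fit_greedy_policy(X, w * tau1 + (1 - w) * tau2)
    candidates.append((w, *evaluate(policy)))

# Keep the non-dominated candidates: the empirical Pareto frontier.
frontier = [c for c in candidates
            if not any(o[1] >= c[1] and o[2] >= c[2]
                       and (o[1] > c[1] or o[2] > c[2])
                       for o in candidates)]
for w, v1, v2 in frontier:
    print(f"weight={w:.2f}  outcome1={v1:+.3f}  outcome2={v2:+.3f}")
```

Repeated refitting is only affordable because each candidate uses the cheap greedy tree rather than the optimal tree; in the paper, a multi-objective Bayesian optimiser would choose the next weight to try instead of the grid used here.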
