Policy Learning with Rare Outcomes

Authors: Julia Hatamyar, Noemi Kreif

Language: English

Classification: 174.932 Occupational ethics

Published: 2023

Physical description:

Collection: Metadata

ID: 196454

Machine learning (ML) estimates of conditional average treatment effects (CATE) can guide policy decisions, either by allowing targeting of individuals with beneficial CATE estimates, or as inputs to decision trees that optimise overall outcomes. There is limited information available regarding how well these algorithms perform in real-world policy evaluation scenarios. Using synthetic data, we compare the finite sample performance of different policy learning algorithms, machine learning techniques employed during their learning phases, and methods for presenting estimated policy values. For each algorithm, we assess the resulting treatment allocation by measuring deviation from the ideal ("oracle") policy. Our main finding is that policy trees based on estimated CATEs outperform trees learned from doubly-robust scores. Across settings, Causal Forests and the Normalised Double-Robust Learner perform consistently well, while Bayesian Additive Regression Trees perform poorly. These methods are then applied to a case study targeting optimal allocation of subsidised health insurance, with the goal of reducing infant mortality in Indonesia.
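The abstract contrasts treatment rules read directly off CATE estimates with policy trees learned from doubly-robust scores, and evaluates each rule by its deviation from the oracle policy. The sketch below is not the authors' code: it illustrates that contrast on synthetic data, using a simple T-learner and a brute-force depth-1 rule search as illustrative stand-ins for the Causal Forest, DR-Learner, and policy-tree machinery compared in the paper. All names and settings in it are assumptions made for the example.

```python
# Minimal sketch (illustrative only): plug-in CATE policy vs. a depth-1
# rule chosen with doubly-robust (AIPW) scores, on synthetic data with a
# rare adverse outcome, evaluated against the known oracle policy.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, p = 5000, 5
X = rng.normal(size=(n, p))

# Known propensity score (randomised-style assignment, for simplicity).
e = 0.5
T = rng.binomial(1, e, size=n)

# True CATE: treatment lowers the (rare) adverse-outcome probability only
# when X[:, 0] > 0, mimicking a subgroup that benefits.
base = 0.05 + 0.02 * (X[:, 1] > 0)
tau_true = np.where(X[:, 0] > 0, -0.03, 0.0)
Y = rng.binomial(1, np.clip(base + T * tau_true, 0.001, 0.999))

# --- T-learner CATE estimates -------------------------------------------
m1 = GradientBoostingClassifier().fit(X[T == 1], Y[T == 1])
m0 = GradientBoostingClassifier().fit(X[T == 0], Y[T == 0])
mu1 = m1.predict_proba(X)[:, 1]
mu0 = m0.predict_proba(X)[:, 1]
cate_hat = mu1 - mu0

# Plug-in policy: treat whenever the estimated effect on the adverse
# outcome is negative (treatment estimated to be beneficial).
policy_plugin = (cate_hat < 0).astype(int)

# --- Doubly-robust (AIPW) scores ----------------------------------------
dr_score = (mu1 - mu0
            + T * (Y - mu1) / e
            - (1 - T) * (Y - mu0) / (1 - e))

# Depth-1 "policy tree" stand-in: pick the single covariate split whose
# treated region minimises the mean DR score (lower adverse risk is
# better). Real policy-tree software searches far more thoroughly.
best = (np.inf, None)
for j in range(p):
    for s in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)):
        rule = (X[:, j] > s).astype(int)
        value = np.mean(rule * dr_score)  # estimated effect of treating this region
        if value < best[0]:
            best = (value, (j, s))
j, s = best[1]
policy_dr = (X[:, j] > s).astype(int)

# --- Deviation from the oracle policy ------------------------------------
oracle = (tau_true < 0).astype(int)
for name, pol in [("plug-in CATE", policy_plugin), ("DR-score rule", policy_dr)]:
    regret = np.mean((pol - oracle) * tau_true)  # extra adverse risk vs. oracle
    print(f"{name:14s} disagreement with oracle: {np.mean(pol != oracle):.3f}, "
          f"regret: {regret:.4f}")
```

The regret line mirrors the paper's evaluation idea in miniature: since the oracle treats exactly where the true effect is beneficial, any learned rule's mean outcome can only be worse or equal, and the gap measures how costly its misallocations are.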