A Practical Guide of Off-Policy Evaluation for Bandit Problems


Authors: Kenshi Abe, Kaito Ariu, Masahiro Kato, Shota Yasui

Language: English

Classification: 519.7 Programming

Published: 2020

Physical description:

Collection: Metadata

ID: 165448

Off-policy evaluation (OPE) is the problem of estimating the value of a target policy from samples obtained under different policies. Applying OPE methods to bandit problems has recently garnered attention. To obtain theoretical guarantees for an estimator of the policy value, OPE methods impose various conditions on the target policy and on the policy used to generate the samples. However, existing studies have not carefully discussed the practical situations in which such conditions hold, and a gap between theory and practice remains. This paper presents new results that help bridge this gap. Based on the properties of the evaluation policy, we categorize OPE situations. Among practical applications, we then focus on best policy selection, and for this situation we propose a meta-algorithm built on existing OPE estimators. We investigate the proposed concepts in experiments with synthetic and open real-world datasets.
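To make the OPE setting concrete, the sketch below simulates a multi-armed bandit and estimates a target policy's value from logs generated by a different behavior policy, using the standard inverse propensity weighting (IPW) estimator. This is a minimal illustration of the general problem the abstract describes, not the paper's proposed meta-algorithm; the policies, reward means, and sample size are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_samples = 3, 10_000

# Hypothetical policies over the actions: the behavior policy logged the
# data, and the target policy is the one we want to evaluate offline.
behavior = np.array([0.5, 0.3, 0.2])
target = np.array([0.1, 0.2, 0.7])

# True mean reward per action (unknown in practice; used only to simulate).
mean_reward = np.array([0.2, 0.5, 0.8])

# Generate logged data under the behavior policy.
actions = rng.choice(n_actions, size=n_samples, p=behavior)
rewards = rng.binomial(1, mean_reward[actions]).astype(float)

# IPW estimate of the target policy's value:
#   V_hat = (1/n) * sum_i [pi_target(a_i) / pi_behavior(a_i)] * r_i
weights = target[actions] / behavior[actions]
ipw_estimate = float(np.mean(weights * rewards))

# Ground-truth value of the target policy, for comparison in simulation.
true_value = float(target @ mean_reward)  # 0.68
print(ipw_estimate, true_value)
```

Note that IPW is unbiased only when the behavior policy puts positive probability on every action the target policy can take; this is exactly the kind of condition on the two policies that the paper examines.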

LIBRARY - HCMC UNIVERSITY OF TECHNOLOGY (HUTECH)

Tel: (028) 36225755 | Email: tt.thuvien@hutech.edu.vn
