A data-driven approach to beating SAA out-of-sample


Authors: Jun-ya Gotoh, Michael Jong Kim, Andrew E. B. Lim

Language: English

Classification: 006.31 Machine learning

Publication information: 2021

Physical description:

Collection: Metadata

ID: 167052

Comment: 25 pages, 2-page bibliography, 2 figures, 12-page appendix

Abstract: While solutions of Distributionally Robust Optimization (DRO) problems can sometimes have a higher out-of-sample expected reward than the Sample Average Approximation (SAA), there is no guarantee. In this paper, we introduce a class of Distributionally Optimistic Optimization (DOO) models and show that it is always possible to "beat" SAA out-of-sample if we consider not just worst-case (DRO) models but also best-case (DOO) ones. We also show, however, that this comes at a cost: optimistic solutions are more sensitive to model error than either worst-case or SAA optimizers, and hence are less robust; moreover, calibrating the worst- or best-case model to outperform SAA may be difficult when data is limited.
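To make the contrast concrete, the sketch below compares the three decision rules on a toy newsvendor problem. It is an illustration only, not the paper's formulation: the demand data, prices, and ambiguity parameter `theta` are hypothetical, and the worst-/best-case objectives use the standard smooth (KL-divergence-based) log-exponential form, where by Jensen's inequality the DRO objective lower-bounds the sample average and the DOO objective upper-bounds it.

```python
import numpy as np

# Hypothetical i.i.d. demand sample and newsvendor economics (illustrative values).
rng = np.random.default_rng(0)
demand = rng.gamma(shape=5.0, scale=4.0, size=50)
price, cost = 2.0, 1.0

def reward(x, d):
    # Profit from ordering x units when demand is d.
    return price * np.minimum(x, d) - cost * x

def saa_obj(x):
    # SAA: plain in-sample average reward.
    return reward(x, demand).mean()

def dro_obj(x, theta=0.1):
    # Worst-case (KL-smoothed) objective: -(1/theta) * log E[exp(-theta * r)].
    return -np.log(np.mean(np.exp(-theta * reward(x, demand)))) / theta

def doo_obj(x, theta=0.1):
    # Best-case (optimistic) objective: +(1/theta) * log E[exp(theta * r)].
    return np.log(np.mean(np.exp(theta * reward(x, demand)))) / theta

grid = np.linspace(0.0, 60.0, 601)
x_saa = grid[np.argmax([saa_obj(x) for x in grid])]
x_dro = grid[np.argmax([dro_obj(x) for x in grid])]
x_doo = grid[np.argmax([doo_obj(x) for x in grid])]
# The worst-case objective penalizes reward variability while the optimistic
# one rewards it, so typically x_dro <= x_saa <= x_doo.
print(x_saa, x_dro, x_doo)
```

The optimistic order quantity chases upside scenarios, which is exactly the sensitivity to model error the abstract warns about: small changes in the sample can move `x_doo` much more than `x_saa` or `x_dro`.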

LIBRARY - HUTECH (Ho Chi Minh City University of Technology)

Tel: (028) 36225755 | Email: tt.thuvien@hutech.edu.vn

Copyright © 2024 HUTECH LIBRARY