Feature quantization for parsimonious and interpretable predictive models


Authors: Christophe Biernacki, Adrien Ehrhardt, Philippe Heinrich, Vincent Vandewalle

Language: eng

Classification: 688.1 Models and miniatures

Publication information: 2019

Physical description:

Collection: Metadata

ID: 162724

Comment: 9 pages, 2 figures, 3 tables

Abstract: For regulatory and interpretability reasons, logistic regression is still widely used. To improve prediction accuracy and interpretability, a preprocessing step quantizing both continuous and categorical data is usually performed: continuous features are discretized and, if numerous, levels of categorical features are grouped. An even better predictive accuracy can be reached by embedding this quantization estimation step directly into the predictive estimation step itself. But in doing so, the predictive loss has to be optimized over a huge set. To overcome this difficulty, we introduce a specific two-step optimization strategy: first, the optimization problem is relaxed by approximating discontinuous quantization functions by smooth functions; second, the resulting relaxed optimization problem is solved via a particular neural network. The good performance of this approach, which we call glmdisc, is illustrated on simulated and real data from the UCI library and Crédit Agricole Consumer Finance (a major European historic player in the consumer credit market).
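The relaxation idea summarized in the abstract (replacing discontinuous quantization functions by smooth surrogates and optimizing them jointly with the logistic regression through a neural network) can be sketched as follows. This is a minimal illustrative sketch, not the authors' glmdisc implementation: the class names, bin counts, and training loop are assumptions, and PyTorch is used only for concreteness.

```python
# Minimal sketch (assumed names, not the glmdisc reference code): each continuous
# feature is passed through a softmax "soft binning" layer, a smooth surrogate for
# the indicator functions of its levels, and the concatenated soft encodings feed
# a logistic-regression output, so quantization and prediction are learned jointly.
import torch
import torch.nn as nn

class SoftQuantizer(nn.Module):
    """Relax the hard discretization of one continuous feature into n_bins levels
    using a softmax over affine scores."""
    def __init__(self, n_bins: int):
        super().__init__()
        self.scores = nn.Linear(1, n_bins)  # one affine score per level

    def forward(self, x):                   # x: (batch, 1)
        return torch.softmax(self.scores(x), dim=-1)  # soft one-hot, (batch, n_bins)

class SoftQuantizedLogistic(nn.Module):
    """Soft quantization of each feature followed by a logistic regression layer."""
    def __init__(self, n_features: int, n_bins: int):
        super().__init__()
        self.quantizers = nn.ModuleList([SoftQuantizer(n_bins) for _ in range(n_features)])
        self.logistic = nn.Linear(n_features * n_bins, 1)

    def forward(self, x):                   # x: (batch, n_features)
        encoded = [q(x[:, j:j + 1]) for j, q in enumerate(self.quantizers)]
        return self.logistic(torch.cat(encoded, dim=-1)).squeeze(-1)  # logits

if __name__ == "__main__":
    # Toy usage on synthetic data: both the soft bins and the logistic weights
    # are fitted end-to-end by minimizing the binary cross-entropy.
    torch.manual_seed(0)
    X = torch.randn(256, 3)
    y = (X[:, 0] > 0).float()               # synthetic binary target
    model = SoftQuantizedLogistic(n_features=3, n_bins=4)
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
```

After training, a hard quantization could be recovered by taking the argmax of each softmax, which is one way to read the "smooth approximation of discontinuous quantization functions" described above.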
