Fairness identification of large language models in recommendation.


Authors: Weiming Huang, Baisong Liu, Wei Liu, Jiangcheng Qin, Yangyang Wang, Xueyuan Zhang

Language: eng

Classification: 340 Law

Published: England : Scientific Reports, 2025

Physical description:

Collection: NCBI

ID: 105453

Fairness in recommendation systems is crucial for ensuring equitable treatment of all users. Inspired by research on human-like behavior in large language models (LLMs), we investigate whether LLMs can serve as "fairness recognizers" for recommendation systems and explore how their inherent fairness awareness can be harnessed to construct fair recommendations. Using the MovieLens and LastFM datasets, we compare recommendations produced by Variational Autoencoders (VAE) with and without fairness strategies, and use ChatGLM3-6B and Llama2-13B to judge the fairness of the VAE-generated results. The evaluation shows that LLMs can indeed recognize fair recommendations by detecting correlations between users' sensitive attributes and their recommendation results. We then propose a method for incorporating LLMs into the recommendation process: recommendations that an LLM identifies as unfair are replaced with those generated by a fair VAE. Our evaluation demonstrates that this approach significantly improves fairness with minimal loss in utility. For instance, the fairness-to-utility ratios for gender-based groups are 6.0159 and 5.0658 for VAEgan, while ChatGLM achieves 30.9289 and 50.4312, respectively. These findings demonstrate that integrating LLMs' fairness recognition capabilities yields a more favorable trade-off between fairness and utility.
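The replacement step described in the abstract can be pictured as a simple fallback loop. The following is a minimal sketch, not the authors' implementation: the function names (base_recommend, fair_recommend, llm_is_fair) are hypothetical placeholders, and the actual prompting of ChatGLM3-6B or Llama2-13B is not reproduced here.

    from typing import Callable, Dict, List

    def fairness_aware_recommend(
        users: List[str],
        base_recommend: Callable[[str], List[str]],    # standard VAE recommender (hypothetical)
        fair_recommend: Callable[[str], List[str]],    # fairness-constrained VAE (hypothetical)
        llm_is_fair: Callable[[str, List[str]], bool], # LLM fairness recognizer (hypothetical)
    ) -> Dict[str, List[str]]:
        # For each user, keep the base VAE's list when the LLM judges it fair
        # with respect to the user's sensitive attributes; otherwise fall back
        # to the fairness-constrained VAE's list.
        results: Dict[str, List[str]] = {}
        for user in users:
            candidate = base_recommend(user)
            if llm_is_fair(user, candidate):
                results[user] = candidate             # utility-preserving path
            else:
                results[user] = fair_recommend(user)  # fairness fallback
        return results

Under this scheme the base recommender's utility is retained wherever the LLM raises no fairness concern, which is consistent with the abstract's report of minimal utility loss.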