Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube.


Authors: Matthew A Baum, Adam J Berinsky, Allison J B Chaney, Justin de Benedictis-Kessner, Andrew M Guess, Xinlan Emily Hu, Dean Knox, Naijia Liu, Christopher Lucas, Rei Mariman, Yasemin Savas, Brandon M Stewart

Language: eng


Publication: United States : Proceedings of the National Academy of Sciences of the United States of America, 2025


Collection: NCBI

ID: 187581

Abstract: An enormous body of literature argues that recommendation algorithms drive political polarization by creating "filter bubbles" and "rabbit holes." Using four experiments with nearly 9,000 participants, we show that manipulating algorithmic recommendations to create these conditions has limited effects on opinions. Our experiments employ a custom-built video platform with a naturalistic, YouTube-like interface presenting real YouTube videos and recommendations. We experimentally manipulate YouTube's actual recommendation algorithm to simulate filter bubbles and rabbit holes by presenting ideologically balanced and slanted choices. Our design allows us to intervene in a feedback loop that has confounded the study of algorithmic polarization (the complex interplay between the supply of recommendations and user demand for content) to examine downstream effects on policy attitudes. We use over 130,000 experimentally manipulated recommendations and 31,000 platform interactions to estimate how recommendation algorithms alter users' media consumption decisions and, indirectly, their political attitudes. Our results cast doubt on widely circulating theories of algorithmic polarization by showing that even heavy-handed (although short-term) perturbations of real-world recommendations have limited causal effects on policy attitudes. Given our inability to detect consistent evidence for algorithmic effects, we argue the burden of proof for claims about algorithm-induced polarization has shifted. Our methodology, which captures and modifies the output of real-world recommendation algorithms, offers a path forward for future investigations of black-box artificial intelligence systems. Our findings reveal practical limits to effect sizes that are feasibly detectable in academic experiments.
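
The manipulation summarized in the abstract, turning a captured recommendation slate into ideologically slanted versus balanced versions, can be illustrated with a minimal Python sketch. Everything here is a hypothetical illustration: the Video record, the slant scores, and the two condition-building functions are assumptions for exposition only and do not reflect the paper's actual platform or code.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Video:
    video_id: str
    slant: float  # hypothetical ideology score in [-1, 1]: negative = left, positive = right


def slanted_condition(recs: List[Video], direction: int, k: int = 10) -> List[Video]:
    """Simulate a 'filter bubble' / 'rabbit hole' slate: keep only recommendations
    on the assigned side (direction = -1 or +1), most extreme first."""
    same_side = [v for v in recs if direction * v.slant > 0]
    return sorted(same_side, key=lambda v: -direction * v.slant)[:k]


def balanced_condition(recs: List[Video], k: int = 10) -> List[Video]:
    """Simulate an ideologically balanced slate by alternating the least-slanted
    videos from each side."""
    left = sorted((v for v in recs if v.slant < 0), key=lambda v: abs(v.slant))
    right = sorted((v for v in recs if v.slant >= 0), key=lambda v: abs(v.slant))
    out: List[Video] = []
    while (left or right) and len(out) < k:
        if left:
            out.append(left.pop(0))
        if right and len(out) < k:
            out.append(right.pop(0))
    return out


if __name__ == "__main__":
    # Toy slate with made-up slant scores standing in for real recommendations.
    slate = [Video(f"v{i}", s) for i, s in enumerate([-0.9, -0.4, -0.1, 0.2, 0.5, 0.8])]
    print([v.video_id for v in slanted_condition(slate, direction=+1, k=3)])  # ['v5', 'v4', 'v3']
    print([v.video_id for v in balanced_condition(slate, k=4)])               # ['v2', 'v3', 'v1', 'v4']
```

This sketch captures only the supply side of the feedback loop described in the abstract; in the experiments themselves, participants' subsequent choices among the manipulated recommendations determine what they actually watch.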
