Perceptual learning of modulation filtered speech.

Author: Ediz Sohoglu, James M Webb

Language: eng

Classification:

Publication information: United States : Journal of Experimental Psychology: Human Perception and Performance, 2025

Physical description:

Collection: NCBI

ID: 737273

Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal. We systematically filtered speech to control modulation content during training and test blocks. Perceptual learning was highly specific to the modulation filter heard during training, consistent with the hypothesis that learning involves changes to representations of speech modulations. In further experiments, we used modulation filtering and different feedback regimes (clear speech vs. written feedback) to investigate the role of talker-specific cues for cross-talker generalization of learning. Our results suggest that learning partially generalizes to speech from novel (untrained) talkers but that talker-specific cues can enhance generalization. These findings are consistent with the proposal that perceptual learning entails the adjustment of internal models that map acoustic features to phonological categories. These models can be applied to degraded speech from novel talkers, particularly when listeners can account for talker-specific variability in the acoustic signal. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
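The core manipulation described in the abstract is modulation filtering of speech. As a rough illustration only (not the authors' stimulus pipeline), the Python sketch below restricts the temporal modulation content of a waveform by low-pass filtering the envelope of each sub-band before resynthesis; the band count, band edges, 8 Hz modulation cutoff, and sampling-rate requirements are all assumed for the example.

# Illustrative sketch (not the authors' implementation): attenuate temporal
# envelope modulations above `mod_cutoff_hz` in each frequency sub-band,
# then recombine the bands. All parameter values are hypothetical.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def modulation_lowpass(speech, fs, mod_cutoff_hz=8.0, n_bands=16,
                       f_lo=100.0, f_hi=7500.0):
    """Return speech whose sub-band envelope modulations above
    `mod_cutoff_hz` have been attenuated (f_hi must be below fs/2)."""
    # Logarithmically spaced band edges spanning the speech range.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass the waveform into one sub-band.
        sos_bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos_bp, speech)
        # Split into envelope (Hilbert magnitude) and temporal fine structure.
        analytic = hilbert(band)
        envelope = np.abs(analytic)
        fine = np.cos(np.angle(analytic))
        # Low-pass filter the envelope to limit its modulation rates.
        sos_lp = butter(4, mod_cutoff_hz, btype="lowpass", fs=fs, output="sos")
        env_filt = np.maximum(sosfiltfilt(sos_lp, envelope), 0.0)
        # Recombine the filtered envelope with the original fine structure.
        out += env_filt * fine
    return out

In this kind of scheme, varying the envelope filter (e.g., low-pass vs. high-pass, or different cutoff rates) is one way to control which modulation rates listeners hear during training versus test.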