EEG analysis of speaking and quiet states during different emotional music stimuli.


Authors: Zhengting Cai, Lianxin Hu, Xianwei Lin, Laurent Peyrodie, Zefeng Wang, Xinyue Wu, Guangdong Xie, Zihan Zhang

Language: eng

Classification: 616.07563 Diseases

Publication: Switzerland : Frontiers in Neuroscience, 2025


Collection: NCBI

ID: 185382

INTRODUCTION: Music has a profound impact on human emotions and can elicit a wide range of emotional responses, a phenomenon that has been effectively harnessed in music therapy. Given the close relationship between music and language, researchers have begun to explore how music influences brain activity and cognitive processes by combining artificial intelligence with advances in neuroscience. METHODS: A total of 120 subjects were recruited, all of them students aged between 19 and 26 years. Each subject was required to listen to six 1-minute music segments expressing different emotions and to speak at the 40-second mark of each segment. For the classification model, this study compared the classification performance of deep neural networks with that of other machine learning algorithms. RESULTS: The differences in EEG signals between emotions were more pronounced during speech than in the quiet state. In classifying EEG signals from the speaking and quiet states, deep neural network algorithms achieved accuracies of 95.84% and 96.55%, respectively. DISCUSSION: Under music stimuli expressing different emotions, there are measurable differences in EEG between the speaking and resting states, and in constructing EEG classification models, deep neural network algorithms outperform other machine learning algorithms.
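As an illustration of the kind of pipeline the abstract describes, the sketch below trains a small feed-forward network to classify EEG feature vectors by emotional music category. It is a minimal sketch under stated assumptions: the feature layout (band-power features per channel), layer sizes, trial counts, and the synthetic data are illustrative choices and are not taken from the paper.

```python
# Hypothetical sketch of a DNN-based EEG classifier for emotional music stimuli.
# All dimensions and data below are assumptions for demonstration, not the
# authors' actual experimental configuration.
import numpy as np
import torch
import torch.nn as nn

N_TRIALS = 720      # e.g. 120 subjects x 6 music segments (assumption)
N_FEATURES = 160    # e.g. 32 channels x 5 frequency bands (assumption)
N_EMOTIONS = 6      # six emotional music categories (from the abstract)

# Synthetic stand-in for extracted EEG features; real data would come from
# preprocessing (filtering, artifact removal, band-power estimation).
rng = np.random.default_rng(0)
X = torch.tensor(rng.standard_normal((N_TRIALS, N_FEATURES)), dtype=torch.float32)
y = torch.tensor(rng.integers(0, N_EMOTIONS, N_TRIALS), dtype=torch.long)

# Small fully connected network as one example of a "deep neural network" classifier.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_EMOTIONS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Brief full-batch training loop, purely for illustration.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"Training accuracy on synthetic data: {acc:.2%}")
```

In practice, evaluation would use held-out subjects or trials rather than the training set, and the same pipeline could be compared against conventional machine learning baselines (e.g., support vector machines or random forests) as in the study's reported comparison.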