To process multiview multilabel, multilabel, and multiview data, current learning algorithms are designed on the basis of data characteristics, correlations, and related properties. However, these algorithms cannot express the correlations among features, instances, and labels in within-view, cross-view, and consensus-view representations self-adaptively and relatively accurately. To this end, this study takes the classical multiple-correlation-based model as its basis and explores laws of self-adaptive change for these correlations across multiple representations. The proposed algorithm is called multiple self-adaptive correlation-based multiview multilabel learning (MuSC-MVML). Extensive experiments on 38 datasets demonstrate the superiority of MuSC-MVML and lead to the following conclusions: 1) MuSC-MVML statistically outperforms most compared algorithms in terms of AUC, and its performance is also stable;
2) the computational cost of MuSC-MVML is moderate, and on most datasets it converges relatively quickly;
and 3) introducing laws of self-adaptive change for these correlations improves the ability of MuSC-MVML to process multiview multilabel datasets effectively and to express correlations in multiple representations more accurately. Furthermore, this study explains why an alternating optimization strategy is used to optimize the MuSC-MVML model and offers suggestions on how to modify the model to process incomplete multiview multilabel datasets with noise.