
A novel brain inception neural network model using EEG graphic structure for emotion recognition

Article: 2222159 | Received 07 Nov 2022, Accepted 01 Jun 2023, Published online: 16 Jun 2023

Abstract

Purpose

EEG-based analysis of emotions is of great significance for the diagnosis of psychological diseases and brain-computer interface (BCI) applications. However, applications of EEG brain networks to emotion classification are rarely reported, and the accuracy of emotion recognition in cross-subject tasks remains a challenge. Thus, this paper proposes a domain-invariant model for EEG-network-based emotion identification.

Methods

A novel brain-inception-network-based deep learning model is proposed to extract discriminative graph features from EEG brain networks. To verify its efficiency, we compared the proposed method with several commonly used methods and three types of brain networks. In addition, we compared the performance of EEG brain networks with that of EEG energy distribution for emotion recognition.

Results

One public EEG-based emotion dataset (SEED) was utilized in this paper, and the classification accuracy of leave-one-subject-out cross-validation was adopted as the comparison index. The classification results show that the performance of the proposed method is superior to that of the other methods considered in this paper.

Conclusion

The proposed method can capture discriminative structural features from the EEG network, which improves the emotion classification performance of brain neural networks.

1. Introduction

Emotions play important roles in our social interaction and daily life, and they also deeply influence other high-order cognitive functions such as decision-making, attention, and memory [Citation1]. In recent years, emotion analysis has been rising in popularity within the field of neuroscience [Citation2,Citation3], as it could provide a reference for evaluating psychiatric diseases such as depression and autism [Citation4]. With the development of neuroimaging technologies such as electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) [Citation5,Citation6], more and more ways of deciphering neural mechanisms have become available. However, how to effectively reveal the mechanisms of emotions and accurately identify emotional states remains unsolved. Recently, emotion recognition has become a hot topic in the field of emotion analysis, as it plays a crucial role in developing effective human-computer interaction [Citation7]. A series of studies have put forward novel strategies to achieve more efficient emotion recognition. Among these, deep learning has made great achievements in the fields of image segmentation [Citation8], data classification [Citation9], and feature analysis [Citation10].

Due to its powerful capability for feature extraction and classification, deep learning has earned much attention in emotion classification [Citation11]. Elham et al. used 3-dimensional convolutional neural networks (3D-CNN) for emotion recognition: they extracted a 3D data representation from multi-channel EEG signals and achieved higher recognition accuracies along both the valence and arousal dimensions of the DEAP dataset [Citation12]. Moreover, Zheng et al. compared the performance differences between power spectral density (PSD), differential entropy (DE), and energy-distribution-based asymmetric features, observing the activation of crucial brain regions during emotion processing [Citation13]. Zheng et al. also used a deep belief network for emotion recognition from DE features and identified the key frequency bands [Citation14]. Wang et al. proposed the DE-CNN model to further improve the performance of DE features for emotion recognition [Citation15]. In addition, Hwang et al. generated topology-preserving differential entropy features and used a convolutional neural network (CNN) to identify three-class emotional states [Citation16]. Wang et al. utilized DE features as the input to their proposed broad dynamical graph learning system and obtained excellent classification results for emotion recognition [Citation17].

Basically, the methods mentioned above mainly utilized features of brain activation for emotion classification. Compared with these energy-distribution-based features, network analysis can reflect the coupling relationships among various brain regions [Citation18] and reveal the synergistic effects of different regions under different cognitive states [Citation19]. Generally, the methods for brain network estimation can be divided into two categories, i.e. functional brain network analysis and effective brain network analysis. Functional brain network analysis focuses on revealing the cooperative oscillation of different brain regions. Among these methods, the phase-locking value is one of the most commonly used, which measures the phase difference between two signals at different time points [Citation20]. Wang et al. used the phase-locking value (PLV) as features and designed graph convolutional neural networks (P-GCNN) to extract implicit emotional information more effectively [Citation21]. Gonuguntla et al. used the PLV method to measure the synchrony between different brain regions during emotional perception tasks [Citation22]. Different from functional brain network analysis, effective brain network analysis focuses on revealing the driving relationship between any pair of brain regions. Among these methods, Granger causality analysis is one of the most widely used. Chen et al. analyzed the causal relationships between EEG signals through Granger causality to determine the brain networks associated with specific emotional states; the results indicate that this method can identify the emotional states of participants with high accuracy [Citation23]. Chen et al. proposed a sparse Granger causality analysis (SGCA) method and applied it to emotion recognition tasks, improving classification accuracy and robustness [Citation24]. Gao et al. proposed a Student's t-based Granger causality for brain network analysis, which improves the robustness of causal inference in the presence of outliers and noise [Citation3].

Though these brain-connection-based methods have been successfully applied, one challenge that remains poorly solved is that there are significant differences between subjects even under the same emotional states [Citation25]. Therefore, some studies, such as those adopting transfer learning strategies, have been carried out to deal with this problem. Among these methods, transfer component analysis (TCA) is one of the simplest for adaptive domain transfer [Citation26]; it achieves domain adaptation by constructing a kernel matrix that minimizes the maximum mean discrepancy (MMD) between the source-domain and target-domain distributions. In essence, the performance of such methods depends on the selection of kernels and their parameters. However, in real applications, it is hard to provide the optimal kernels a priori [Citation27]. Alternatively, deep-learning-based domain-invariant models have earned much attention in recent studies [Citation28]. Luo et al. proposed a novel transfer strategy using a Wasserstein generative adversarial network (GAN) to handle the domain shift problem, which alleviates the differences between subjects to some degree [Citation29]. Recently, Lin et al. found that the transfer learning strategy showed excellent performance on classification problems, in particular cross-subject ones [Citation30]. Furthermore, Li et al. exploited this property in EEG-based classification and obtained more significant improvements than others [Citation31]; in their study, they designed a multisource transfer learning method to reduce the EEG differences between the target and source domains.

Thus, in our current work, considering that 1) directed brain network analysis has shown excellent performance in revealing the driving relationships among multiple brain regions [Citation23], and 2) the inception strategy is well suited to multi-scale structural feature extraction [Citation32], we designed a novel model for EEG-based emotion recognition that consists of Granger causality analysis for directed brain network estimation and a deep domain adaptation architecture for domain-invariant network structural feature extraction.

2. Methods

In this study, we tried to design a novel deep learning framework with inception blocks to extract structural features from brain networks for emotion recognition. To evaluate its efficiency in capturing discriminative network features for emotion classification, both the functional and effective brain networks were estimated from EEG of emotions.

2.1. Functional brain network estimation

In this study, to characterize the synchronized activity of multiple brain regions, we adopted the phase-locking value (PLV), which describes non-linear phase synchronization, to statistically measure the phase synchronization between two brain areas [Citation33]. Suppose there are two channels of EEG data $x_i(t)$ and $x_k(t)$; their corresponding instantaneous phases can be calculated via the Hilbert transform as [Citation34]

(1) $\varphi(t) = H(x(t)) = \frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{x(\tau)}{t-\tau}\,d\tau$

Subsequently, the functional brain network can be described as

(2) $PLV_{ik} = \frac{1}{N}\left|\sum_{n=1}^{N} e^{\,j\Delta\varphi_{ik}(n)}\right|$

where $N$ is the total number of time points, $\Delta\varphi_{ik}(n) = \varphi_i(n) - \varphi_k(n)$ represents the instantaneous phase difference between time series $i$ and time series $k$, and $j$ is the imaginary unit.
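To make the computation concrete, here is a minimal NumPy/SciPy sketch of Equations (1) and (2). This is an illustrative re-implementation, not the authors' code; the function names `plv` and `plv_matrix` are our own:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length 1-D signals (Eq. (2))."""
    # Instantaneous phases via the analytic signal (Hilbert transform, Eq. (1))
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    dphi = phi_x - phi_y
    return np.abs(np.mean(np.exp(1j * dphi)))

def plv_matrix(eeg):
    """Symmetric functional-network matrix for EEG of shape (channels, samples)."""
    phases = np.angle(hilbert(eeg, axis=1))
    n_ch = eeg.shape[0]
    W = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for k in range(i + 1, n_ch):
            W[i, k] = W[k, i] = np.abs(np.mean(np.exp(1j * (phases[i] - phases[k]))))
    return W
```

Two signals with a constant phase lag yield a PLV close to 1, while independent signals yield a value near 0, which is the property the functional network exploits.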

2.2. Effective brain network estimation

2.2.1. Granger analysis

In the current study, we adopted Granger causality analysis (GCA) to estimate the effective brain network, whose kernel is to solve the following multivariate autoregressive (MVAR) model:

(3) $\begin{bmatrix} x_{1,i} \\ \vdots \\ x_{D,i} \end{bmatrix} = \sum_{r=1}^{q} A_r \begin{bmatrix} x_{1,i-r} \\ \vdots \\ x_{D,i-r} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1,i} \\ \vdots \\ \varepsilon_{D,i} \end{bmatrix}, \quad A_r = \begin{bmatrix} a_{11,r} & \cdots & a_{1D,r} \\ \vdots & \ddots & \vdots \\ a_{D1,r} & \cdots & a_{DD,r} \end{bmatrix}$

where $[x_{1,i}, \ldots, x_{D,i}]^T$ ($1 \le i \le N$) denotes the D-dimensional joint stationary time series at time point $i$, $q$ is the model order, and $\varepsilon_{k,i}$ is the residual of the $k$-th time series. $A_r \in \mathbb{R}^{D \times D}$ is the coefficient matrix at time lag $r$, and $N$ is the length of the time series. For the $k$-th time series, Equation (3) can be rewritten as

(4) $Y_k = u g_k + E_k, \quad u = \begin{bmatrix} x_{1,q} & \cdots & x_{D,q} & \cdots & x_{1,1} & \cdots & x_{D,1} \\ \vdots & & \vdots & & \vdots & & \vdots \\ x_{1,N-1} & \cdots & x_{D,N-1} & \cdots & x_{1,N-q} & \cdots & x_{D,N-q} \end{bmatrix} \in \mathbb{R}^{(N-q)\times(D \cdot q)}$

where $u$ is the lagged design matrix, $Y_k = [x_{k,q+1}, \ldots, x_{k,N}]^T \in \mathbb{R}^{(N-q)\times 1}$ is the measurement vector, $g_k = [a_{k1,1}, \ldots, a_{kD,1}, \ldots, a_{k1,q}, \ldots, a_{kD,q}]^T \in \mathbb{R}^{(D \cdot q)\times 1}$ is the MVAR coefficient vector for the $k$-th time series, and $E_k = [\varepsilon_{k,q+1}, \ldots, \varepsilon_{k,N}]^T \in \mathbb{R}^{(N-q)\times 1}$ is the measurement noise.

Once the model coefficients in Equation (4) have been estimated, the causal relationship between any pair of time series $Y_k$ and $Y_h$ can be inferred. Theoretically, the causal interaction between $Y_k$ and $Y_h$ is measured according to the following principle: if $Y_h$ improves the prediction of $Y_k$ when all the other processes are also included, we say $Y_h$ G-causes $Y_k$. Mathematically, this causal interaction can be evaluated by the logarithmic ratio between the restricted and unrestricted noise covariances:

(5) $l_{h\to k|\Delta} = \ln \frac{\Sigma^{*}_{k,k}}{\Sigma_{k,k}}, \quad \Delta = [1,\ldots,D]\setminus[h,k], \; \Delta \in \mathbb{R}^{D-2}$

where $\Sigma^{*}_{k,k}$ denotes the noise covariance of the $k$-th equation in the restricted model (which omits the influence of the $h$-th time series) and $\Sigma_{k,k}$ is that of the unrestricted model (which considers all the time series). Practically, the statistical significance of $Y_h$ causing $Y_k$ can be determined via the F-statistic [Citation35,Citation36] based on the evaluated $l_{h\to k|\Delta}$. In our current work, if the p-value is less than 0.05, we judge that there is a significant causal interaction [Citation37].

2.2.1.1. Traditional Granger causality analysis

Traditional Granger causality analysis is usually solved with the least squares method, which takes the form

(6) $\arg\min_{g_k} f_k(g_k) = \|Y_k - u g_k\|_2^2$

where $\|\cdot\|_2$ denotes the $L_2$ norm. In our current study, we name this method LS-GCA to differentiate it from the other GCAs.
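As an illustrative sketch (not the authors' implementation, and omitting the F-test for significance described above), the pairwise causality index of Equation (5) can be estimated with ordinary least squares as follows; `mvar_design` and `ls_gca` are hypothetical helper names:

```python
import numpy as np

def mvar_design(X, q):
    """Lagged design matrix u of Eq. (4) for data X of shape (D, N)."""
    D, N = X.shape
    # Row t collects lags 1..q of every channel for predicting time point q + t
    return np.column_stack([X[:, q - r:N - r].T for r in range(1, q + 1)])

def ls_gca(X, q=2):
    """Pairwise Granger causality l_{h->k} via least squares (Eqs. (4)-(6))."""
    D, N = X.shape
    u_full = mvar_design(X, q)
    L = np.zeros((D, D))  # L[h, k]: strength of the influence h -> k
    for k in range(D):
        Yk = X[k, q:]
        # Unrestricted model: lags of all channels as predictors
        g, *_ = np.linalg.lstsq(u_full, Yk, rcond=None)
        var_full = np.var(Yk - u_full @ g)
        for h in range(D):
            if h == k:
                continue
            # Restricted model: omit the lags of channel h
            keep = [d for d in range(D) if d != h]
            u_res = mvar_design(X[keep], q)
            g_r, *_ = np.linalg.lstsq(u_res, Yk, rcond=None)
            var_res = np.var(Yk - u_res @ g_r)
            L[h, k] = np.log(var_res / var_full)  # Eq. (5)
    return L
```

When channel 0 drives channel 1 with a one-sample lag, the index `L[0, 1]` is large while `L[1, 0]` stays near zero, matching the asymmetry an effective (directed) network is meant to capture.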

2.2.1.2. Granger causality analysis solved by LASSO

In real applications, traditional Granger causality for EEG analysis is easily influenced by various kinds of noise, and there are usually redundant linkages in the resulting networks. In fact, recent neuroscience research has revealed the high efficiency of the brain for neural cognition, which also indicates a sparse brain network structure for information processing [Citation38]. Thus, to alleviate the influence of EEG noise and characterize the sparse structure of the brain network during cognitive processing, the least absolute shrinkage and selection operator (Lasso) based Granger causality analysis model (Lasso-GCA) has been proposed [Citation39]. Mathematically, Lasso-GCA characterizes the causal interactions among multiple time series in a sparse way, which effectively preserves the strong causal links between EEG sensors and eliminates those with weak causality [Citation24]. To achieve this goal, we can impose the Lasso penalty on the coefficients [Citation18]. Thus, Equation (6) can be rewritten as

(7) $\arg\min_{g_k} f_k(g_k) = \|Y_k - u g_k\|_2^2 + \lambda \|g_k\|_1$

where $\lambda \ge 0$ is the regularization parameter. Theoretically, the larger $\lambda$ is, the sparser the network structure becomes. In fact, the above equation is a well-known convex optimization problem whose solution is equivalent to that of the least angle regression (LARS) method [Citation40].
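Equation (7) can be solved by any Lasso solver. As a rough, self-contained sketch we use proximal gradient descent (ISTA) rather than the LARS solver referenced in the text; the function name and iteration count are our own choices:

```python
import numpy as np

def lasso_fit(u, y, lam, n_iter=1000):
    """Minimise ||y - u g||_2^2 + lam * ||g||_1 (Eq. (7)) via ISTA."""
    g = np.zeros(u.shape[1])
    # Step size 1/L, where L = 2 * sigma_max(u)^2 is the gradient's Lipschitz constant
    step = 1.0 / (2.0 * np.linalg.norm(u, 2) ** 2)
    for _ in range(n_iter):
        grad = 2.0 * u.T @ (u @ g - y)
        z = g - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty
        g = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return g
```

Increasing `lam` drives more coefficients exactly to zero, which is how the sparser network structure mentioned in the text arises.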

2.3. Brain inception neural network (BiNN)

To extract the structural features of brain networks, we designed a novel brain inception neural network. Specifically, we used 2-D convolution blocks to extract features from the brain network matrix and inception blocks to ensure the diversity of the features. Figure 1 illustrates the structure of our model, and Figure 2 illustrates the process of constructing brain networks and extracting features with the deep learning model: we directly take the edge connection matrix of each brain network as the feature matrix input to the deep learning model, which then performs feature extraction on the brain network connection matrix.

Figure 1. The structure of brain inception network.


Figure 2. The flow chart of deep learning model for brain network construction and brain network edge connection input.


In addition, we used three fully connected layers and a SoftMax activation function to predict the probability of each emotional state. The emotional classification loss $L_p$ is calculated by cross-entropy as

(8) $L_p = -\frac{1}{N}\sum_{i}\sum_{c=1}^{M} y_{ic}\log(p_{ic})$

where $y_{ic}$ is the predefined emotional label of the $i$-th segment and $p_{ic}$ is the predicted emotion probability.

The deep domain adaptation architecture is adopted to further alleviate the domain differences between participants [Citation41]. The loss function of the domain discriminator can be represented as

(9) $L_d = -\frac{1}{N}\sum_i \left[ y_i^d \log(p_i^d) + (1 - y_i^d)\log(1 - p_i^d) \right]$

where $y_i^d$ is the domain label of the $i$-th segment and $p_i^d$ is the predicted domain probability.

Generally, in our current study, we utilized a gradient reversal layer (GRL) for adversarial learning [Citation41]. Thus, the loss of the brain inception network can be represented as

(10) $L_f = -\frac{1}{N}\sum_i\sum_{c=1}^{M} y_{ic}\log(p_{ic}) - \frac{\lambda}{N}\sum_i \left[ y_i^d \log(p_i^d) + (1 - y_i^d)\log(1 - p_i^d) \right]$

The learning rate decreases as training progresses, which can be expressed as

(11) $\eta_p = \frac{\eta_0}{(1 + \alpha p)^{\omega}}$

where $\eta_p \in (0, 0.0005]$ is the learning rate, with $\eta_0 = 0.0005$, $\alpha = 10$, and $\omega = 0.75$. The detailed configuration of the BiNN model is summarized in Table 1, the parameters of the inception block are given in Table 2, and the model training hyper-parameters are shown in Table 3.
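The training objective can be sketched in plain NumPy; this is an illustrative re-implementation of Equations (8)-(11) with hypothetical function names, not the authors' training code:

```python
import numpy as np

def emotion_loss(p_class, y_class):
    """Cross-entropy classification loss L_p of Eq. (8); arrays of shape (N, M)."""
    return -np.mean(np.sum(y_class * np.log(p_class), axis=1))

def domain_loss(p_dom, y_dom):
    """Binary cross-entropy domain-discriminator loss L_d of Eq. (9)."""
    return -np.mean(y_dom * np.log(p_dom) + (1 - y_dom) * np.log(1 - p_dom))

def total_loss(p_class, y_class, p_dom, y_dom, lam=1.0):
    """Combined adversarial loss L_f of Eq. (10); the GRL flips the domain gradient."""
    return emotion_loss(p_class, y_class) + lam * domain_loss(p_dom, y_dom)

def lr_schedule(p, eta0=0.0005, alpha=10.0, omega=0.75):
    """Decaying learning rate eta_p of Eq. (11); p in [0, 1] is training progress."""
    return eta0 / (1.0 + alpha * p) ** omega
```

During adversarial training, the gradient reversal layer negates the domain-loss gradient flowing into the feature extractor, so the extractor learns features that confuse the domain discriminator while still minimizing $L_p$.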

Table 1. Parameters of the BINN.

Table 2. Parameters of the Inception block.

Table 3. Model training hyper-parameters.

2.4. EEG dataset in our current study

To verify the efficiency of our proposed model for EEG-based emotion recognition, we conducted experiments on one public emotional database, i.e. SEED [Citation13]. This database includes both facial and EEG signals recorded from 15 subjects (7 males and 8 females; age: mean 23.27, SD 2.37). During the experiments, the subjects were asked to watch film clips with different emotional tags, i.e. positive, neutral, and negative. Each emotion tag has five film clips in one experiment, and the duration of each film clip is about 4 min, giving 15 trials per experiment in total. To explore the stable patterns of different emotions, all participants were required to perform the experiment in three sessions, with an interval of one week or longer between sessions. The EEG signals were recorded with an ESI NeuroScan system at a sampling rate of 1000 Hz from a 62-channel active AgCl electrode cap placed according to the international 10–20 system. More details can be found at https://bcmi.sjtu.edu.cn/~seed/seed.html.

2.5. Emotion EEG preprocessing

The EEG preprocessing mainly consists of the following steps: 1) down-sampling the original data to 200 Hz; 2) segmenting the EEG into subsegments with a 1-s window without overlap; 3) correcting the baseline of each subsegment; 4) re-referencing each subsegment to the common average reference; and 5) filtering the data with a 5th-order Butterworth band-pass filter of 1–50 Hz.
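The steps above can be sketched with SciPy as follows. This is an illustrative ordering that applies the band-pass filter right after down-sampling, before segmentation; the function name and array shapes are our own assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt

def preprocess(eeg, fs=1000, fs_new=200, win_s=1.0):
    """Preprocess EEG of shape (channels, samples) into 1-s segments."""
    # 1) Down-sample to 200 Hz (anti-aliasing handled by decimate)
    eeg = decimate(eeg, fs // fs_new, axis=1, zero_phase=True)
    # 5) 5th-order Butterworth band-pass, 1-50 Hz (zero-phase filtering)
    b, a = butter(5, [1, 50], btype="bandpass", fs=fs_new)
    eeg = filtfilt(b, a, eeg, axis=1)
    # 2) Segment into non-overlapping 1-s windows
    w = int(win_s * fs_new)
    out = []
    for i in range(0, eeg.shape[1] - w + 1, w):
        seg = eeg[:, i:i + w]
        seg = seg - seg.mean(axis=1, keepdims=True)  # 3) baseline correction
        seg = seg - seg.mean(axis=0, keepdims=True)  # 4) common average reference
        out.append(seg)
    return np.stack(out)  # shape (n_segments, channels, w)
```

After the common average reference step, the mean over channels is zero at every time point, which is a quick sanity check on the output.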

3. Results

In the current study, we mainly utilized classification accuracy to evaluate the performance of our proposed method for emotion recognition. Specifically, we compared the performance of EEG activation features against EEG network features, and the performance of our proposed model against other widely used classifiers. In total, there are four types of features, i.e. features from the EEG functional network (PLV), features from the EEG effective network (GCA and Lasso-GCA), and features from the EEG energy distribution (DE) [Citation13]. The compared classifiers include shallow classifiers, i.e. the multilayer perceptron (MLP) [Citation42] and support vector machine (SVM) [Citation43], and deep learning models such as DANN [Citation41], alongside our proposed brain inception neural network (BiNN). For each method, the leave-one-subject-out cross-validation (LOSOCV) procedure was adopted to evaluate the classification performance. Table 4 illustrates the classification results of the different features and classifiers. It is worth noting that bold values indicate the best results, and "ψ", "ξ", and "τ" denote significant improvements of BiNN (Lasso-GCA), BiNN (PLV), and BiNN (GCA), respectively, over the others at p < 0.05.
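The LOSOCV protocol itself is simple to express; below is a minimal sketch with a toy nearest-centroid classifier standing in for the actual models (all names here are hypothetical, for illustration only):

```python
import numpy as np

def fit_centroid(X, y):
    """Toy classifier: store one mean feature vector (centroid) per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_centroid(model, X):
    """Assign each sample to the class of its nearest centroid."""
    classes, cents = model
    d = ((X[:, None, :] - cents[None]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

def loso_cv(features, labels, subjects, fit, predict):
    """Leave-one-subject-out CV: train on all subjects but one, test on the
    held-out subject, and return the accuracy averaged over subjects."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        model = fit(features[~test], labels[~test])
        accs.append(np.mean(predict(model, features[test]) == labels[test]))
    return float(np.mean(accs))
```

Because every sample of the held-out subject is excluded from training, this protocol directly measures cross-subject generalization, which is the challenge the paper targets.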

Table 4. The cross-subject classification accuracy of models.

In general, Table 4 consistently illustrates that the performance of our proposed method for emotion recognition is superior to that of the others, which verifies the powerful capability of our proposed method in capturing discriminative structural features from brain networks. As shown in Figure 4, we validated the accuracy of the three brain network estimation algorithms under different numbers of inception blocks. We found that, for all three algorithms, an appropriate number of inception blocks can effectively improve emotion recognition performance; however, stacking an excessive number of inception blocks leads to model overfitting and seriously reduces recognition accuracy. Notably, the graphic features extracted by our proposed BiNN from the Lasso-GCA-based brain network achieve the best classification performance, which further indicates the efficiency of effective brain networks in offering discriminant features for emotion recognition, as illustrated in Figure 3. As shown in Figure 5, compared with BiNN (PLV), BiNN (GCA), and DANN (DE), the features extracted by BiNN (Lasso-GCA) make it easier to distinguish between different emotions; the other methods fail to reveal the differences among emotions, especially between the neutral and negative emotions. Figure 6 illustrates the capability of the different methods to correctly identify each emotional state through confusion matrices. The effective-brain-network features extracted by BiNN show the best performance, with the diagonal values of the confusion matrix being predominant. Furthermore, compared with traditional classifiers, transfer learning strategies such as DANN and BiNN further improve the classification performance for cross-subject emotion recognition.

Figure 3. Average accuracy of SEED dataset on different methods.


Figure 4. The accuracy of the three brain networks under different numbers of inception blocks.


Figure 5. t-SNE visualization of different emotion features from the last fully-connected layer before classification. (a) The features of the effective brain network extracted by BiNN(Lasso-GCA); (b) the features of the functional brain network extracted by BiNN(PLV); (c) the features of the effective brain network extracted by BiNN(GCA); (d) the EEG activation features extracted by DANN(DE).


Figure 6. The confusion matrix of classification results for different methods.


4. Discussion

In recent years, the effectiveness of brain network analysis for EEG-based classification and mechanism analysis has been verified in many works [Citation23]. For example, Wang et al. used PLV to construct a brain network that models multi-channel EEG features as graph signals for EEG emotion classification [Citation21]. However, how to extract discriminative structural features from brain networks remains a big challenge. Thus, in this paper, a novel deep learning structure was proposed to extract discriminative features from EEG-based brain networks for emotion recognition. To evaluate its effectiveness, we applied it to a publicly available emotional EEG dataset, i.e. SEED, and compared its performance with a series of commonly used methods such as SVM and DANN. Generally, the proposed method shows the best performance in terms of classification accuracy and feature distribution, which indicates that the discriminative structural features extracted from brain networks can further improve the classification capability of EEG for emotion recognition. Furthermore, we analyzed the working mechanism of the inception block and found that multi-scale convolution can extract useful information from brain networks more effectively. Focusing on the performance difference between the GCAs, we also found that Lasso-GCA is superior to traditional GCA for emotion recognition. In fact, many previous studies have revealed that Lasso-GCA can effectively alleviate the problem of redundant linkages in network construction [Citation18], which may yield more discriminative network structures for emotion recognition.

Through the confusion matrices illustrated in Figure 6, we found that the misclassification ratio between the neutral and negative emotions is higher than that between the positive and neutral emotions and that between the positive and negative emotions, which indicates that the network structures of the negative and neutral emotions may be close to each other. In fact, this observation is consistent with previous studies [Citation44,Citation45]. In the work conducted by Gao et al., the authors revealed the structural similarity of brain networks between neutral and negative emotions by analyzing the network characteristics of these two emotions [Citation3]. Zheng et al. characterized the average connection patterns of different emotions and found that the neural patterns of neutral and negative emotions are similar to each other [Citation44]. From the classification results, we also observed that the choice of model has a great influence on classification accuracy. In fact, traditional machine learning classifiers such as SVM and MLP cannot achieve satisfying classification performance, which may be because they are unable to extract powerful discriminative information from the brain network matrix, such as representative subnetworks and emotion-processing-related hub nodes [Citation46]. By using CNN-based inception blocks, the proposed model can effectively capture these discriminative structures across multiple spatial scales [Citation32]. In our current study, we also observed that the number of inception blocks influences the classification performance for different kinds of brain networks. Basically, as shown in Figure 4, the optimal number of inception blocks for Lasso-GCA and LS-GCA is 2, while the optimal number for PLV is 3. One thing to note is that, although the number of inception blocks affects the classification performance of the different kinds of brain networks, the performance of the GCAs for emotion recognition is consistently superior to that of PLV, with the brain networks estimated by Lasso-GCA always achieving the best classification accuracies. These results further verify the efficiency of Lasso-GCA in characterizing the crucial network structures of emotion processing. In addition to the inception blocks, transfer learning further enables our proposed method to extract features common across subjects. Moreover, compared with brain-network-based graphic features, traditional energy-based features such as PSD and DE cannot directly reveal the coupling relationships among brain regions and may not offer elaborate structural features for identifying cognitive states.

Generally, through the above results, we also observed that the classification performance of effective brain networks is better than that of functional brain networks. Focusing on the difference between Lasso-GCA and GCA, we observed that Lasso-GCA is superior to GCA in terms of emotion recognition accuracy and feature separability. In essence, due to the constraint of the L1-norm-based regularization term, Lasso-GCA characterizes sparser network structures of cognitive processing in the brain, which is consistent with the sparse network structure of the human brain for cognitive processing. Focusing on Figure 5, we observed that samples of the same class are more tightly clustered in the space spanned by the features extracted from the effective brain network, while samples of different classes become more separated, especially for the negative and neutral emotions. This may indicate that there are more driving relationships among multiple brain regions during emotion processing, compared with the synchronization relationships characterized by functional brain networks such as PLV [Citation47].

5. Conclusion

This paper proposed a novel method for emotion recognition by designing a transfer learning structure based on a convolutional neural network, which efficiently extracts discriminant structural features from brain networks. The results on the SEED dataset showed that our proposed BiNN can extract domain-invariant discriminative features from the network matrix, thereby significantly improving the classification accuracy of cross-subject emotion recognition tasks. Furthermore, we also verified the superiority of effective brain networks for EEG-based emotion analysis. Our proposed method may provide novel insight into the EEG analysis of emotions.

Disclosure statement

The authors declare that there is no relevant conflict of interest.

Additional information

Funding

This work was supported in part by the STI 2030-Major Projects under Grant 2022ZD0208500 and Grant 2022ZD02114000; in part by the National Natural Science Foundation of China (#61901077, #62171074, #U19A2082, and #61961160705); and in part by the Natural Science Foundation of Chongqing (CSTB2022NSCQ-MSX1171) and the Scientific and Technological Research Program of Chongqing Municipal Education Commission (KJQN202200640). This work was also supported by the National Science and Technology Innovation 2030 Major Project of China.

References

  • Poria S, Cambria E, Bajpai R, et al. A review of affective computing: from unimodal analysis to multimodal fusion. Inf Fusion. 2017;37:98–125. doi: 10.1016/j.inffus.2017.02.003.
  • Joshi VM, Ghongade RB. IDEA: intellect database for emotion analysis using EEG signal. J King Saud Univ-Comp Inf Sci. 2022;34(7):4433–4447. doi: 10.1016/j.jksuci.2020.10.007.
  • Gao X, Huang W, Liu Y, et al. A novel robust student’s t-based granger causality for EEG based brain network analysis. Biomed Signal Process Control. 2023;80:104321. doi: 10.1016/j.bspc.2022.104321.
  • Bocharov AV, Knyazev GG, Savostyanov AN. Depression and implicit emotion processing: an EEG study. Neurophysiol Clin. 2017;47(3):225–230. doi: 10.1016/j.neucli.2017.01.009.
  • Yu M, Xiao S, Hua M, et al. EEG-based emotion recognition in an immersive virtual reality environment: from local activity to brain network features. Biomed Signal Process Control. 2022;72:103349. doi: 10.1016/j.bspc.2021.103349.
  • Lee GP, Meador KJ, Loring DW, et al. Neural substrates of emotion as revealed by functional magnetic resonance imaging. Cogn Behav Neurol. 2004;17(1):9–17. doi: 10.1097/00146965-200403000-00002.
  • Soleymani M, Pantic M, Pun T. Multimodal emotion recognition in response to videos. IEEE Trans. Affective Comput. 2012;3(2):211–223. doi: 10.1109/T-AFFC.2011.37.
  • Hoeser T, Kuenzer C. Object detection and image segmentation with deep learning on earth observation data: a review-part i: evolution and recent trends. Remote Sens. 2020;12(10):1667. doi: 10.3390/rs12101667.
  • Eykholt K, et al. Robust physical-world attacks on deep learning visual classification. Proceedings of the IEEE conference on computer vision and pattern recognition, 2018. p. 1625–1634. doi: 10.1109/CVPR.2018.00175
  • Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. doi: 10.1016/j.media.2017.07.005.
  • Topic A, Russo M. Emotion recognition based on EEG feature maps through deep learning network. Eng Sci Technol Int J. 2021;24(6):1442–1454. doi: 10.1016/j.jestch.2021.03.012.
  • Salama ES, El-Khoribi RA, Shoman ME, et al. EEG-based emotion recognition using 3D convolutional neural networks. IJACSA. 2018;9(8):329–337. doi: 10.14569/IJACSA.2018.090843.
  • Zheng W-L, Lu B-L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans Auton Ment Dev. 2015;7(3):162–175.
  • Zheng W-L, Guo H-T, Lu B-L. Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network. in 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), 2015. pp. 154–157: IEEE. doi: 10.1109/NER.2015.7146583.
  • Wang Y, Wu Q, Wang C, et al. DE-CNN: an improved identity recognition algorithm based on the emotional electroencephalography. Comput Math Methods Med. 2020;2020:1–12. doi: 10.1155/2020/7574531.
  • Hwang S, Hong K, Son G, et al. Learning CNN features from DE features for EEG-based emotion recognition. Pattern Anal Applic. 2020;23(3):1323–1335. doi: 10.1007/s10044-019-00860-w.
  • Wang X-h, Zhang T, Xu X-m, et al. EEG emotion recognition using dynamical graph convolutional neural networks and broad learning system. in 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Madrid, SPAIN, DEC, 03-06, 2018. pp. 1240–1244: IEEE. doi: 10.1109/BIBM.2018.8621147.
  • Li P, Huang X, Zhu X, et al. Lp (p≤ 1) norm partial directed coherence for directed network analysis of scalp EEGs. Brain Topogr. 2018;31(5):738–752. doi: 10.1007/s10548-018-0624-0.
  • Wang Z-M, Zhou R, He Y, et al. Functional integration and separation of brain network based on phase locking value during emotion processing. IEEE Trans Cogn Dev Syst. 2020. doi: 10.1109/TCDS.2020.3001642.
  • Šverko Z, Vrankić M, Vlahinić S, et al. Complex Pearson correlation coefficient for EEG connectivity analysis. Sensors. 2022;22(4):1477. doi: 10.3390/s22041477.
  • Wang Z, Tong Y, Heng X. Phase-locking value based graph convolutional neural networks for emotion recognition. IEEE Access. 2019;7:93711–93722. doi: 10.1109/ACCESS.2019.2927768.
  • Gonuguntla V, Mallipeddi R, Veluvolu KC. Identification of emotion associated brain functional network with phase locking value. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2016. p. 4515–4518. doi: 10.1109/EMBC.2016.7591731.
  • Dongwei C, Fang W, Zhen W, et al. EEG-based emotion recognition with brain network using independent components analysis and granger causality. In 2013 International Conference on Computer Medical Applications (ICCMA), Sousse, Tunisia, Jan 20–22, 2013. IEEE; p. 1–6. doi: 10.1109/ICCMA.2013.6506157.
  • Chen D, Miao R, Deng Z, et al. Sparse granger causality analysis model based on sensors correlation for emotion recognition classification in electroencephalography. Front Comput Neurosci. 2021;15:684373. doi: 10.3389/fncom.2021.684373.
  • Tang C, Li Y, Chen B. Comparison of cross-subject EEG emotion recognition algorithms in the BCI controlled robot contest in world robot contest 2021. Brain Sci Adv. 2022;8(2):142–152. doi: 10.26599/BSA.2022.9050013.
  • Pan SJ, Tsang IW, Kwok JT, et al. Domain adaptation via transfer component analysis. IEEE Trans Neural Netw. 2011;22(2):199–210. doi: 10.1109/TNN.2010.2091281.
  • Li Z, Jing X-Y, Wu F, et al. Cost-sensitive transfer kernel canonical correlation analysis for heterogeneous defect prediction. Autom Softw Eng. 2018;25(2):201–245. doi: 10.1007/s10515-017-0220-7.
  • Ilse M, Tomczak JM, Louizos C, et al. DIVA: domain invariant variational autoencoders. In Medical Imaging with Deep Learning. PMLR; 2020. p. 322–348.
  • Luo Y, Lu B-L. Wasserstein-distance-based multi-source adversarial domain adaptation for emotion recognition and vigilance estimation. In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), online conference, Dec 9–12, 2021. IEEE; p. 1424–1428. doi: 10.1109/BIBM52615.2021.9669383.
  • Lin Y-P, Jung T-P. Improving EEG-based emotion classification using conditional transfer learning. Front Hum Neurosci. 2017;11:334. doi: 10.3389/fnhum.2017.00334.
  • Li J, Qiu S, Shen Y-Y, et al. Multisource transfer learning for cross-subject EEG emotion recognition. IEEE Trans Cybern. 2020;50(7):3281–3293. doi: 10.1109/TCYB.2019.2904052.
  • Szegedy C, et al. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. p. 1–9.
  • Li P, Liu H, Si Y, et al. EEG based emotion recognition by combining functional connectivity network and local activations. IEEE Trans Biomed Eng. 2019;66(10):2869–2881. doi: 10.1109/TBME.2019.2897651.
  • Zhang J, Wang N, Kuang H, et al. An improved method to calculate phase locking value based on Hilbert–Huang transform and its application. Neural Comput & Applic. 2014;24(1):125–132. doi: 10.1007/s00521-013-1510-z.
  • Li P, Huang X, Zhu X, et al. Robust brain causality network construction based on bayesian multivariate autoregression. Biomed Signal Process Control. 2020;58:101864. doi: 10.1016/j.bspc.2020.101864.
  • Seth AK. A MATLAB toolbox for granger causal connectivity analysis. J Neurosci Methods. 2010;186(2):262–273. doi: 10.1016/j.jneumeth.2009.11.020.
  • Bressler SL, Seth AK. Wiener–granger causality: a well established methodology. Neuroimage. 2011;58(2):323–329. doi: 10.1016/j.neuroimage.2010.02.059.
  • Yu R, Zhang H, An L, et al. Connectivity strength-weighted sparse group representation-based brain network construction for MCI classification. Hum Brain Mapp. 2017;38(5):2370–2383. doi: 10.1002/hbm.23524.
  • Shaw L, Routray A. A new framework to infer intra- and inter-brain sparse connectivity estimation for EEG source information flow. IEEE Sensors J. 2018;18(24):10134–10144. doi: 10.1109/JSEN.2018.2875377.
  • Efron B, Hastie T, Johnstone I, et al. Least angle regression. Ann Stat. 2004;32(2):407–499.
  • Ganin Y, Lempitsky V. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning. PMLR; 2015. p. 1180–1189.
  • Murtagh F. Multilayer perceptrons for classification and regression. Neurocomputing. 1991;2(5-6):183–197. doi: 10.1016/0925-2312(91)90023-5.
  • Noble WS. What is a support vector machine? Nat Biotechnol. 2006;24(12):1565–1567. doi: 10.1038/nbt1206-1565.
  • Zheng W-L, Zhu J-Y, Lu B-L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans Affective Comput. 2019;10(3):417–429. doi: 10.1109/TAFFC.2017.2712143.
  • Wilke C, Ding L, He B. Estimation of time-varying connectivity patterns through the use of an adaptive directed transfer function. IEEE Trans Biomed Eng. 2008;55(11):2557–2564. doi: 10.1109/TBME.2008.919885.
  • Wei X, Zhou L, Chen Z, et al. Automatic seizure detection using three-dimensional CNN based on multi-channel EEG. BMC Med Inform Decis Mak. 2018;18(S5):71–80. doi: 10.1186/s12911-018-0693-8.
  • van Diessen E, Numan T, van Dellen E, et al. Opportunities and methodological challenges in EEG and MEG resting state functional brain network research. Clin Neurophysiol. 2015;126(8):1468–1481. doi: 10.1016/j.clinph.2014.11.018.