Multimodal Emotion Recognition Framework Using a Decision-Level Fusion and Feature-Level Fusion Approach


REFERENCES

  • H. Xu, H. Zhang, K. Han, Y. Wang, Y. Peng, and X. Li, “Learning alignment for multimodal emotion recognition from speech,” arXiv preprint arXiv:1909.05645, 2019.
  • S. Nemati, R. Rohani, M. E. Basiri, M. Abdar, N. Y. Yen, and V. Makarenkov, “A hybrid latent space data fusion method for multimodal emotion recognition,” IEEE Access, Vol. 7, pp. 172948–64, 2019.
  • F. Rahdari, E. Rashedi, and M. Eftekhari, “A multimodal emotion recognition system using facial landmark analysis,” Iran. J. Sci. Technol. Trans. Electr. Eng., Vol. 43, no. 1, pp. 171–89, 2019.
  • L. Chen, K. Wang, M. Li, M. Wu, W. Pedrycz, and K. Hirota, “K-means clustering-based kernel canonical correlation analysis for multimodal emotion recognition in human-robot interaction,” IEEE Trans. Ind. Electron., Vol. 70, no. 1, pp. 1016–24, 2022.
  • E. S. Salama, R. A. El-Khoribi, M. E. Shoman, and M. A. W. Shalaby, “A 3D-convolutional neural network framework with ensemble learning techniques for multi-modal emotion recognition,” Egypt. Inform. J., Vol. 22, no. 2, pp. 167–76, 2021.
  • Z. He, Z. Li, F. Yang, L. Wang, J. Li, C. Zhou, and J. Pan, “Advances in multimodal emotion recognition based on brain–computer interfaces,” Brain Sci., Vol. 10, no. 10, p. 687, 2020.
  • W. Liu, J. L. Qiu, W. L. Zheng, and B. L. Lu, “Multimodal emotion recognition using deep canonical correlation analysis,” arXiv preprint arXiv:1908.05349, 2019.
  • H. Zhang, “Expression-EEG based collaborative multimodal emotion recognition using deep autoencoder,” IEEE Access, Vol. 8, pp. 164130–43, 2020.
  • M. G. Huddar, S. S. Sannakki, and V. S. Rajpurohit, “Multi-level context extraction and attention-based contextual inter-modal fusion for multimodal sentiment analysis and emotion classification,” Int. J. Multimed. Inf. Retr., Vol. 9, no. 2, pp. 103–12, 2020.
  • D. Dresvyanskiy, E. Ryumina, H. Kaya, M. Markitantov, A. Karpov, and W. Minker, “An audio-video deep and transfer learning framework for multimodal emotion recognition in the wild,” arXiv preprint arXiv:2010.03692, 2020.
  • Y. Jiang, W. Li, M. S. Hossain, M. Chen, A. Alelaiwi, and M. Al-Hammadi, “A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition,” Inf. Fusion, Vol. 53, pp. 209–21, 2020.
  • Y. Li, C. T. Ishi, K. Inoue, S. Nakamura, and T. Kawahara, “Expressing reactive emotion based on multimodal emotion recognition for natural conversation in human–robot interaction,” Adv. Robot., Vol. 33, no. 20, pp. 1030–41, 2019.
  • N. Samadiani, G. Huang, W. Luo, C. H. Chi, Y. Shu, R. Wang, and T. Kocaturk, “A multiple feature fusion framework for video emotion recognition in the wild,” Concurr. Comput. Pract. Exp., Vol. 34, no. 8, p. e5764, 2022.
  • L. Stappen, et al., “The MuSe 2021 multimodal sentiment analysis challenge: sentiment, emotion, physiological-emotion, and stress,” in Proceedings of the 2nd Multimodal Sentiment Analysis Challenge, pp. 5–14, 2021.
  • C. Akalya devi, D. Karthika Renuka, G. Pooventhiran, D. Harish, S. Yadav, and K. Thirunarayan, “Towards enhancing emotion recognition via multimodal framework,” J. Intell. Fuzzy Syst., 2022.
  • G. Sugitha, B. C. Preethi, and G. Kavitha, “Intrusion detection framework using stacked auto encoder based deep neural network in network,” Concurr. Comput. Pract. Exp., Vol. 34, no. 28, p. e7401, 2022.
  • P. Ravi Kiran Varma, R. R. Sathiya, and M. Vanitha, “Enhanced Elman spike neural network based intrusion attack detection in software defined network,” Concurr. Comput. Pract. Exp., p. e7503, 2022.
  • S. Poria, A. Hussain, and E. Cambria, Multimodal Sentiment Analysis. Springer Science and Business Media LLC, 2018.
  • M. F. H. Siddiqui and A. Y. Javaid, “A multimodal facial emotion recognition framework through the fusion of speech with visible and infrared images,” Multimodal Technol. Interact., Vol. 4, no. 3, p. 46, 2020.
  • S. Lin, M. Bai, F. Liu, L. Shen, and Y. Zhou, “Orthogonalization-guided feature fusion network for multimodal 2D+3D facial expression recognition,” IEEE Trans. Multimed., Vol. 23, pp. 1581–91, 2020.
  • F. H. Shajin, P. Rajesh, and S. Thilaha, “Bald eagle search optimization algorithm for cluster head selection with prolong lifetime in wireless sensor network,” J. Soft Comput. Eng. Appl., Vol. 1, no. 1, p. 7, 2020.
  • P. Rajesh, F. H. Shajin, and L. Umasankar, “A novel control scheme for PV/WT/FC/battery to power quality enhancement in micro grid system: a hybrid technique,” Energ. Sourc. Part A, pp. 1–17, 2021.
  • F. H. Shajin and P. Rajesh, “FPGA realization of a reversible data hiding scheme for 5G MIMO-OFDM system by chaotic key generation-based Paillier cryptography along with LDPC and its side channel estimation using machine learning technique,” J. Circuits Syst. Comput., Vol. 31, no. 5, p. 2250093, 2021.
  • P. Rajesh, F. H. Shajin, B. Mouli Chandra, and B. N. Kommula, “Diminishing energy consumption cost and optimal energy management of photovoltaic aided electric vehicle (PV-EV) by GFO-VITG approach,” Energ. Sourc. Part A, pp. 1–19, 2021.
  • E. Batbaatar, M. Li, and K. H. Ryu, “Semantic-emotion neural network for emotion recognition from text,” IEEE Access, Vol. 7, pp. 111866–78, 2019.
  • S. Siriwardhana, T. Kaluarachchi, M. Billinghurst, and S. Nanayakkara, “Multimodal emotion recognition with transformer-based self supervised feature fusion,” IEEE Access, Vol. 8, pp. 176274–85, 2020.
  • K. Dashtipour, M. Gogate, E. Cambria, and A. Hussain, “A novel context-aware multimodal framework for Persian sentiment analysis,” Neurocomputing, Vol. 457, pp. 377–88, 2021.
  • J. P. Singh, A. Kumar, N. P. Rana, and Y. K. Dwivedi, “Attention-based LSTM network for rumor veracity estimation of tweets,” Inf. Syst. Front., pp. 1–16, 2020.
  • M. Li, et al., “Multimodal emotion recognition and state analysis of classroom video and audio based on deep neural network,” J. Interconnect. Netw., p. 2146011, 2022.
  • Y. Cimtay, E. Ekmekcioglu, and S. Caglar-Ozhan, “Cross-subject multimodal emotion recognition based on hybrid fusion,” IEEE Access, Vol. 8, pp. 168865–78, 2020.
  • D. Liu, L. Chen, Z. Wang, and G. Diao, “Speech expression multimodal emotion recognition based on deep belief network,” J. Grid Comput., Vol. 19, no. 2, pp. 1–13, 2021.
  • H. Huang, Z. Hu, W. Wang, and M. Wu, “Multimodal emotion recognition based on ensemble convolutional neural network,” IEEE Access, Vol. 8, pp. 3265–71, 2019.
  • C. Li, Z. Bao, L. Li, and Z. Zhao, “Exploring temporal representations by leveraging attention-based bidirectional LSTM-RNNs for multi-modal emotion recognition,” Inf. Process. Manag., Vol. 57, no. 3, p. 102185, 2020.
