Research Article

Deep multi-modal fusion network with gated unit for breast cancer survival prediction

Pages 883-896 | Received 21 Dec 2022, Accepted 02 May 2023, Published online: 11 May 2023
 

Abstract

Accurate survival prediction is a critical goal in the prognosis of breast cancer patients: it helps physicians make decisions that better serve the patient and guides appropriate treatment. Because breast cancer often arises from genetic abnormalities, researchers consider information such as gene expression and copy number variation alongside clinical data, and integrating these multi-modal data can improve the predictive power of models. However, breast cancer patient data are highly imbalanced, so fully extracting the characteristic information of each modality while exploiting their complementarity remains a challenge for survival prediction. To this end, we propose a deep multi-modal fusion network (DMMFN) that predicts the five-year survival of breast cancer patients by integrating clinical data, copy number variation data, and gene expression data. The imbalanced dataset is first processed with the oversampling method SMOTE-NC. Abstract features of each modality are then extracted by a two-layer one-dimensional convolutional neural network and a bi-directional long short-term memory network. Next, gated multimodal units dynamically adjust the weight of each modality to obtain fused features. Finally, the fused features are fed into a MaxoutMLP classifier to produce the prediction. We conducted experiments on the METABRIC dataset to verify the effectiveness of the multi-modal data and compared DMMFN with other methods; the comprehensive performance evaluation shows that DMMFN achieves better prediction performance.
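
To make the fusion step concrete, the sketch below shows how a gated multimodal unit and a Maxout classifier head could be wired together in PyTorch for the three modalities named in the abstract. It is a minimal illustration under stated assumptions, not the authors' implementation: the feature dimensions, the softmax gating over three modalities, and the two-piece Maxout head are illustrative choices, and the CNN/BiLSTM feature extractors and SMOTE-NC preprocessing are omitted.

# Minimal PyTorch sketch of gated multimodal fusion plus a Maxout head.
# Layer sizes and gating details are assumptions for illustration only.
import torch
import torch.nn as nn

class GatedMultimodalUnit(nn.Module):
    """Fuse per-modality features with learned, input-dependent gate weights."""

    def __init__(self, dims, hidden_dim):
        super().__init__()
        # One projection per modality into a shared hidden space.
        self.projections = nn.ModuleList(nn.Linear(d, hidden_dim) for d in dims)
        # Gate network sees the concatenated inputs and emits one weight per modality.
        self.gate = nn.Linear(sum(dims), len(dims))

    def forward(self, inputs):
        hidden = [torch.tanh(p(x)) for p, x in zip(self.projections, inputs)]
        weights = torch.softmax(self.gate(torch.cat(inputs, dim=-1)), dim=-1)
        stacked = torch.stack(hidden, dim=1)          # (batch, n_mod, hidden)
        return (stacked * weights.unsqueeze(-1)).sum(dim=1)

class MaxoutMLP(nn.Module):
    """Maxout classifier head: element-wise max over k parallel linear pieces."""

    def __init__(self, in_dim, out_dim, k=2):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim * k)
        self.out_dim, self.k = out_dim, k

    def forward(self, x):
        return self.linear(x).view(-1, self.out_dim, self.k).max(dim=-1).values

if __name__ == "__main__":
    # Hypothetical feature sizes for clinical, CNV, and gene-expression inputs.
    clin, cnv, expr = torch.randn(8, 25), torch.randn(8, 200), torch.randn(8, 400)
    gmu = GatedMultimodalUnit(dims=[25, 200, 400], hidden_dim=128)
    head = MaxoutMLP(128, 2)
    logits = head(gmu([clin, cnv, expr]))
    print(logits.shape)  # torch.Size([8, 2])

The softmax gate makes the modality weights sum to one for each sample, which is one plausible way to realise the "dynamically adjusted weight coefficients" described in the abstract; other gating schemes (e.g. per-modality sigmoid gates) are equally possible.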

Disclosure statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Funding

This work was supported by the Scientific Research Plan Projects of Education Department of Jiangxi Province of China under the Grant No. GJJ160554, the Talent Plan Project of Fuzhou City of Jiangxi Province of China under the Grant No. 2021ED008, and the Opening Project of Jiangxi Key Laboratory of Cybersecurity Intelligent Perception under the Grant No. JKLCIP202202.
