Research Article

Precipitation forecasting with radar echo maps based on interactive spatiotemporal context with self-attention and the MIM model

Received 05 Nov 2023, Accepted 16 Mar 2024, Published online: 28 Mar 2024

References

  • Bonnet SM, Evsukoff A, Morales Rodriguez CA. 2020. Precipitation nowcasting with weather radar images and deep learning in São Paulo, Brasil. Atmosphere (Basel). 11(11):1157. doi: 10.3390/atmos11111157.
  • Cao Y, Xu J, Lin S, Wei F, Hu H. 2019. GCNet: non-local networks meet squeeze-excitation networks and beyond. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops.
  • Cho K, van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • Chung J, Gulcehre C, Cho K, Bengio Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • Devlin J, Chang M-W, Lee K, Toutanova K. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Graves A. 2012. Long short-term memory. In: Supervised sequence labelling with recurrent neural networks. Stud Comput Intell. 385:37–45.
  • Guo Y, Li C, Zhou D, Cao J, Liang H. 2022. Context-aware dynamic neural computational models for accurate Poly(A) signal prediction. Neural Netw. 152:287–299. doi: 10.1016/j.neunet.2022.04.025.
  • He K, Zhang X, Ren S, Sun J. 2016. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
  • Huang Z, Wang X, Huang L, Huang C, Wei Y, Liu W. 2019. CCNet: criss-cross attention for semantic segmentation. Proceedings of the IEEE/CVF International Conference on Computer Vision. 603–612.
  • Jaihuni M, Basak JK, Khan F, Okyere FG, Sihalath T, Bhujel A, Park J, Lee DH, Kim HT. 2022. A novel recurrent neural network approach in forecasting short term solar irradiance. ISA Trans. 121:63–74. doi: 10.1016/j.isatra.2021.03.043.
  • Khan ZN, Ahmad J. 2021. Attention induced multi-head convolutional neural network for human activity recognition. Appl Soft Comput. 110:107671. doi: 10.1016/j.asoc.2021.107671.
  • Kolisnik B, Hogan I, Zulkernine F. 2021. Condition-CNN: a hierarchical multi-label fashion image classification model. Expert Syst Appl. 182:115195. doi: 10.1016/j.eswa.2021.115195.
  • Krizhevsky A, Sutskever I, Hinton GE. 2012. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 25.
  • LeCun Y, Bengio Y, Hinton G. 2015. Deep learning. Nature. 521(7553):436–444. doi: 10.1038/nature14539.
  • Li C, Zhang J, Yao J. 2021. Streamer action recognition in live video with spatial-temporal attention and deep dictionary learning. Neurocomputing. 453:383–392. doi: 10.1016/j.neucom.2020.07.148.
  • Li W, Guo Y, Wang B, Yang B. 2023. Learning spatiotemporal embedding with gated convolutional recurrent networks for translation initiation site prediction. Pattern Recognit. 136:109234. doi: 10.1016/j.patcog.2022.109234.
  • Melis G, Kočiský T, Blunsom P. 2019. Mogrifier LSTM. arXiv preprint arXiv:1909.01792.
  • Shi X, Chen Z, Wang H, Yeung D-Y, Wong W-K, Woo W-C. 2015. Convolutional LSTM network: a machine learning approach for precipitation nowcasting. Adv Neural Inf Process Syst. 28:802–810.
  • Shi X, Gao Z, Lausen L, Wang H, Yeung D-Y, Wong W-K, Woo W-C. 2017. Deep learning for precipitation nowcasting: a benchmark and a new model. Adv Neural Inf Process Syst. 30:5617–5627.
  • Sun H, Wang H, Li Z, Gao M, Xu Z, Li J. 2019. Study on reflectivity data interpolation and mosaics for multiple Doppler weather radars. EURASIP J Wirel Commun Netw. 2019:1–10. doi: 10.1186/s13638-018-1318-8.
  • Sutskever I, Vinyals O, Le QV. 2014. Sequence to sequence learning with neural networks. Adv Neural Inf Process Syst. 27:3104–3112.
  • Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. 2017. Attention is all you need. Adv Neural Inf Process Syst. 30:5998–6008.
  • Wang X, Girshick R, Gupta A, He K. 2018. Non-local neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7794–7803.
  • Wang Y, Jiang L, Yang M-H, Li L-J, Long M, Fei-Fei L. 2018. Eidetic 3D LSTM: a model for video prediction and beyond. International Conference on Learning Representations. 1–14.
  • Wang Y, Long M, Wang J, Gao Z, Yu PS. 2017. PredRNN: recurrent neural networks for predictive learning using spatiotemporal LSTMs. Adv Neural Inf Process Syst. 30:879–888.
  • Wang Y, Zhang J, Zhu H, Long M, Wang J, Yu PS. 2019. Memory in memory: a predictive neural network for learning higher-order non-stationarity from spatiotemporal dynamics. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9154–9162.
  • Woo S, Park J, Lee J-Y, Kweon IS. 2018. CBAM: convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV). 3–19.
  • Xie J, Chen S, Zhang Y, Gao D, Liu T. 2021. Combining generative adversarial networks and multi-output CNN for motor imagery classification. J Neural Eng. 18(4):046026.
  • Yang B, Wang L, Wong DF, Shi S, Tu Z. 2021. Context-aware self-attention networks for natural language processing. Neurocomputing. 458:157–169. doi: 10.1016/j.neucom.2021.06.009.
  • Yang S, Zhou D, Cao J, Guo Y. 2022. Rethinking low-light enhancement via transformer-GAN. IEEE Signal Process Lett. 29:1082–1086. doi: 10.1109/LSP.2022.3167331.
  • Yang S, Zhou D, Cao J, Guo Y. 2023. LightingNet: an integrated learning method for low-light image enhancement. IEEE Trans Comput Imaging. 9:29–42. doi: 10.1109/TCI.2023.3240087.
