Improvement of automatic building region extraction based on deep neural network segmentation

Pages 393-408 | Received 24 Aug 2022, Accepted 20 Mar 2023, Published online: 06 Apr 2023

References

  • Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615
  • Boykov, Y., & Kolmogorov, V. (2004). An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9), 1124–1137. https://doi.org/10.1109/TPAMI.2004.60
  • Brostow, G. J., Shotton, J., Fauqueur, J., & Cipolla, R. (2008). Segmentation and recognition using structure from motion point clouds. In European conference on computer vision (pp. 44–57).
  • Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 248–255).
  • Fan, T., Wang, G., Li, Y., & Wang, H. (2020). Ma-net: A multi-scale attention network for liver and tumor segmentation. IEEE Access, 8, 179656–179665. https://doi.org/10.1109/ACCESS.2020.3025372
  • Fang, W., Ding, Y., Zhang, F., & Sheng, V. S. (2019). DOG: A new background removal for object recognition from images. Neurocomputing, 361(7), 85–91. https://doi.org/10.1016/j.neucom.2019.05.095
  • Femiani, J., Para, W. R., Mitra, N., & Wonka, P. (2018). Facade segmentation in the wild. ArXiv Preprint, arXiv:1805.08634.
  • Fond, A., Berger, M. O., & Simon, G. (2021). Model-image registration of a building's facade based on dense semantic segmentation. Computer Vision and Image Understanding, 206, Article ID 103185. https://doi.org/10.1016/j.cviu.2021.103185
  • Futagami, T., Hayasaka, N., & Onoye, T. (2020). Fast and robust building extraction based on HSV color analysis using color segmentation and GrabCut. SICE Journal of Control, Measurement, and System Integration, 13(3), 97–106. https://doi.org/10.9746/jcmsi.13.97
  • Iwai, M., Futagami, T., Hayasaka, N., & Onoye, T. (2020). Acceleration of automatic building extraction via color-clustering analysis. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 103(12), 1599–1602. https://doi.org/10.1587/transfun.2020SML0004
  • Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431–3440).
  • Men, K., Chen, X., Yang, B., Zhu, J., Yi, J., Wang, S., Li, Y., & Dai, J. (2021). Automatic segmentation of three clinical target volumes in radiotherapy using lifelong learning. Radiotherapy and Oncology, 157, 1–7. https://doi.org/10.1016/j.radonc.2020.12.034
  • Mo, Y., Wu, Y., Yang, X., Liu, F., & Liao, Y. (2022). Review the state-of-the-art technologies of semantic segmentation based on deep learning. Neurocomputing, 493(7), 626–646. https://doi.org/10.1016/j.neucom.2022.01.005
  • Ribani, R., & Marengoni, M. (2019). A survey of transfer learning for convolutional neural networks. In 32nd SIBGRAPI conference on graphics, patterns and images tutorials (pp. 44–57).
  • Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International conference on medical image computing and computer-assisted intervention (pp. 234–241).
  • Rother, C., Kolmogorov, V., & Blake, A. (2004). GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 23(3), 309–314. https://doi.org/10.1145/1015706.1015720
  • Shao, H., Svoboda, T., & Van Gool, L. (2003). Zubud: Zurich buildings database for image based recognition (Tech. Rep. No. 260). Computer Vision Lab, Swiss Federal Institute of Technology, Switzerland.
  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. ArXiv Preprint, arXiv:1409.1556.
  • Sklansky, J. (1972). Measuring concavity on a rectangular mosaic. IEEE Transactions on Computers, 21(12), 1355–1364. https://doi.org/10.1109/T-C.1972.223507
  • Tan, M., & Le, Q. (2019). Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105–6114).
  • Tan, M., & Le, Q. (2021). Efficientnetv2: Smaller models and faster training. In International conference on machine learning (pp. 10096–10106).
  • Tang, W., Wang, Y., Zou, X., Li, Y., Deng, C., & Cui, J. (2021). Visualization of GNSS multipath effects and its potential application in IGS data processing. Journal of Geodesy, 95(9), 1–13. https://doi.org/10.1007/s00190-021-01559-9
  • Toft, C., Turmukhambetov, D., Sattler, T., Kahl, F., & Brostow, G. J. (2020). Single-image depth prediction makes feature matching easier. In European conference on computer vision (pp. 473–492).
  • Ueno, D., Yoshida, H., & Iiguni, Y. (2016). Automated GrowCut with multilevel seed strength value for building image. Transactions of the Institute of Systems, Control and Information Engineers, 29(6), 266–274. https://doi.org/10.5687/iscie.29.266 (in Japanese).
  • Wang, Y., Ren, T., Zhong, S. H., Liu, Y., & Wu, G. (2018). Adaptive saliency cuts. Multimedia Tools and Applications, 77(17), 22213–22230. https://doi.org/10.1007/s11042-018-5859-y
  • Wu, K., Otoo, E., & Suzuki, K. (2009). Optimizing two-pass connected-component labeling algorithms. Pattern Analysis and Applications, 12(2), 117–135. https://doi.org/10.1007/s10044-008-0109-y
  • Zangenehnejad, F., & Gao, Y. (2021). GNSS smartphones positioning: Advances, challenges, opportunities, and future perspectives. Satellite Navigation, 2(1), 1–23. https://doi.org/10.1186/s43020-021-00054-y
  • Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., & Liang, J. (2019). Unet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Transactions on Medical Imaging, 39(6), 1856–1867. https://doi.org/10.1109/TMI.2019.2959609