
3D reconstruction of partial foot scans using different state of the art neural network approaches

Pages 105-114 | Received 29 Jun 2023, Accepted 13 Feb 2024, Published online: 07 Mar 2024
