Research Article

An efficient and accurate deep learning method for tree species classification that integrates depthwise separable convolution and dilated convolution using hyperspectral data

Article: 2307999 | Received 09 Oct 2023, Accepted 16 Jan 2024, Published online: 23 Jan 2024

ABSTRACT

Addressing accuracy and computational complexity challenges in hyperspectral image classification for small-sample, multi-species scenarios, we developed DSC-DC, a lightweight convolutional neural network based on depthwise separable convolution and dilated convolution, and trained it on the Teakettle Experimental Forest dataset (USA). In this study, DSC-DC achieved an overall accuracy (OA) of 99.83%, an average accuracy (AA) of 99.64%, and a Kappa coefficient of 0.9996. Compared with Support Vector Machine and K-Nearest Neighbors, it delivered markedly higher OA (by 3.88% to 7.55%) and AA (by 30.71% to 34.09%). Compared with Inception-V3, ResNet50, and MSR-3DCNN, DSC-DC was marginally more accurate (OA: 0.06% to 0.31%; AA: 0.32% to 3.64%) while reducing training time by factors of 3.5, 5, and 35, and prediction time by factors of 2, 3, and 17, respectively. DSC-DC also showed slightly better accuracy and efficiency than the optimal 5-layer structure of the 3D-CNN model. Applying DSC-DC to a hyperspectral dataset from the Jiepai branch of the Gaofeng State-Owned Forest Farm in Guangxi, China, further demonstrated the reliability, versatility, and practical potential of the model. This study provides a reliable and efficient reference solution for small-sample, multi-species tree classification tasks.
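To illustrate the two building blocks named in the abstract, the following is a minimal PyTorch sketch of a convolutional block that combines depthwise separable convolution with dilation. It is not the authors' published DSC-DC architecture; the channel counts, kernel size, dilation rate, and layer ordering are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableDilatedConv(nn.Module):
    """Illustrative block: dilated depthwise convolution followed by a pointwise convolution."""

    def __init__(self, in_channels, out_channels, dilation=2):
        super().__init__()
        # Depthwise convolution: one 3x3 filter per input channel (groups=in_channels).
        # Dilation enlarges the receptive field without adding parameters;
        # padding=dilation keeps the spatial size unchanged for a 3x3 kernel.
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size=3,
            padding=dilation, dilation=dilation,
            groups=in_channels, bias=False)
        # Pointwise (1x1) convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a hyperspectral patch with 32 (e.g. dimensionality-reduced) spectral
# features over a 9x9 spatial window; sizes are assumptions for illustration.
x = torch.randn(1, 32, 9, 9)
block = DepthwiseSeparableDilatedConv(32, 64, dilation=2)
print(block(x).shape)  # torch.Size([1, 64, 9, 9])
```

Factorizing a standard convolution into depthwise and pointwise steps is what keeps such a model lightweight, while the dilated depthwise kernel widens spatial context at no extra parameter cost, which is consistent with the efficiency gains reported above.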

Acknowledgments

We thank all those who contributed to this paper and express our gratitude to Cambridge Proofreading (https://proofreading.org/) for its expert linguistic services.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The original contributions presented in this study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Additional information

Funding

This research was jointly supported by the Key Research and Development Program of Yunnan Province, China (No. 202303AC100009), and the Ten Thousand Talent Plans for Young Top-notch Talents of Yunnan Province (No. YNWR-QNBJ-2018-184).