Research Article

Deep convolutional transformer network for hyperspectral unmixing

Article: 2268820 | Received 09 Jun 2023, Accepted 05 Oct 2023, Published online: 30 Oct 2023

ABSTRACT

Hyperspectral unmixing (HU) is one of the most important techniques for improving hyperspectral image analysis. HU aims to decompose each mixed pixel into a set of spectral signatures, commonly referred to as endmembers, and to estimate the fractional abundance of each endmember. Deep learning (DL) approaches have recently received great attention for HU. In particular, methods based on convolutional neural networks (CNNs) have performed exceptionally well on such tasks. However, the ability of CNNs to learn deep semantic features is limited, and their computational cost increases dramatically with the number of layers. The transformer addresses these issues by effectively representing high-level semantic features. In this article, we present a novel approach to HU that utilizes a deep convolutional transformer network. First, a CNN-based autoencoder (AE) is used to extract low-level features from the input image. Second, a tokenizer is applied for feature transformation. Third, a transformer module captures the deep semantic features derived from the tokens. Finally, a convolutional decoder reconstructs the input image. Experimental results on synthetic and real datasets demonstrate the effectiveness and superiority of the proposed method compared with other unmixing methods.
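The four-stage pipeline described in the abstract (CNN encoder, tokenizer, transformer, convolutional decoder) can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the authors' implementation: all layer widths, the number of tokens and transformer layers, and the softmax abundance head with a 1×1-convolution decoder (whose weights play the role of the endmember matrix) are assumptions for the example.

```python
import torch
import torch.nn as nn


class ConvTransformerUnmixing(nn.Module):
    """Hedged sketch of a deep convolutional transformer unmixing network:
    CNN encoder -> tokenizer -> transformer -> convolutional decoder.
    Hyperparameters are illustrative, not the paper's configuration."""

    def __init__(self, n_bands=198, n_endmembers=4, dim=64, n_tokens=16):
        super().__init__()
        # (1) CNN-based encoder: low-level spatial-spectral features
        self.encoder = nn.Sequential(
            nn.Conv2d(n_bands, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
        )
        # (2) tokenizer: soft-assign feature-map pixels to a few visual tokens
        self.token_attn = nn.Conv2d(dim, n_tokens, 1)
        # (3) transformer: deep semantic features over the token sequence
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # abundance head: per-pixel fractions over endmembers (sum to 1)
        self.abund = nn.Sequential(nn.Conv2d(dim, n_endmembers, 1), nn.Softmax(dim=1))
        # (4) convolutional decoder: 1x1 conv weights act as endmember spectra
        self.decoder = nn.Conv2d(n_endmembers, n_bands, 1, bias=False)

    def forward(self, x):                                   # x: (B, bands, H, W)
        f = self.encoder(x)                                 # (B, dim, H, W)
        B, C, H, W = f.shape
        attn = self.token_attn(f).flatten(2).softmax(-1)    # (B, n_tokens, HW)
        tokens = attn @ f.flatten(2).transpose(1, 2)        # (B, n_tokens, dim)
        tokens = self.transformer(tokens)                   # (B, n_tokens, dim)
        # project the refined tokens back onto pixels and fuse with local features
        f = f + (attn.transpose(1, 2) @ tokens).transpose(1, 2).reshape(B, C, H, W)
        a = self.abund(f)                                   # (B, endmembers, H, W)
        return self.decoder(a), a                           # reconstruction, abundances
```

Because the decoder is a bias-free 1×1 convolution applied to softmax abundances, the reconstruction of each pixel is a convex-style mixture of the learned column spectra, mirroring the linear mixing model that underlies most unmixing formulations.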

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 61871226 and 61571230, in part by the 2021 Open Research Fund of the Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense under Grant JSGP202204, and in part by the Jiangsu Geological Bureau Research Project under Grant 2023KY11.