Research Article

Hyperspectral image classification using improved multi-scale block local binary pattern and bi-exponential edge-preserving smoother

Article: 2237654 | Received 19 Nov 2022, Accepted 12 Jul 2023, Published online: 27 Jul 2023

ABSTRACT

In this paper, a multi-strategy fusion (MSF) framework, based on an improved multi-scale block local binary pattern (MBLBP) and the bi-exponential edge-preserving smoother (BEEPS), is proposed for hyperspectral image (HSI) classification. First, the MBLBP operator is adopted to characterize the overall structural information of the HSI. Its averaging strategy allocates the same weight to every pixel in a local sub-region, so edges tend to be blurred because the operator is isotropic. To address this problem, the steering kernel is first introduced into MBLBP to learn the local structure prior of the HSI. Then, a support vector machine classifier is used to calculate the soft classification probabilities of the pixels. Furthermore, BEEPS is adopted to smooth the soft classification probability maps in the post-processing stage, with the purpose of further improving classification accuracy by considering context-aware information for each class label. Experiments are performed on three real hyperspectral datasets, namely Indian Pines, KSC, and Houston 2013. With only 1% of the labeled samples, 6 labeled samples per class, and 5 labeled samples per class randomly selected for training, respectively, the overall accuracy (kappa) obtained by MSF is 99.47% (99.40), 99.52% (99.47), and 94.25% (93.78), which is far better than the compared methods.

Introduction

Hyperspectral images (HSIs), collected by imaging sensors, record rich spectral and spatial information across hundreds of continuous spectral channels, which supports the fine recognition of various land-cover objects (Fu et al., Citation2016). Therefore, HSI has attracted more and more attention in application fields such as precision agriculture (Zhang et al., Citation2021), target detection (Y. Li et al., Citation2019; Ou et al., Citation2020), food safety (Aviara et al., Citation2022), and military applications (Hupel & Stütz, Citation2022). In particular, HSI classification, whose goal is to assign each pixel to a certain class, is an important and challenging task in remote sensing. In the past few decades, research often focused on spectral information alone to accomplish the classification task, and a large number of pixel-wise classification methods were proposed, such as random forests (RF) (Ham et al., Citation2005), sparse representation (Y. Chen et al., Citation2011), neural networks (Ratle et al., Citation2009), and support vector machines (SVM) (Melgani & Bruzzone, Citation2004). Among these algorithms, the SVM classifier provides competitive classification accuracy because it is relatively insensitive to the "curse of dimensionality", i.e., the situation in which high dimensionality and a lack of training samples lead to high computational complexity and may even degrade classifier performance (H. Liu et al., Citation2019; Peng et al., Citation2015).

Although pixel-wise classification algorithms have the advantages of conceptual simplicity and computational efficiency by fully utilizing spectral information, they often suffer from an undesired salt-and-pepper appearance because spatial information and the variability of spectral features are not appropriately addressed (Fauvel et al., Citation2013). In recent years, various spatial-spectral analysis technologies have been developed that exploit both spectral and spatial contextual information to complement pixel-wise classification. For example, Markov models have been adopted for HSI classification by integrating spatial and contextual information into the classifier (Cao et al., Citation2020; Jiang et al., Citation2020). However, this model has two disadvantages: heavy computation and insufficient samples to characterize the object of interest. Another family of spatial-spectral classification methods is based on kernel combination or fusion, e.g. morphological, composite, and graph kernels (Anand et al., Citation2021; Ergul & Bilgin, Citation2020). Kernel-based algorithms, particularly in combination with SVM, have been demonstrated to provide good classification performance for HSI (B. Kumar & Dikshit, Citation2017; S. Yang et al., Citation2019).

Inspired by the sparse coding mechanism of the human visual system (Olshausen & Field, Citation1996), sparse representation has proven to be an effective tool for HSI classification, based on the hypothesis that an unknown test pixel can be approximately represented by a few atoms among all training samples (Y. Chen et al., Citation2011). More recently, joint representation models that build on sparse and collaborative representation have emerged, such as the kernel-based joint sparse model, the structured joint sparse model, and multiscale adaptive sparse representation (Fang et al., Citation2014; Yan et al., Citation2019; W. Yang et al., Citation2020).

Another important family of spatial-spectral classification methods is based on image segmentation. First, based on the homogeneity of either texture or intensity, the HSI is segmented into different regions, where pixels within the same region can be deemed a spatial neighborhood (Sun et al., Citation2021); then, the pixel-wise classification results are integrated with the segmentation map by a majority voting rule. At present, different segmentation technologies, such as hierarchical segmentation, watershed, and minimum spanning forest, have been proposed for HSI classification (Akbari, Citation2019; Lv et al., Citation2020). In general, the segmentation technique has a great influence on the performance of segmentation-based algorithms. To obtain better segmentation results, multiple spatial-spectral classification algorithms can be combined, but this strategy may be time-consuming (Majdar & Ghassemian, Citation2020).

Recently, edge-preserving filtering (EPF) has played an important role in remote sensing and has been widely applied to image segmentation, image denoising, classification, detail enhancement, and image fusion (Kang et al., Citation2014; S. Li et al., Citation2013; Xing et al., Citation2021). As a pre-processing step, a large number of edge-preserving filtering techniques have been applied before performing the HSI classification task; representative methods include the bilateral filter (BF), the bi-exponential edge-preserving smoother (BEEPS), the curvature filter, the recursive filter, and the guided image filter (GIF) (Hao et al., Citation2022; J. Liao et al., Citation2019; Wan & Zhao, Citation2019). Among them, BF, BEEPS, and GIF have been broadly utilized for their simplicity and effectiveness. In particular, compared with BF and GIF, BEEPS provides competitive performance because it exhibits no ripples in the frequency domain and its per-pixel computational complexity is constant. Since the spatial variability caused by noise can be smoothed out with edge-preserving filtering, research has also shown that post-processing classification results with EPF is an important strategy for enhancing classifier performance (Wang et al., Citation2017). A complementary spectral-spatial method for HSI classification was developed that integrates a texture-smoothing pre-processing feature extraction method and a post-processing method based on edge-preserving filtering (Shi et al., Citation2022).

Besides, deep learning (DL) models have been extensively applied to HSI classification owing to their ability to exploit intrinsic, discriminative high-level features from the original images. An HSI classification method based on a stacked autoencoder was developed to obtain higher classification accuracy by using spatial-spectral features (Y. Chen et al., Citation2014). Later, a DL model based on the deep belief network (DBN) was presented (Tong et al., Citation2015). Since then, various deep learning methods, such as the recurrent neural network (RNN), generative adversarial network (GAN), random patches network (RPNet), and convolutional neural network (CNN), have been developed for HSI classification. Among these networks, CNN has attracted extensive attention because of its excellent classification ability. For instance, a 3-D-CNN-based feature extraction model with combined regularization was built to extract spectral-spatial deep features for HSI classification (Y. Chen et al., Citation2016). Furthermore, a supervised classification algorithm was developed that combines both spectral and spatial information in a unified Bayesian framework (Cao et al., Citation2018). Very recently, a deep CNN and Markov random field (MRF)-based two-stage classification framework was developed for HSI classification (Singh et al., Citation2021). To obtain higher classification accuracy with limited training samples, a multidimensional CNN combined with an attention mechanism was proposed (J. Liu et al., Citation2022).

Generally speaking, deep models can exploit robust, high-level features and provide results competitive with state-of-the-art methods in terms of classification accuracy. However, a deep network contains a large number of parameters and layers, which makes DL very time-consuming to train (G. Zhao et al., Citation2020). To alleviate this situation, the broad learning system (BLS) (C. Chen & Liu, Citation2018) was proposed. Compared with DL networks, BLS needs fewer network parameters and fewer labeled training samples, so its training time is significantly lower. At present, several spatial-spectral classification methods based on BLS have been developed for HSI classification (G. Zhao et al., Citation2020; L. Zhao et al., Citation2022).

As is well known, hyperspectral images consist of hundreds of contiguous spectral channels, many of which contain redundant information. Generally speaking, band selection (e.g. linear prediction error (LPE)) and linear projection (e.g. principal component analysis (PCA)) are the two main ways to reduce dimensionality. Recently, researchers have increasingly focused on nature-inspired optimization algorithms for feature selection in HSI classification. A three-dimensional discrete wavelet transform-based spatial feature extraction was proposed using improved whale-optimization-based band selection for HSI classification (Manoharan & Boggavarapu, Citation2021). A whale-optimization-based band selection for hyperspectral images was presented, which mimics the hunting behaviour of humpback whales (B. L. N. P. Kumar & Manoharan, Citation2021). A novel unsupervised multi-objective multi-verse optimizer-based band selection approach was developed for HSI classification (Sawant et al., Citation2022).

Apart from the global structure obtained from dimensionality reduction approaches, e.g. principal component analysis (PCA) and linear discriminant analysis (LDA), the local structure is also significant in practical applications (Uddin et al., Citation2021). The local binary pattern (LBP) is a simple and effective texture operator with low computational complexity, which effectively encodes the spatial structure of local texture for texture classification in pattern recognition and computer vision (Ye et al., Citation2020). The traditional LBP has been demonstrated to be a useful and powerful local descriptor for image micro-structures, but it also has shortcomings for recognizing and classifying scene categories (Y. Li et al., Citation2022): noisy images produce low-quality results for the basic LBP operator because it is built on a small spatial support area, and features extracted from a small local neighborhood window cannot characterize the larger-scale structures that may be salient textural features of the scene. To solve the above problems, the multiscale block LBP (MBLBP) was proposed to extract the overall structural information by averaging the values of sub-regions, which can prevent the loss of detailed features to a certain extent (Y. Li et al., Citation2022; S. Liao et al., Citation2007). Currently, the MBLBP operator has provided effective and robust performance in several research fields, such as target recognition, surface defect detection, image retrieval, and object tracking (Halidou et al., Citation2014; W. Li et al., Citation2015).

Previous studies have demonstrated that the MBLBP descriptor yields a relatively complete image representation by encoding both microstructures and macrostructures. Furthermore, post-processing is a critical step in enhancing the quality of classifiers. Inspired by these ideas, and in order to exploit both the overall structural information and edge-preserving filtering to improve HSI classification accuracy, a spatial-spectral HSI classification model based on the improved MBLBP and BEEPS algorithms is proposed in this paper. For convenience, the proposed HSI classification method, which adopts a multi-strategy fusion scheme, is called MSF.

The main contributions of our proposed framework are as follows:

An improved MBLBP descriptor, obtained by introducing the steering kernel, is applied to the HSI in the pre-processing stage. It captures the spatial correlation among neighboring pixels and mitigates, to a certain extent, the phenomena of "the same class having different spectra" and "different classes having the same spectrum".

BEEPS is applied to the initial probability maps in the post-processing stage, where the extracted context-aware information for each class label can significantly improve HSI classification accuracy. In other words, smoothing the probability maps compensates for the information loss of feature extraction.

Comparative experiments are performed on three real hyperspectral datasets, and the experimental results demonstrate that the proposed MSF method is superior to other state-of-the-art algorithms in terms of classification accuracy and visual classification maps, and is especially suitable for small-sample problems in HSI classification.

The rest of this paper is arranged as follows. Section 2 gives the flowchart of MSF framework and presents the description of IMBLBP methodology. Section 3 provides the experimental results and discussions on three real hyperspectral images. Finally, the paper is summarized and future research directions are given in Section 4.

Methodology

In view of the above description, this paper focuses on a multi-strategy supervised classification framework. For illustration, the diagram of the proposed classification framework is depicted in Figure 1, and it consists of three main strategies: (1) extract the overall structural information of the principal components (PCs); (2) acquire the original soft-classification probability maps; (3) smooth the soft-classification probability maps with BEEPS. Concretely, principal component analysis (PCA) is employed to transform the high-dimensional HSI into a lower-dimensional subspace. After dimensionality reduction, the proposed IMBLBP algorithm is applied to each principal component to characterize the overall structural information. The obtained IMBLBP features are then fed into the SVM classifier to generate the initial probability maps. Finally, BEEPS-based post-processing is applied to the initial probability maps, and the maximum-probability criterion is adopted to determine the final classification result.

Figure 1. Illustration of the proposed classification framework.
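For concreteness, the following is a minimal sketch of the three-strategy pipeline, assuming scikit-learn for PCA and the SVM; `imblbp_features` and `beeps_smooth` are hypothetical placeholders (stubbed here so the sketch runs) standing in for the IMBLBP descriptor and the BEEPS smoother detailed in the rest of the paper, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def imblbp_features(pcs):
    """Hypothetical placeholder: IMBLBP texture features per principal component.
    The real descriptor (Section 2) would replace this identity mapping."""
    return pcs

def beeps_smooth(prob_map, lam=0.5, sigma=7.0):
    """Hypothetical placeholder for the BEEPS edge-preserving smoother applied
    to one soft-classification probability map."""
    return prob_map

def msf_classify(hsi, train_mask, train_labels, n_pcs=7):
    """hsi: (H, W, B) cube; train_mask: (H, W) bool mask of training pixels;
    train_labels: labels of the training pixels in row-major order."""
    H, W, B = hsi.shape
    # Strategy 1: PCA to a low-dimensional subspace, then IMBLBP per component.
    pcs = PCA(n_components=n_pcs).fit_transform(hsi.reshape(-1, B)).reshape(H, W, n_pcs)
    feats = imblbp_features(pcs).reshape(H * W, -1)
    # Strategy 2: SVM soft-classification probability maps.
    svm = SVC(kernel="rbf", probability=True).fit(feats[train_mask.ravel()], train_labels)
    probs = svm.predict_proba(feats).reshape(H, W, -1)
    # Strategy 3: BEEPS post-processing of each class probability map, then argmax.
    smoothed = np.stack([beeps_smooth(probs[:, :, c]) for c in range(probs.shape[2])], axis=2)
    return svm.classes_[np.argmax(smoothed, axis=2)]
```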

In this section, the feature extraction based on the IMBLBP operator is introduced. The basic LBP operator works on a small spatial support region, so the bit-wise comparison between the central pixel and its neighborhood pixels is easily affected by noise. Furthermore, features extracted in a 3 × 3 neighborhood window cannot characterize large-scale structures and may lose detailed features. To address these limitations of LBP, an improved multiscale block LBP with a steering kernel (IMBLBP) is proposed and applied to extract the structural features of the HSI.

In traditional MBLBP, the comparison between the average intensity values of rectangular sub-regions replaces the comparison between single pixels in LBP. An MBLBP descriptor contains nine sub-regions, each of which is a square block whose side length is 2n + 1 pixels (when each block contains just a single pixel, MBLBP reduces to the basic LBP), as shown in Figure 2. Formally, the resulting MBLBP code is calculated as:

Figure 2. The 9×9 MB-LBP operator.

(1)  $\mathrm{MBLBP} = \sum_{p=0}^{7} s(\bar{g}_p - \bar{g}_c)\, 2^{p}$

where $\bar{g}_p = \frac{1}{s^2}\sum_{i=1}^{s^2} g_p(i)$ denotes the average intensity of the $p$-th neighborhood block, and $\bar{g}_c = \frac{1}{s^2}\sum_{i=1}^{s^2} g_c(i)$ represents the average intensity of the center block. The function $s(\cdot)$ is defined as follows:

(2)  $s(\bar{g}_p - \bar{g}_c) = \begin{cases} 1, & \bar{g}_p \ge \bar{g}_c \\ 0, & \bar{g}_p < \bar{g}_c \end{cases}$

Besides the local structures, MBLBP can characterize the global structures of an image by averaging pixels in the overlapping sub-regions. Note that the averaging strategy allocates the same weight to every pixel in a local sub-region and ignores the directions of edges in the image, so edges tend to be blurred by the averaging process.
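To make Equations (1)-(2) concrete, the sketch below computes an MBLBP code at one center position under common assumptions (s × s blocks, eight neighboring blocks visited in a fixed order, and $2^p$ weighting); it illustrates the block-averaging strategy rather than reproducing the authors' exact implementation.

```python
import numpy as np

def mblbp_code(img, row, col, s=3):
    """MBLBP code at (row, col): compare the mean intensities of the 8 neighbouring
    s-by-s blocks against the mean of the central s-by-s block (Eqs. 1-2)."""
    def block_mean(r, c):
        half = s // 2
        return img[r - half:r + half + 1, c - half:c + half + 1].mean()

    g_c = block_mean(row, col)
    # Offsets of the 8 neighbouring block centres (assumed ordering).
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s), (s, s), (s, 0), (s, -s), (0, -s)]
    code = 0
    for p, (dr, dc) in enumerate(offsets):
        g_p = block_mean(row + dr, col + dc)
        code |= int(g_p >= g_c) << p   # s(g_p - g_c) * 2^p
    return code

# Usage: codes for all valid interior positions of one principal component.
pc = np.random.rand(64, 64)
codes = [[mblbp_code(pc, r, c) for c in range(4, 60)] for r in range(4, 60)]
```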

Edge structure plays a critical role in the MBLBP algorithm, and the discriminative ability of the texture information is expected to improve when the edge direction is taken into account. In this paper, instead of the mean filtering strategy in MBLBP, the steering kernel is introduced into the MBLBP descriptor to learn the local structure prior of the HSI. Because the steering kernel accounts for both the edge direction and the radiometric similarity of pixels within a square block, this strategy makes the proposed IMBLBP more reliable than the basic MBLBP. Mathematically, the steering kernel is defined as:

(3)  $w_{ik} = \frac{\sqrt{\det(C_i)}}{2\pi h^2}\exp\!\left(-\frac{(x_i - x_k)^{T} C_i (x_i - x_k)}{2h^2}\right)$

where $h$ represents the smoothing parameter that controls the support of the kernel, and $x_i$ and $x_k$ denote the pixel coordinates. Here, $C_i$ represents a symmetric gradient covariance matrix calculated from a local square block $\omega_i$. To acquire a stable estimate of the covariance matrix, the parametric approach of (Takeda et al., Citation2007) is utilized; that is, the matrix $C_i$ is decomposed into three components as follows:

(4)  $C_i = \gamma_i U_{\theta_i} \Lambda_i U_{\theta_i}^{T}$
(5)  $U_{\theta_i} = \begin{bmatrix} \cos\theta_i & \sin\theta_i \\ -\sin\theta_i & \cos\theta_i \end{bmatrix}$
(6)  $\Lambda_i = \begin{bmatrix} \sigma_i & 0 \\ 0 & \sigma_i^{-1} \end{bmatrix}$

where $\gamma_i$ represents the scaling parameter, and $U_{\theta_i}$ and $\Lambda_i$ denote the rotation and elongation matrices, respectively. Here, the singular value decomposition of the local gradient matrix $G_i$ is used to determine the three parameters $\gamma_i$, $\theta_i$, and $\sigma_i$.

As mentioned before, instead of the averaging strategy in MBLBP, the proposed IMBLBP adopts a weighted averaging method that characterizes the edge-direction information by introducing the steering kernel. The normalized local steering kernel is used as the weight function, and then $\bar{g}_p$ and $\bar{g}_c$ in Equation (1) are recalculated as:

(7)  $\bar{g}_p = \sum_{k \in \omega_i} w_{ik}\, g_p(k)$
(8)  $\bar{g}_c = \sum_{k \in \omega_i} w_{ik}\, g_c(k)$

After the weighted average intensities of the neighborhood blocks and of the center block, i.e. $\bar{g}_p$ and $\bar{g}_c$, have been calculated, the function $s(\cdot)$ is still evaluated by Equation (2).
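The following sketch illustrates Equations (3), (7), and (8) for a single block. As a simplifying assumption, the gradient covariance $C_i$ is estimated directly as the regularized matrix $G^{T}G/n + \epsilon I$ rather than through the full parametric decomposition of Equations (4)-(6), and the kernel weights are normalized over the block before the weighted averaging; the parameters `h` and `eps` are illustrative values, not the authors' settings.

```python
import numpy as np

def steering_weights(block_gx, block_gy, h=2.4, eps=1e-4):
    """Normalised steering-kernel weights for one s-by-s block (Eq. 3).
    Simplification: C is the regularised gradient covariance G^T G / n + eps*I
    instead of the parametric form of Eqs. (4)-(6)."""
    s = block_gx.shape[0]
    G = np.stack([block_gx.ravel(), block_gy.ravel()], axis=1)      # n x 2 gradient matrix
    C = G.T @ G / G.shape[0] + eps * np.eye(2)                       # symmetric gradient covariance
    half = s // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    d = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)     # offsets x_i - x_k
    quad = np.einsum("ki,ij,kj->k", d, C, d)                         # (x_i - x_k)^T C (x_i - x_k)
    w = np.sqrt(np.linalg.det(C)) / (2 * np.pi * h**2) * np.exp(-quad / (2 * h**2))
    return (w / w.sum()).reshape(s, s)                               # normalised weights

def weighted_block_mean(block, weights):
    """Weighted average intensity of a block (Eqs. 7-8)."""
    return float((weights * block).sum())

# Usage on a synthetic 5 x 5 block: gradients via finite differences.
block = np.random.rand(5, 5)
gy, gx = np.gradient(block)
w = steering_weights(gx, gy)
g_bar = weighted_block_mean(block, w)
```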

Experimental results and discussions

Experimental setup

Dataset description

Three publicly available hyperspectral datasets are used to verify the superiority of the proposed method: the Indian Pines image, the Kennedy Space Center (KSC) image, and the Houston 2013 image.

The Indian Pines dataset was gathered by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over Northwestern Indiana, with spectral coverage ranging from 0.4 to 2.5 μm. The scene comprises 16 land-cover classes, has a size of 145 × 145 pixels, and a spatial resolution of 20 m per pixel. After eliminating 20 water absorption and noisy channels, the remaining 200 spectral bands are used for the experimental analysis. The class information and the number of samples used for training and testing for the Indian Pines dataset are shown in Table 1.

Table 1. Class labels and train-test distribution of samples for the Indian Pines dataset.

The second real-world HSI dataset was captured by the AVIRIS instrument over the KSC, Florida, in 1996, with a geometric resolution of 18 m per pixel. The image has a size of 512 × 614 pixels and 224 spectral channels, with wavelengths ranging from 0.4 to 2.5 μm. After discarding the low signal-to-noise-ratio and water absorption channels, the remaining 176 bands are utilized for the subsequent analysis. The class information and the number of samples used for training and testing for the KSC dataset are shown in Table 2.

Table 2. Class labels and train-test distribution of samples for the KSC dataset.

The third dataset was captured by the Compact Airborne Spectrographic Imager (CASI) sensor over the University of Houston campus and the neighboring urban area. The image consists of 144 bands of size 349 × 1905 pixels, with a spatial resolution of 2.5 m per pixel and spectral coverage from 0.38 to 1.05 µm. A total of 15,029 labeled samples are available, divided into 15 land-cover types. The class information and the number of samples used for training and testing for the Houston 2013 dataset are shown in Table 3.

Table 3. Class labels and train-test distribution of samples for the Houston 2013 dataset.
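All three train-test splits above follow the same recipe: a fixed number (or percentage) of labeled pixels is drawn at random from each class, and the rest are used for testing. The sketch below shows that sampling step; the label-map layout (a 2-D integer map with 0 denoting unlabeled pixels) is an assumption made for illustration.

```python
import numpy as np

def per_class_split(label_map, n_per_class=6, seed=0):
    """Randomly pick n_per_class training pixels from every labelled class.
    label_map: (H, W) integer map where 0 marks unlabelled pixels.
    Returns boolean (train, test) masks over the image grid."""
    rng = np.random.default_rng(seed)
    train = np.zeros(label_map.shape, dtype=bool)
    for cls in np.unique(label_map):
        if cls == 0:                        # skip unlabelled background
            continue
        rows, cols = np.nonzero(label_map == cls)
        k = min(n_per_class, len(rows))     # guard against very small classes
        pick = rng.choice(len(rows), size=k, replace=False)
        train[rows[pick], cols[pick]] = True
    test = (label_map > 0) & ~train
    return train, test

# Example: 6 training samples per class, as in the KSC experiments.
labels = np.random.randint(0, 14, size=(100, 100))
train_mask, test_mask = per_class_split(labels, n_per_class=6)
```

For the Indian Pines experiments, `n_per_class` would instead be derived from the 1% sampling ratio of each class.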

Parameter setting

In our experiments, three standard performance metrics, the overall accuracy (OA), the average accuracy (AA), and the kappa coefficient (κ), are adopted to quantify the quality of the different classification algorithms. We compare the proposed IMBLBP and MSF algorithms with several state-of-the-art classification techniques: the raw spectral features with the SVM classifier (SVM) (Melgani & Bruzzone, Citation2004), guided image filtering features with the SVM classifier (GF-SVM), bi-exponential edge-preserving smoothing features with the SVM classifier (BEEPS-SVM) (Wan & Zhao, Citation2019), decision-level fusion of LBP features, initial spectral features, and global Gabor features (DF-SVM) (W. Li et al., Citation2015), multiscale adaptive sparse representation (MASR) (Fang et al., Citation2014), spectral-spatial shared kernel ridge regression (SSSKRR) (C. Zhao et al., Citation2019), a spatial-spectral joint classification framework based on edge-preserving filtering (EPF-B-g) (Kang et al., Citation2014), local binary pattern features with the SVM classifier (LBP-SVM) (W. Li et al., Citation2015), a 3D convolutional neural network with logistic regression (3D-CNN-LR) (Y. Chen et al., Citation2016), and a spatial-spectral classification using locality preserving projection, local binary pattern, and the broad learning system (LPP-LBP-BLS) (G. Zhao et al., Citation2020). All compared algorithms use the parameter settings provided in the original papers.
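For reference, OA, AA, and κ can all be computed from the confusion matrix; the sketch below is one standard formulation (per-class accuracies are the diagonal divided by the row sums, and κ compares OA against the chance agreement).

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Overall accuracy (OA), average accuracy (AA) and kappa from the confusion matrix."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                                  # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))             # mean of the per-class accuracies
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2          # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Example
oa, aa, kappa = classification_metrics(np.array([1, 1, 2, 2, 3]), np.array([1, 2, 2, 2, 3]))
```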

Before performing the proposed IMBLBP and MSF methods, the parameters need to be set. To maximize the overall classification accuracy, 10-fold cross-validation is used to empirically tune the related parameters, i.e. the patch size W, the number of principal components K, the spatial parameter λ, and the range parameter σ. For IMBLBP, the number of principal components K is chosen from the set {3, 5, 7, 9, 11, 13, 15}, and the patch size W is chosen from the set {5, 9, 13, 17, 21, 25, 29, 33}. Here, K and W are set to K = 7 and W = 25 for the Indian Pines and KSC datasets, and to K = 11 and W = 21 for the Houston 2013 dataset. For MSF, the spatial parameter λ is chosen from the set {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, and the range parameter σ is chosen from the set {1, 2, 3, 4, 5, 6, 7, 8, 9}. For the Indian Pines dataset, λ and σ are roughly set to λ = 0.5 and σ = 7; for the KSC dataset, to λ = 0.8 and σ = 5; and for the Houston 2013 dataset, to λ = 0.7 and σ = 4. The training sets used in this parameter-selection process are consistent with those described previously, and the selection process is further explained in the next section.
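The tuning procedure described above can be reproduced with a plain grid search wrapped in 10-fold cross-validation on the training pixels; in the sketch below, `build_features` is a hypothetical hook standing in for the IMBLBP feature extraction at a given (K, W) pair.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_parameters(hsi, train_rows, train_cols, train_labels, build_features):
    """Grid-search the IMBLBP parameters (K, W) by 10-fold cross-validation on the
    training pixels only. build_features(hsi, K, W) is a hypothetical hook that
    returns an (H, W, F) feature cube for the given parameter pair."""
    best = (None, -np.inf)
    for K in [3, 5, 7, 9, 11, 13, 15]:
        for W in [5, 9, 13, 17, 21, 25, 29, 33]:
            feats = build_features(hsi, K, W)
            X = feats[train_rows, train_cols]          # features at the training pixels
            score = cross_val_score(SVC(kernel="rbf"), X, train_labels, cv=10).mean()
            if score > best[1]:
                best = ((K, W), score)
    return best
```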

Experimental results

Results on Indian Pines dataset

The first experiment is conducted on the Indian Pines dataset, which is commonly adopted and relatively difficult to classify. To analyze the small-sample-size problem, 1% of the labeled samples are randomly chosen from each class to compose the training set, and the remaining samples are employed as the testing set. The OA, AA, and κ metrics of the different methods are presented in Table 4. To make the comparison impartial, each experiment is executed 10 times, and the average classification accuracies are recorded in Table 4.

Table 4. Accuracy for every class (%), OA (%), AA (%), and κ of various classification methodologies for the Indian Pines image.

Compared with the spectral-based SVM classifier, the accuracy metrics generated by the various spatial-spectral analysis methodologies are significantly better. For instance, GF-SVM offers over 6.39% (0.0729) greater OA (κ) than SVM, and BEEPS-SVM offers over 10.48% (0.1216) greater OA (κ) than SVM. The reason for the high OAs of GF-SVM and BEEPS-SVM is that edge-preserving filtering can extract spatial-spectral information of the HSI while effectively reducing noise interference. In particular, DF-SVM acquires over 20.16% (0.2303) greater OA (κ) than SVM, mainly because the decision-level fusion-based SVM merges the advantages of a classifier ensemble of spectral features, LBP features, and Gabor features. As can be observed, the gains (in OA, AA, and κ) of MASR are more than 20% compared to SVM, because MASR effectively extracts complementary yet correlated information at multiple scales. From the results, SSSKRR provides over 16.84% (0.1930) greater OA (κ) than SVM, because it considers the non-linear separability between different objects by adopting kernel ridge regression and a nonlinear shared subspace.

Moreover, EPF-B-g yields superior performance to most of the compared methodologies, because post-processing filtering, which combines both spatial and spectral features, is extraordinarily useful for HSI classification. Note that the classification accuracy of 3D-CNN-LR is lower than that of the other classification methods; the main reason is that the large number of parameters in the deep learning model leads to overfitting when training samples are scarce, resulting in poor classification performance.

In particular, the OA (κ) values of LBP-SVM for the Indian Pines scene are 18.50% (0.2125) higher than those of SVM. The main reason is that the LBP operator encodes the image texture configuration while supplying local structure patterns. We also provide the experimental results of LPP-LBP-BLS for comparison. It is worth noting that IMBLBP is superior to LBP-SVM and LPP-LBP-BLS, mainly because the spatial features from IMBLBP are more effective for the classifier than the LBP features; in addition, feature extraction with IMBLBP can effectively alleviate the phenomena of "the same class having different spectra" and "different classes having the same spectrum". By contrast, the proposed MSF framework provides the best classification metrics compared with the other state-of-the-art algorithms, i.e. OA = 99.47%, AA = 99.03%, and κ = 99.40%. Among all 16 land-cover classes, MSF performs best in 15 classes and surpasses 99% accuracy in 13. This result can be explained by the fact that the classification accuracy of the classifier is markedly improved by integrating the spatial correlation among neighboring pixels in the pre-processing stage and adding a smoothing term in the post-processing stage.

Figure 3 presents the classification maps acquired by the different methodologies for the Indian Pines scene. From this figure, we can clearly observe that the classification maps resulting from SVM and 3D-CNN-LR are very poor, since many noisy estimates are visible; these maps are consistent with the classification results given in Table 4. By contrast, the MASR and SSSKRR methods produce cleaner classification maps than those obtained using spectral features only. It is also important to notice that the edge-preserving filtering-based methods can smooth the classification maps by characterizing spatial information, in contrast to SVM; moreover, the classification map of EPF-B-g contains more homogeneous regions when compared with the GF-SVM and BEEPS-SVM methods. This means that, compared with the GF and BEEPS features, edge-preserving filtering used as a post-processing approach is more helpful for the classification task. In addition, the LBP-based methods provide competitive classification maps owing to the properties of LBP features, especially in some small local regions, such as the alfalfa, grass-pasture-mowed, and oats classes. This can be explained by the fact that LBP is capable of capturing local structures of the HSI well. As expected, the classification map from our MSF framework presents the least noise and is closest to the ground-truth map. As a consequence, we deem that the classification performance of the classifier can be markedly improved by integrating multiscale block LBP features and context-aware information for class labels.

Figure 3. Classification maps on the Indian Pines scene. (a) three-band color composite image; (b) ground truth map, and the classification results generated by the (c) SVM, (d) GF-SVM, (e) BEEPS-SVM, (f) DF-SVM, (g) MASR, (h) SSSKRR, (i) EPF-B-g, (j) 3D-CNN-LR, (k) LPP-LBP-BLS, (l) LBP-SVM, (m) IMBLBP, and (n) MSF.

Results on the KSC dataset

Table 5 provides the statistical metrics of the different methods on the KSC dataset. In this experiment, 6 labeled samples are arbitrarily selected from every class to construct the training dataset, and the remaining samples are utilized for the experimental verification. From this table, it can be seen that the accuracy metrics of the spatial-spectral classification techniques (the GF-SVM, BEEPS-SVM, DF-SVM, MASR, EPF-B-g, and LPP-LBP-BLS methods) are markedly better than those of the pixel-wise SVM classifier, with an advantage of over 9% in OA, AA, and κ. In particular, the classification accuracy of 3D-CNN-LR is the worst of all the classification algorithms, mainly because the large number of parameters in the deep learning model leads to overfitting when training samples are scarce.

Table 5. Accuracy for every class (%), OA (%), AA (%), and κ of various classification methodologies for the KSC image.

As expected, compared with the basic LBP, the OA (κ) of IMBLBP increases from 90.77% (0.8973) to 91.21% (0.9021). This means that, by introducing the multiscale block and the steering kernel, the extracted overall structural information is more useful for the classification task than the LBP features. Obviously, the proposed MSF outperforms the other compared classification algorithms in terms of OA, AA, and κ, which signifies that adding context-aware information for each class label to the IMBLBP features can further enhance classification performance.

Figure 4 provides the classification maps of the different methods on the KSC dataset. From the result, we can draw at least two clear conclusions. First, the classification maps produced by the classifiers using spatial features are smoother than those using spectral features only. Second, MSF obtains the best classification map among all compared methodologies.

Figure 4. Classification maps on the KSC scene. (a) three-band color composite image; (b) ground truth map, and the classification results obtained by the (c) SVM, (d) GF-SVM, (e) BEEPS-SVM, (f) DF-SVM, (g) MASR, (h) SSSKRR, (i) EPF-B-g, (j) 3D-CNN-LR, (k) LPP-LBP-BLS, (l) LBP-SVM, (m) IMBLBP, and (n) MSF.

Results on Houston 2013 dataset

The classification results of all algorithms on the Houston 2013 dataset are presented in Table 6. In this experiment, five labeled samples are randomly chosen from every class to construct the training set, and the remaining samples are adopted to test the model. It can be observed that the performance obtained by the edge-preserving filtering techniques is much better than that with the original spectral features only. For example, GF-SVM offers over 3.5% higher accuracy than SVM, and BEEPS-SVM yields 3.9% higher accuracy than SVM. Moreover, EPF-B-g achieves much higher classification accuracy than the GF-SVM and BEEPS-SVM methods, which means that performing edge-preserving filtering in the post-processing stage is very useful for promoting classifier performance. From the results, the classification performance of the advanced 3D-CNN-LR methodology is not satisfactory compared with the other algorithms; the reason for its low OA is that the spatial features may lead to overfitting or even misclassification under the condition of limited training samples. Despite LBP being a highly discriminative spatial descriptor, the LBP-based algorithms (the LPP-LBP-BLS, LBP-SVM, and IMBLBP-SVM methods) still yield unsatisfactory performance, mainly because the spacing between different land-cover classes is relatively small and the distribution of the same land-cover class is too scattered in the Houston 2013 image, which leads to highly mixed pixels in the boundary regions and complicates the classification problem. It is worth noting that the proposed MSF algorithm not only produces the highest OA (94.25%), AA (95.26%), and κ (93.78%), but also surpasses the other methodologies in classification accuracy for most of the classes (e.g. Healthy grass, Stressed grass, Highway, Water, Railway, Parking lot 1, and Parking lot 2). This means that integrating context-aware information for each class label in the post-processing step is critical for enhancing the classification accuracy of the classifier.

Table 6. Accuracy for every class (%), OA (%), AA (%), and κ of various classification methodologies for the Houston 2013 image.

Figure 5 shows the false-color image, the ground truth, and the classification maps of all methodologies on the Houston 2013 dataset. In general, the classification map generated by SVM presents many noisy scattered points, while the edge-preserving filtering features with the SVM classifier mitigate this problem by eliminating noise interference. It is obvious that the 3D-CNN-LR, LPP-LBP-BLS, LBP-SVM, and IMBLBP-SVM algorithms produce poor classification maps, which present noise in some categories and show more misclassifications; roughly speaking, the classification maps are consistent with the classification results given in Table 6. Specifically, compared with the other methodologies, the classification map obtained by MSF has the least noise and the best homogeneity, and is closest to the ground-truth map. In general, the visual comparison in Figure 5 and the quantitative evaluation reported in Table 6 lead to a similar conclusion, which fully proves the superiority of the proposed method under the condition of small training samples.

Figure 5. Classification maps on the Houston 2013 scene. (a) three-band color composite image; (b) ground truth map, and the classification results obtained by the (c) SVM, (d) GF-SVM, (e) BEEPS-SVM, (f) DF-SVM, (g) MASR, (h) SSSKRR, (i) EPF-B-g, (j) 3D-CNN-LR, (k) LPP-LBP-BLS, (l) LBP-SVM, (m) IMBLBP, and (n) MSF.

Evaluation of the influence of parameters

Effect of the IMBLBP parameters

Before performing the proposed MSF method, the proper parameters of IMBLBP need to be evaluated first. To be fair, the remaining model parameters are fixed apart from K and W, and the average OA over 10 repetitions is reported for each experiment. Figure 6 illustrates the average OA versus the number of selected bands. It can be observed that the OA of IMBLBP rises dramatically as the number of bands increases; as the number of bands further increases, the OA rises slowly or becomes relatively stable. Note that more selected bands increase the computational complexity and running time. Therefore, the number of selected bands K is roughly set to K = 7 for the Indian Pines and KSC datasets. For the Houston 2013 dataset, the OA of IMBLBP rises dramatically as the number of bands increases and then slowly decreases as the number of bands further increases, possibly because more redundant information is introduced with additional bands. In this case, K is roughly set to K = 11 for the Houston 2013 dataset, which provides a good compromise between computation time and classification accuracy.

Figure 6. Analysis of the influence of the number of selected bands on overall accuracy using the IMBLBP method for (a) Indian Pines, (b) KSC, and (c) Houston 2013 datasets.

Figure 7 shows the average OA versus the patch size. Similarly, with the increase of the patch size W, the OA of IMBLBP increases sharply and then gradually stabilizes. IMBLBP reaches the maximum OA with patch size W = 25 for the Indian Pines and KSC datasets. For the Houston 2013 dataset, with the increase of the patch size W, the OA of IMBLBP increases sharply and then gradually decreases; here, IMBLBP reaches the maximum OA with patch size W = 21.

Figure 7. Analysis of the influence of different patch sizes on overall accuracy using the IMBLBP method for (a) Indian Pines, (b) KSC, and (c) Houston 2013 datasets.

Effect of the filtering parameters

Figure 8 shows the OA of the proposed MSF algorithm versus the smoothing parameter λ and the range parameter σ. To avoid any bias, the remaining model parameters are fixed apart from λ and σ, and the classification results are averaged over 10 trials. From Figure 8, we observe that the OA of MSF is sensitive to the parameters λ and σ, and the OA is not strictly increasing or decreasing as λ and σ grow; commonly, as λ and σ grow, the OA of MSF rises dramatically and then declines. Accordingly, the parameters λ and σ are roughly selected as follows. For the Indian Pines dataset, the MSF algorithm produces the highest OA when the smoothing parameter is set to λ = 0.5 and the range parameter to σ = 7. For the KSC dataset, the MSF algorithm obtains the highest OA when λ = 0.8 and σ = 5. For the Houston 2013 dataset, the MSF algorithm obtains the highest OA when λ = 0.7 and σ = 4.

Figure 8. Analysis of the influence of the range parameter and the spatial parameter on overall accuracy using the MSF method for (a) Indian Pines, (b) KSC, and (c) Houston 2013 datasets.
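To make the roles of λ and σ concrete, the sketch below shows a one-dimensional bi-exponential edge-preserving recursion in the spirit of BEEPS: a forward and a backward exponential pass whose smoothing strength is modulated by a Gaussian range kernel. The way the two passes are combined is an assumption and may differ from the exact BEEPS normalization used by the authors; a 2-D probability map would be processed by applying the pass along rows and then along columns.

```python
import numpy as np

def beeps_1d(x, lam=0.5, sigma=7.0):
    """One horizontal pass of a bi-exponential edge-preserving smoother.
    lam  : spatial decay (larger -> stronger smoothing),
    sigma: photometric range parameter (larger -> less edge preservation).
    Assumed combination of the forward/backward recursions, not necessarily
    the exact normalisation of the original BEEPS formulation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rng = lambda a, b: np.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))   # range kernel

    phi = np.empty(n)          # forward (progressive) pass
    phi[0] = x[0]
    for k in range(1, n):
        r = rng(x[k], phi[k - 1]) * lam
        phi[k] = (1.0 - r) * x[k] + r * phi[k - 1]

    psi = np.empty(n)          # backward (regressive) pass
    psi[-1] = x[-1]
    for k in range(n - 2, -1, -1):
        r = rng(x[k], psi[k + 1]) * lam
        psi[k] = (1.0 - r) * x[k] + r * psi[k + 1]

    # Combine both passes so the current sample is counted once.
    return (phi + psi - (1.0 - lam) * x) / (1.0 + lam)

# Sanity checks: a constant signal is unchanged; a step edge is largely preserved.
print(beeps_1d(np.ones(10)))                                   # -> all ones
print(beeps_1d(np.r_[np.zeros(5), np.ones(5)], lam=0.7, sigma=0.1))
```

A larger λ smooths more aggressively within homogeneous regions, while a smaller σ makes the range kernel collapse at large intensity jumps, which is what preserves edges.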

Investigation on the effect of the training sample size

Figure 9 analyzes the effect of different training-set sizes on all methods. The training size is chosen from the ranges [1%, 3%, 5%, 7%, 9%], [3, 6, 9, 12, 15], and [5, 10, 15, 20, 25] for the Indian Pines, KSC, and Houston 2013 datasets, respectively. As shown in Figure 9, as the training size increases, the performance of all compared methods increases significantly. It is obvious that the proposed MSF framework always yields the highest classification accuracy among the compared classification methods. The advantage of MSF is most evident when training samples are scarce: with 1% of the labeled samples randomly chosen for the Indian Pines dataset and six training samples per class for the KSC dataset, MSF still achieves over 99.4% and 99.5% overall accuracy, respectively. For the Houston 2013 dataset, the OAs of MSF are higher than those of the other classification methods for every training size: with only 5 training samples per class, the OA of MSF reaches 94.25%, and with 25 training samples per class, it reaches 99.85%. This verifies that the proposed method not only generates good classification with small sample sizes but also maintains good performance with larger sample sizes, which further demonstrates its usefulness.

Figure 9. Overall accuracy with various numbers of training samples per class for the (a) Indian Pines, (b) KSC, and (c) Houston 2013 datasets.

Conclusion

In this paper, a simple yet effective multi-strategy fusion framework has been developed for spatial-spectral HSI classification. The proposed framework extracts multiscale block overall structural information in the pre-processing stage, where the steering kernel is introduced into MBLBP for the first time to enhance the encoding performance. Then, BEEPS is employed to smooth the soft-classification probability maps in the post-processing stage, which further improves classification accuracy by considering context-aware information for each class label. The experimental results on three real hyperspectral datasets demonstrate that the proposed MSF is superior to the compared methods. Additionally, a major advantage is that MSF obtains excellent classification results for different training-set sizes. Even with a very small training set (e.g. 1% of the labeled samples for the Indian Pines dataset), the proposed MSF obtains approximately 13.85% higher overall accuracy than the advanced MASR, 14.23% higher than EPF-B-g, and 17.02% higher than the recently proposed LPP-LBP-BLS. Although the proposed MSF method generates the best classification results for the datasets in different situations, it still has some shortcomings.

  1. The proposed algorithm only uses principal component analysis to reduce the dimensionality of the HSI, ignoring the impact of other transform-based or band-selection-based dimensionality reduction methods on classification performance.

  2. The IMBLBP operator is capable of capturing the spatial correlation among neighboring pixels; however, it cannot fully characterize the joint spectral-spatial structure without considering the cubic structure of HSI.

Therefore, based on the shortcomings listed previously, the focus of our future research is as follows:

  1. Reducing the number of spectral channels and improving the classification performance through optimized band selection.

  2. Fully analyzing the cubic structure of HSI and characterizing the joint spectral-spatial structure by utilizing a 3-D LBP model.

Acknowledgments

This research was supported by the Hengyang Normal University Fund Project (2022QD07).

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Akbari, D. (2019). Improved neural network classification of hyperspectral imagery using weighted genetic algorithm and hierarchical segmentation. IET Image Processing, 13(12), 2169–16. https://doi.org/10.1049/iet-ipr.2018.5693
  • Anand, R., Veni, S., Geetha, P., & Subramoniam, S. R. (2021). Extended morphological profiles analysis of airborne hyperspectral image classification using machine learning algorithms. International Journal of Intelligent Networks, 2, 1–6. https://doi.org/10.1016/j.ijin.2020.12.006
  • Aviara, N. A., Liberty, J. T., Olatunbosun, O. S., Shoyombo, H. A., & Oyeniyi, S. K. (2022). Potential application of hyperspectral imaging in food grain quality inspection, evaluation and control during bulk storage. Journal of Agriculture and Food Research, 8, 100288. https://doi.org/10.1016/j.jafr.2022.100288
  • Cao, X., Wang, X., Wang, D., Zhao, J., & Jiao, L. (2020). Spectral-spatial hyperspectral image classification using cascaded Markov random fields. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(12), 4861–4872. https://doi.org/10.1109/JSTARS.2019.2938208
  • Cao, X., Zhou, F., Xu, L., Meng, D., Xu, Z., & Paisley, J. (2018). Hyperspectral image classification with markov random fields and a convolutional neural network. IEEE Transactions on Image Processing, 27(5), 2354–2367. https://doi.org/10.1109/TIP.2018.2799324
  • Chen, Y., Jiang, H., Li, C., Jia, X., & Ghamisi, P. (2016). Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Transactions on Geoscience & Remote Sensing, 54(10), 6232–6251. https://doi.org/10.1109/TGRS.2016.2584107
  • Chen, Y., Lin, Z., Zhao, X., Wang, G., & Gu, Y. (2014). Deep learning-based classification of hyperspectral data. IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing, 7(6), 2094–2107. https://doi.org/10.1109/JSTARS.2014.2329330
  • Chen, C., & Liu, Z. (2018). Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Transactions on Neural Networks and Learning Systems, 29(1), 10–24. https://doi.org/10.1109/TNNLS.2017.2716952
  • Chen, Y., Nasrabadi, N. M., & Tran, T. D. (2011). Hyperspectral image classification using dictionary-based sparse representation. IEEE Transactions on Geoscience & Remote Sensing, 49(10), 3973–3985. https://doi.org/10.1109/TGRS.2011.2129595
  • Ergul, U., & Bilgin, G. (2020). MCK-ELM: Multiple composite kernel extreme learning machine for hyperspectral images. Neural Computing and Applications, 32(11), 6809–6819. https://doi.org/10.1007/s00521-019-04044-9
  • Fang, L., Li, S., Kang, X., & Benediktsson, J. A. (2014). Spectral-spatial hyperspectral image classification via multiscale adaptive sparse representation. IEEE Transactions on Geoscience & Remote Sensing, 52(12), 7738–7749. https://doi.org/10.1109/TGRS.2014.2318058
  • Fauvel, M., Tarabalka, Y., Benediktsson, J. A., Chanussot, J., & Tilton, J. C. (2013). Advances in spectral-spatial classification of hyperspectral images. Proceedings of the IEEE, 101(3), 652–675. https://doi.org/10.1109/JPROC.2012.2197589
  • Fu, W., Li, S., Fang, L., Kang, X., & Benediktsson, J. A. (2016). Hyperspectral image classifcation via shape-adaptive joint sparse representation. IEEE Journal of Selected Topics Applied Earth Observations and Remote Sensing, 9(2), 556–567. https://doi.org/10.1109/JSTARS.2015.2477364
  • Halidou, A., You, X., Hamidine, M., Etoundi, R. A., Diakite, L. H. (2014). Fast pedestrian detection based on region of interest and multi-block local binary pattern descriptors. Computers & Electrical Engineering, 40(8), 375–389. https://doi.org/10.1016/j.compeleceng.2014.10.003
  • Ham, J., Chen, Y., Crawford, M. M., & Ghosh, J. (2005). Investigation of the random forest framework for classification of hyperspectral data. IEEE Transactions on Geoscience & Remote Sensing, 43(3), 492–501. https://doi.org/10.1109/TGRS.2004.842481
  • Hao, Q., Sun, B., Li, S., Crawford, M. M., & Kang, X. (2022). Curvature filters-based multiscale feature extraction for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 60(1), 1–16. https://doi.org/10.1109/TGRS.2021.3091860
  • Hupel, T., & Stütz, P. (2022). Adopting hyperspectral anomaly detection for near real-time camouflage detection in multispectral imagery. Remote Sensing, 14(15), 3755. https://doi.org/10.3390/rs14153755
  • Jiang, X., Zhang, Y., Liu, W., Gao, J., Liu, J., Zhang, Y., & Lin, J. (2020). Hyperspectral image classification with CapsNet and Markov random fields. IEEE Access, 8, 191956–191968. https://doi.org/10.1109/ACCESS.2020.3029174
  • Kang, X., Li, S., & Benediktsson, J. A. (2014). Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Transactions on Geoscience and Remote Sensing: A Publication of the IEEE Geoscience and Remote Sensing Society, 52(5), 2666–2677. https://doi.org/10.1109/TGRS.2013.2264508
  • Kumar, B., & Dikshit, O. (2017). Spectral contextual classification of hyperspectral imagery with probabilistic relaxation labeling. IEEE Transactions on Cybernetics, 47(12), 4380–4391. https://doi.org/10.1109/TCYB.2016.2609882
  • Kumar, B. L. N. P., & Manoharan, P. (2021). Whale optimization-based band selection technique for hyperspectral image classification. International Journal of Remote Sensing, 42(13), 5109–5147. https://doi.org/10.1080/01431161.2021.1906979
  • Liao, J., Wang, L., Zhao, G., & Hao, S. (2019). Hyperspectral image classification based on bilateral filter with linear spatial correlation information. International Journal of Remote Sensing, 40(17), 6861–6883. https://doi.org/10.1080/01431161.2019.1597301
  • Liao, S., Zhu, X., Len, Z., Zhang, L., & Li, S. Z. (2007). Learning multi-scale block local binary patterns for face recognition. Advances in Biometrics, 4642, 828–837. https://doi.org/10.1007/978-3-540-74549-5_87
  • Li, W., Chen, C., Su, H., & Du, Q. (2015). Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Transactions on Geoscience & Remote Sensing, 53(7), 3681–3693. https://doi.org/10.1109/TGRS.2014.2381602
  • Li, S., Kang, X., & Hu, J. (2013). Image fusion with guided filtering. IEEE Transactions on Image Processing, 22(7), 2864–2875. https://doi.org/10.1109/TIP.2013.2244222
  • Li, Y., Tang, H., Xie, W., & Luo, W. (2022). Multidimensional local binary pattern for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 60(1), 1–13. https://doi.org/10.1109/TGRS.2021.3069505
  • Liu, H., Li, J., He, L., & Wang, Y. (2019). Superpixel-guided layer-wise embedding CNN for remote sensing image classification. Remote Sensing, 11(2), 174. https://doi.org/10.3390/rs11020174
  • Liu, J., Zhang, K., Wu, S., Shi, H., Zhao, Y., Sun, Y., Zhuang, H., & Fu, E. (2022). An investigation of a multidimensional CNN combined with an attention mechanism model to resolve small-sample problems in hyperspectral image classification. Remote Sensing, 14(785–1), 785–17. https://doi.org/10.3390/rs14030785
  • Li, Y., Xu, J., Xia, R., Wang, X., & Xie, W. (2019). A two-stage framework of target detection in high-resolution hyperspectral images. Signal, Image and Video Processing, 13(7), 1339–1346. https://doi.org/10.1007/s11760-019-01470-z
  • Lv, J., Zhang, H., Yang, M., & Yang, W. (2020). A novel spectral-spatial based adaptive minimum spanning forest for hyperspectral image classification. GeoInformatica, 24(4), 827–848. https://doi.org/10.1007/s10707-020-00403-0
  • Majdar, R. S., & Ghassemian, H. (2020). A probabilistic framework for weighted combination of multiple-feature classifications of hyperspectral images. Earth Science Informatics, 13(1), 55–69. https://doi.org/10.1007/s12145-019-00411-1
  • Manoharan, P., & Boggavarapu, P. (2021). Improved whale optimization based band selection for hyperspectral remote sensing image classification. Infrared Physics & Technology, 119(103949–1), 103948–16. https://doi.org/10.1016/j.infrared.2021.103948
  • Melgani, F., & Bruzzone, L. (2004). Classification of hyperspectral remote sensing images with support vector machines. IEEE Transactions on Geoscience & Remote Sensing, 42(8), 1778–1790. https://doi.org/10.1109/TGRS.2004.831865
  • Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583), 607–609. https://doi.org/10.1038/381607a0
  • Ou, X., Zhang, Y., Wang, H., Tu, B., Guo, L., Zhang, G., & Xu, Z. (2020). Hyperspectral image target detection via weighted joint k-nearest neighbor and multitask learning sparse representation. IEEE Access, 8, 11503–11511. https://doi.org/10.1109/ACCESS.2019.2962875
  • Peng, J., Zhou, Y., & Chen, C. L. P. (2015). Region-kernel-based support vector machines for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing: A Publication of the IEEE Geoscience and Remote Sensing Society, 53(9), 4810–4824. https://doi.org/10.1109/TGRS.2015.2410991
  • Ratle, F., Camps-Valls, G., & Weston, J. (2009). Semi-supervised neural networks for efficient hyperspectral image classification. IEEE Transactions on Geoscience & Remote Sensing, 48(5), 2271–2282. https://doi.org/10.1109/TGRS.2009.2037898
  • Sawant, S. S., Prabukumar, M., Loganathan, A., Alenizi, F. A., & Ingaleshwar, S. (2022). Multi-objective multi-verse optimizer based unsupervised band selection for hyperspectral image classiffcation. International Journal of Remote Sensing, 43(11), 3990–4024. https://doi.org/10.1080/01431161.2022.2105666
  • Shi, L., Li, C., Li, T., & Peng, Y. (2022). A complementary spectral-spatial method for hyperspectral image classification. IEEE Transactions on Geoscience & Remote Sensing, 60(5531017–1), 5531017–17. https://doi.org/10.1109/TGRS.2022.3180935
  • Singh, M. K., Mohan, S., & Kumar, B. (2021). Hyperspectral image classification using deep convolutional neural network and stochastic relaxation labeling. Journal of Applied Remote Sensing, 15(4), 042612-1: 042612–21. https://doi.org/10.1117/1.JRS.15.042612
  • Sun, H., Zheng, X., & Lu, X. (2021). A supervised segmentation network for hyperspectral image classification. IEEE Transactions on Image Processing, 30, 2810–2825. https://doi.org/10.1109/TIP.2021.3055613
  • Takeda, H., Farsiu, S., & Milanfar, P. (2007). Kernel regression for image processing and reconstruction. IEEE Transactions Image Process, 16(2), 349–366. https://doi.org/10.1109/TIP.2006.888330
  • Tong, L., Zhang, J., & Ye, Z. (2015). Classification of hyperspectral image based on deep belief networks. IEEE International Conference on Image Processing(ICIP), Paris, France (pp. 5132–5136).
  • Uddin, M. P., Mamun, M. A., Afjal, M. I., & Hossain, M. A. (2021). Information-theoretic feature selection with segmentation-based folded principal component analysis (PCA) for hyperspectral image classification. International Journal of Remote Sensing, 42(1), 286–321. https://doi.org/10.1080/01431161.2020.1807650
  • Wang, Z., Hu, H., Zhang, L., & Xue, J. H. (2017). Discriminatively guided filtering (DGF) for hyperspectral image classification. Neurocomputing, 275, 1981–1987. https://doi.org/10.1016/j.neucom.2017.10.046
  • Wan, X., & Zhao, C. (2019). Spectral-spatial hyperspectral image classification combining multi-scale bi-exponential edge-preserving filtering and susan edge detector. Infrared Physics & Technology, 102, 103055. https://doi.org/10.1016/j.infrared.2019.103055
  • Xing, Q., Chen, C., & Li, Z. (2021). Progressive path tracing with bilateral-filtering-based denoising. Multimedia Tools and Applications, 80(1), 1529–1544. https://doi.org/10.1007/s11042-020-09650-7
  • Yan, J., Chen, H., Zhai, Y., Liu, Y., & Liu, L. (2019). Region-division-based joint sparse representation classification for hyperspectral images. IET Image Processing, 13(10), 1694–1704. https://doi.org/10.1049/iet-ipr.2018.6667
  • Yang, S., Hou, J., Jia, Y., Mei, S., & Qian, D. (2019). Pseudolabel guided kernel learning for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(3), 1000–1010. https://doi.org/10.1109/JSTARS.2019.2895070
  • Yang, W., Peng, J., Sun, W., & Du, Q. (2020). Log-euclidean kernel-based joint sparse representation for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(12), 5023–5034. https://doi.org/10.1109/JSTARS.2019.2952408
  • Ye, Z., Dong, R., Bai, L., Jin, C., & Nian, Y. (2020). Hyperspectral image classification based on segmented local binary patterns. Sensing and Imaging, 21(1), 1–16. https://doi.org/10.1007/s11220-020-0274-7
  • Zhang, J., Dai, L., & Cheng, F. (2021). Identification of corn seeds with different freezing damage degree based on hyperspectral reflectance imaging and deep learning method. Food Analytical Methods, 14(2), 389–400. https://doi.org/10.1007/s12161-020-01871-8
  • Zhao, L., Han, Z., & Luo, Y. (2022). Robust discriminative broad learning system for hyperspectral image classification. Optoelectronics Letters, 18(7), 444–448. https://doi.org/10.1007/s11801-022-2043-4
  • Zhao, C., Liu, W., Xu, Y., & Wen, J. (2019). Hyperspectral image classification via spectral-spatial shared kernel ridge regression. IEEE Geoscience and Remote Sensing Letters, 16(12), 1874–1878. https://doi.org/10.1109/LGRS.2019.2913884
  • Zhao, G., Wang, X., & Cheng, Y. (2020). Hyperspectral image classification based on local binary pattern and broad learning system. International Journal of Remote Sensing, 41(24), 9393–9417. https://doi.org/10.1080/01431161.2020.1798553