Research Article

Scene-level buildings damage recognition based on Cross Conv-Transformer

Pages 3987-4007 | Received 05 Jul 2023, Accepted 15 Sep 2023, Published online: 28 Sep 2023

ABSTRACT

Different from pixel-based and object-based image recognition, a larger, scene-based perspective can improve the efficiency of assessing large-scale building damage. However, the complexity of disaster scenes and the scarcity of datasets are major challenges in identifying building damage. To address these challenges, the Cross Conv-Transformer model is proposed to classify and evaluate the degree of damage to buildings using aerial images taken after an earthquake. We employ Conv-Embedding and Conv-Projection to extract features from the images. The integration of convolution and Transformer reduces the computational burden of the model while enhancing its feature extraction capability. Furthermore, a two-branch Conv-Transformer architecture with global and local attention is designed, allowing each branch to focus on global and local features, respectively. A cross-attention fusion module merges feature information from the two branches to enrich the classification features. Finally, we use aerial images captured after the Beichuan and Yushu earthquakes as training and test sets to assess the model. The proposed Cross Conv-Transformer model improves classification accuracy by 4.7% and 2.1% compared to ViT and EfficientNet, respectively. The results show that the Cross Conv-Transformer model significantly reduces misclassification between the severely and moderately damaged categories.

1. Introduction

Earthquakes pose a significant threat to human society worldwide, resulting in substantial environmental damage, casualties, and property losses (Chen, Wang, and Xiao Citation2018). Accurately assessing and mapping building damage after an earthquake is crucial for the prompt and precise allocation of rescue resources (Duarte, Nex, and Kerle Citation2020).

From the perspective of data, various methods are available for extracting building damage information from remote sensing images, including optical data (Fan et al. Citation2019b), synthetic aperture radar (Adriano et al. Citation2019), and LiDAR data (Wang and Li Citation2020). Medium-resolution optical satellite images can provide an overview of the damage caused by earthquake disasters on a large scale, but their limited resolution prevents the detection of finer-scale earthquake damage (Fan et al. Citation2021). Aerial imagery, by contrast, provides clearer and more detailed information because of its better spatial resolution, making it the most effective data source for classifying damaged buildings after an earthquake. In terms of damage detection methods, both single-phase and multi-phase classification approaches are used. Multi-phase methods primarily rely on detecting changes between pre- and post-disaster data to identify damage information (Akhmadiya et al. Citation2020). In contrast, single-phase methods utilize only post-disaster data (Settou, Kholladi, and Ben Ali Citation2022). Multi-phase methods face the challenge of obtaining data of consistent quality before and after the disaster, which requires meticulous and time-consuming preprocessing. Single-phase methods circumvent issues caused by differences in acquisition periods, weather conditions, and background factors that can significantly affect the accuracy of image classification. Hence, single-phase remote sensing image information extraction proves to be the more effective approach.

Machine learning has been used to extract disaster loss information from remote sensing images. Bialas et al. utilized the random forest method to extract buildings from high-resolution aerial images (Bialas, Oommen, and Havens Citation2019). The results showed that machine learning algorithms can maintain a relatively stable segmentation effect for a given task as long as the features used for classification are correctly selected. However, this approach requires manual feature design and encounters difficulties in model training (Naito et al. Citation2020; Mangalathu et al. Citation2020). Convolutional neural networks (CNN), as powerful deep learning structures, can automatically extract rich hierarchical features from satellite images. Consequently, several remote sensing image methods based on CNN frameworks have been developed (Ma et al. Citation2019; Zhu et al. Citation2017). Gebrehiwot et al. adopted a VGG convolutional neural network architecture to classify UAV images acquired after a flood disaster (Gebrehiwot et al. Citation2019). The experiments revealed that the deep convolutional neural network is superior to the support vector machine (SVM) classifier in flood area classification. Similarly, Yang et al. used a variety of CNN models with transfer learning to identify earthquake-damaged buildings. Among them, DenseNet121 achieved the best performance in the classification task. However, the recognition accuracy remained below 90%, and the models did not classify the levels of damage to collapsed buildings (Yang, Zhang, and Luo Citation2021). To address this issue, Prashath et al. developed a lightweight CNN network for extracting damage information from UAV images, achieving a model accuracy of 91% (Prashath, Priyadharshini, and Lakshmi Citation2021). However, the receptive field of a CNN is limited, and increasing the depth of the model to enlarge the receptive field often results in information loss. Some researchers improve the identification accuracy of building damage from the dataset perspective. Wang et al. developed a building damage classification method that includes building localization and addresses the imbalanced sample distribution (Wang, Alvin Wei, and Zhang Citation2022). The results show that the architecture identifies building damage well. Moreover, other researchers have used an attention-based strategy to classify damaged buildings at the pixel level (Liu et al. Citation2022). Shen et al. introduced a cross-directional attention module to explore the correlation between pre-disaster and post-disaster images, proposing a two-stage convolutional neural network called BDANet for building damage assessment. The model achieved state-of-the-art performance on the xBD dataset (Shen et al. Citation2022). Shi et al. proposed an improved YOLOv4 model to detect collapsed buildings using only aerial image datasets acquired after the Beichuan and Yushu earthquakes, and the extraction accuracy reached 93.76% (Shi et al. Citation2021). However, compared with the scene perspective, all these methods are complex because they require first determining the footprint of each building. Furthermore, when assessing post-disaster building damage, relying solely on the object and pixel perspectives often leads to inaccurate positioning of detection frames, fragmented pixel classification results, and inefficient model training.

Transformer has achieved remarkable results in the field of natural language processing (NLP) (Vaswani et al., Citation2017). Transformer has several advantages over CNN, including parallel computing, global vision, and flexible stacking. Additionally, it can capture global context information, establish long-range dependencies, and extract more powerful features. For instance, Hong reevaluated hyperspectral image classification using the Transformer sequence perspective and introduced a new backbone network called SpectralFormer, which achieved high accuracy in hyperspectral image classification (Hong et al. Citation2021). Jia et al. presented a new multi-scale convolution embedding module for hyperspectral images to efficiently extract spatial spectral information. This module can be effectively combined with the Transformer to leverage unlabeled data for training (Jia and Wang Citation2022). Thus, harnessing the potential of the Transformer can significantly enhance the models’ capability to identify scenes depicting damage caused by natural disasters, even in complex backgrounds.

The primary contributions of this paper are as follows:

  1. An aerial image dataset is created for the classification of scenes depicting damaged buildings using data augmentation and noise addition.

  2. A Cross Conv-Transformer model is proposed that leverages the strengths of CNN’s feature extraction and Transformer’s global attention capabilities by incorporating both global and local attention.

  3. We propose the Cross Attention in the two-branch model, which offers the advantages of linear computation and memory to filter features from different branches.

The rest of this article is organized as follows: Section 2 introduces the Transformer mechanism in detail and its most recent applications in disaster information extraction. Section 3 introduces the study area, the dataset creation process, and the details of our method. Section 4 introduces the experimental design, evaluation metrics, and results. Section 5 discusses the results and future research directions. Section 6 presents the conclusions of the paper.

2. Related work

As a result of the successful application of Transformers in speech recognition and machine translation, Transformer-based networks have also found their way into the field of computer vision (Radford and Tim Citation2021). The pioneering network that fully embraces the Transformer architecture for image classification is the Vision Transformer (ViT) (Dosovitskiy et al. Citation2020). In this model, the input image is segmented into fixed-size image patches, which are then converted into a sequence through linear projection. Position embeddings are added to the sequence, and the resulting representation is passed through the Multi-Head Self-Attention mechanism for global attention modeling. Subsequently, the output is forwarded to the head module for classification. Research findings demonstrate that the ViT model achieves classification accuracy on the ImageNet dataset comparable to CNN-based classification models. For instance, Bazi employed the ViT model to classify remote sensing scene datasets (Bazi et al. Citation2021). The results underscore the ViT model's proficiency in extracting multi-channel characteristics from remote sensing images and accurately discerning them.

As illustrated in Figure 1, the ViT model consists of three main components: an embedding layer, an encoder, and the final head. Initially, the input image is divided into fixed-size image blocks, and each block is flattened into a one-dimensional vector. These flattened patches are then converted into tokens through a linear projection step, with an additional class token introduced to encode category information. Notably, the linear projection causes the loss of each image patch's positional information with respect to the original image. To address this, positional embedding is applied to the tokens after the linear projection. This positional embedding enables the acquisition of relative position information among the patches by computing the cosine similarity between them; consequently, patches sharing the same rows or columns exhibit high similarity. Subsequently, all the patch tokens are fed into the Transformer encoder and, finally, the MLP head for classification.
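To make the embedding step concrete, the following is a minimal sketch in PyTorch of the ViT tokenization just described, assuming a 224 × 224 input, a 16 × 16 patch size, and a 768-dimensional embedding (illustrative values, not taken from the paper). The strided convolution is equivalent to flattening each patch and applying a linear projection; in a trained model the class token and positional embedding would be learned parameters rather than zeros.

```python
import torch
import torch.nn as nn

img = torch.randn(1, 3, 224, 224)
patch, dim = 16, 768
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # linear projection of each 16x16 patch
tokens = to_tokens(img).flatten(2).transpose(1, 2)              # (1, 196, 768) patch tokens
cls_token = torch.zeros(1, 1, dim)                              # extra class token encoding category information
tokens = torch.cat([cls_token, tokens], dim=1)                  # (1, 197, 768)
pos_embed = torch.zeros(1, tokens.shape[1], dim)                # positional embedding (learned in practice)
tokens = tokens + pos_embed                                     # restores positional information lost by the projection
```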

Figure 1. Diagram of the Vision Transformer (ViT).


The most crucial component of the Transformer architecture is the Transformer Encoder (Voita et al., Citation2020), which comprises the Multi-head Self Attention (MSA) module and the Feed Forward Network (FFN) (Mangan and Alon Citation2003; Xiong et al. Citation2020). The MSA module, serving as the core of the Transformer, consists of a linear layer, self-attention layer, and concatenation layer. The process begins by converting a clipped 2D image into a vector, denoted as X, after passing through the linear projection layer and incorporating position information. Subsequently, three weight matrices, initialized for the query (Q), key (K), and value (V), are introduced. By multiplying these matrices with the vector X, the MSA module identifies the information with the highest weight through a dot product operation among Q, K, and V. This operation establishes the global connections among all image blocks, enabling the identification of the relative importance of a patch embedding compared to others in the sequence. Consequently, the MSA module determines the focal point of the visual task by establishing the center of attention.
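As a concrete illustration of the dot-product operation among Q, K, and V described above, here is a minimal single-head sketch in PyTorch; tensor sizes and weight matrices are illustrative, and the multi-head version simply repeats this with h sets of projections and concatenates the results.

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    # x: (num_patches, dim) -- patch embeddings X with position information added
    q, k, v = x @ w_q, x @ w_k, x @ w_v                       # query, key, value projections
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # pairwise dot products between patches
    attn = scores.softmax(dim=-1)                             # relative importance of each patch w.r.t. the others
    return attn @ v                                           # weighted aggregation over all image blocks

dim = 64
x = torch.randn(196, dim)                                     # e.g. 14 x 14 flattened patch tokens
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)                        # same shape as the input sequence
```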

Research on natural disaster information extraction based on Transformer has been gaining increasing attention. Ahan et al introduced the Flood-Transformer model, which represents the first visual transformer-based model capable of detecting and segmenting flood areas from aerial images (Roy et al., Citation2022). This model employed the SWOC Flood method to segment the dataset and achieved a superior mean Intersection over Union (mIoU) score of 0.93, surpassing other existing methods. Furthermore, Amir et al proposed the SiamixFormer model, which consists of two transformer encoders. It takes both pre- and post-disaster images as input (Mohammadian and Ghaderi Citation2022). The outputs from each stage in both encoders are fed into a temporal converter for feature fusion, which generates queries, keys, and values from the pre and post-disaster images. Additionally, the model incorporates temporal features in the fusion process. The SiamixFormer model was evaluated on the xBD buildings disaster dataset for buildings change detection and demonstrated superior performance compared to state-of-the-art models.

Although the Transformer has shown promising results in image classification, it requires more computational resources than CNN models. The CNN structure leverages spatial sub-sampling and weight sharing to capture information that is invariant to shifting, scaling, and distortion, an advantage that is not present in the Transformer model. Furthermore, the hierarchical structure of convolution allows the model to consider different levels of local spatial context in damaged buildings, from simple low-level edge features to high-level texture and semantic information (C. F. Chen et al. Citation2019). To address the limitations of the Transformer and enhance its performance, recent studies have proposed various variants of the Vision Transformer (ViT). Some of these approaches incorporate distillation techniques for data-efficient training of visual transformers (Touvron et al. Citation2020), while others combine the pyramid structure of CNN to leverage its benefits (W. Wang et al. Citation2021). Among them, the approach of integrating CNN and Transformer features offers the benefits of straightforward design and efficient training, establishing it as a current research focal point. For instance, Rosso et al. employed a combination of ViT and ResNet to create an AI-driven framework for automated hierarchical classification of road tunnel defects, with the aim of improving the efficiency of this potent indirect measurement approach (Rosso et al. Citation2023). Zhang et al. introduced a purely data-driven deep learning model, EPT, to mine potential crustal and tectonic movement patterns from global historical earthquake catalog data. By employing multi-head self-attention from ViT, it captures long-term dependency relationships within regional time series, highlighting the connections between salient features and mitigating the challenges faced by Long Short-Term Memory (LSTM) networks in focusing on long-term information in extended time series (Zhang et al. Citation2023).

In contrast to the aforementioned methods, we present a novel two-branch model that incorporates both local and global attention mechanisms to extract multi-scale features. Our approach aims to leverage the strengths of both convolutional and transformer models in image classification, as well as harness the effectiveness of multi-scale feature fusion in visual tasks.

3. Materials and methods

3.1. Material

On May 12, 2008, a magnitude 8.0 earthquake struck Wenchuan, Sichuan Province. On April 14, 2010, a magnitude 7.1 earthquake occurred in Yushu, Qinghai Province. Both earthquakes caused a considerable number of buildings to collapse, as well as numerous deaths and major economic losses. Beichuan County was one of the most severely affected places in the Wenchuan earthquake. Most masonry structures in Beichuan were damaged in various ways, including wall cracking, partial collapse, and complete collapse. In our research, remote sensing images captured by aerial photography on the second day after the Yushu and Beichuan earthquakes were chosen as the data source, with an image resolution of 0.5 m. The geographical location and aerial images of the study areas are shown in Figure 2. The selected images cover the entire city areas and contain a substantial number of damaged and undamaged buildings, providing good data support for deep learning training.

Figure 2. Location of the study areas.


The main reasons for selecting post-earthquake aerial images of Beichuan and Yushu as the image data source are: (1) The structures of most buildings in Beichuan and Yushu are different. Most of the damaged buildings in Beichuan are masonry structures, while most of the damaged buildings in Yushu are concrete structures, and damaged masonry and concrete structures appear differently in remote sensing images. (2) The backgrounds of the two regions are quite different. The vegetation in the Beichuan area is relatively lush, so the buildings are often surrounded by vegetation. Yushu is located in an area with sparse vegetation, and the color of damaged buildings is usually similar to the surrounding background, making accurate classification difficult. Therefore, selecting data from these two places for training and testing helps verify the model's robustness.

Furthermore, due to the severe damage caused by the earthquakes in Beichuan and Yushu, most buildings were in contact with each other, without clear boundaries. Therefore, in the dataset design, group building scenes were chosen as the units for recognizing overall damage levels. This approach also enhances the efficiency of large-scale building damage recognition.

According to the seismic damage assessment standards issued by the State Seismological Administration of China and the actual building damage observed in the two earthquakes, we focus on the levels of building damage. The sample categories were defined through visual interpretation, expert knowledge, and on-site investigation, and the damaged buildings were divided into three levels based on the damage rate of the buildings in the images, as depicted in Table 1, where the damage rate is the ratio of the number of damaged buildings to the total number of buildings.

Table 1. Classification of building damaged by aerial images after earthquake.

Due to the dense distribution of buildings in the Beichuan and Yushu areas, it is challenging to distinguish the damage category of individual buildings because of their interleaved and overlapping nature. There is no clear demarcation between building fragments, further complicating the identification process. Additionally, severe damage was concentrated in specific areas after the two earthquakes. To address these challenges in post-earthquake building damage scene identification, we propose a sample generation method that uses groups of buildings as the unit of scene identification.

To ensure the quality of the generated samples, we follow a step-by-step approach. Firstly, we utilize the administrative boundary vector of Beichuan and Yushu to extract the aerial image of the region of interest. Next, the road vector data of the same areas are used to divide the aerial image into blocks. We employ a fixed sliding window method to extract slice images from each block. To select a slice image as a sample, we require that the building area within the slice image exceeds 50% of the entire image area. This criterion ensures that all sample images contain a sufficient number of building samples.
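The sliding-window sampling rule described above can be sketched as follows (Python/NumPy). Only the 50% building-area criterion comes from the text; the window size, stride, and the binary building-mask representation are assumptions for illustration.

```python
import numpy as np

def slice_samples(image, building_mask, size=224, stride=224, min_ratio=0.5):
    """Yield (row, col, patch) for windows whose building area exceeds min_ratio."""
    h, w = building_mask.shape
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            window = building_mask[r:r + size, c:c + size]
            if window.mean() > min_ratio:                 # building pixels cover > 50% of the slice
                yield r, c, image[r:r + size, c:c + size]

image = np.zeros((1000, 1000, 3), dtype=np.uint8)         # a block of the aerial image (toy data)
mask = np.random.rand(1000, 1000) > 0.4                   # toy binary building mask for the block
samples = list(slice_samples(image, mask))                # retained slice images for this block
```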

This research classifies the damage degree of group damaged buildings into three levels, based on the actual damage observed in the two earthquakes. The classification process relies on the overall and local image features of the group buildings following the earthquake, employing a block assessment method.

In the classification process, the collapse rate of all buildings within each block is assessed comprehensively. This assessment provides an indication of the damage degree of the buildings within the block. Subsequently, all image slice samples within the block are labeled according to the corresponding damage category. Table 2 illustrates the block-level collapse rate, which represents the proportion of the number or area of collapsed buildings to the total number or area of the entire block.

Table 2. Example images of classification instances.

Finally, to account for the large number of buildings in the study area and the memory requirements for model training, the sample partition size within each block is set to 224 × 224 pixels. We divide the dataset into four categories; Table 3 depicts the number distribution of each part of the sample set. There are 5560 severely damaged, 5272 moderately damaged, 5165 slightly damaged, and 5046 negative samples, for a total of 21,043 samples. To ensure the model's comprehensive grasp of damaged building attributes across diverse locations, we carefully curated an equal number of sample images for each building damage category from the datasets of the two earthquake disasters. This approach bolsters sample representativeness and mitigates the challenges arising from imbalanced sample categories during model training.

Table 3. Distribution of the sample set.

3.2. Methods

In previous studies, modifications were made to the Transformer block by incorporating convolution, either by replacing the multi-head attention with a convolutional layer or by introducing an additional convolutional layer within the Transformer sequence structure to capture local relationships (Gulati et al. Citation2020). In contrast, our approach is inspired by recent advancements that introduce convolution into the Transformer network in two key aspects of the Vision Transformer (Wu et al. Citation2021). Firstly, we employ convolution instead of the existing linear embedding for performing attention operations. Secondly, we design a hierarchical structure that generates patch tokens with different resolutions. This approach significantly reduces the computational load of the linear projection in the Vision Transformer (ViT) and enhances the efficiency of model training.

As illustrated in Figure 3, the Conv-Transformer model comprises three stages, each consisting of two components: Conv-Token Embedding and Conv-Transformer. In the initial stage, instead of using the embedding operation of the ViT model, the image is fed into the Conv-Token Embedding layer, which reshapes the tokens into a two-dimensional spatial sequence to be processed by the subsequent layers. A normalization layer is then applied to the tokens, enabling the Conv-Transformer structure to progressively reduce the number of tokens at each stage while widening their dimensionality. This achieves spatial down-sampling and enhances the richness of the feature representation (Touvron et al. Citation2020; Wu et al. Citation2021).

Figure 3. The flowchart of the Conv-Transformer structure.


Subsequently, in the Conv-Transformer part, convolution is employed to perform Conv-Projection operations. These operations create the embeddings for the query, key, and value (Yuan et al. Citation2021). It is worth noting that the class token is only added in the final stage. Lastly, the predictions for the samples’ classes are made using the MLP (Fully Connected Layer) head.

To enhance the feature extraction capability, we incorporate the convolution operation into the Transformer network by utilizing the Conv-Token Embedding layer and Conv-Projection within the multi-head self-attention module.

To be specific, the Conv-Token Embedding operation aims to capture local spatial context information, ranging from low-order edge details to high-order semantic information. It follows a multi-stage hierarchical approach similar to a CNN. In this operation, an image or token map from the previous stage is input, and the Conv-Token Embedding operation is performed in the subsequent stage to generate a new token map. The resulting token map is then flattened into a one-dimensional vector and passed on to the subsequent Transformer part. The Conv-Token Embedding layer allows the dimensions and number of tokens at each stage to be adjusted by manipulating the convolution parameters. As depicted in Figure 3, in stage 1 the Conv-Token Embedding uses a convolution kernel size c = 7, a number of convolutions s = 64, and a stride p = 4. In stage 2, the convolution kernel size is c = 3, the number of convolutions s = 192, and the stride p = 2. In stage 3, the convolution kernel size is c = 3, the number of convolutions s = 384, and the stride p = 2. By applying the Conv-Token Embedding layer in each stage, the token sequence length is reduced while the token dimensionality is increased. This enhances the ability of each layer's tokens to represent complex visual patterns across a large spatial range.
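A minimal PyTorch sketch of the three Conv-Token Embedding stages follows, using the kernel sizes, channel counts, and strides quoted above; the padding values and input size are assumptions. It shows how the token map shrinks spatially while its dimensionality grows.

```python
import torch
import torch.nn as nn

stages = nn.ModuleList([
    nn.Conv2d(3,   64,  kernel_size=7, stride=4, padding=2),   # stage 1: c=7, s=64,  p=4
    nn.Conv2d(64,  192, kernel_size=3, stride=2, padding=1),   # stage 2: c=3, s=192, p=2
    nn.Conv2d(192, 384, kernel_size=3, stride=2, padding=1),   # stage 3: c=3, s=384, p=2
])

x = torch.randn(1, 3, 224, 224)
for conv in stages:
    x = conv(x)                                 # spatial down-sampling, wider tokens
    tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C) sequence fed to the Conv-Transformer part
    tokens = nn.LayerNorm(tokens.shape[-1])(tokens)
print(x.shape)   # token map shrinks while the channel dimension grows: (1, 384, 14, 14)
```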

Figure 4 illustrates the implementation details of the Conv-Projection in our Conv-Transformer structure. Initially, the tokens are transformed into a two-dimensional tensor. Subsequently, Conv-Projection is performed using a convolution operation with a kernel size (S) of 3. The number of convolutions employed in this operation is identical to that of the Conv-Token Embedding used in the corresponding stage. Finally, the tokens, after undergoing Conv-Projection, are flattened into a one-dimensional sequence containing the query, key, and value components. This processed token sequence then proceeds to the subsequent Conv-Transformer stage for further processing.
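The Conv-Projection step can be sketched as below (PyTorch): tokens are reshaped into a two-dimensional map, convolved with an S = 3 kernel, and flattened back into a sequence. The use of a depthwise convolution and the token-map size are assumptions for illustration; in the model the same operation also produces the key and value projections.

```python
import torch
import torch.nn as nn

def conv_projection(tokens, h, w, conv):
    b, n, c = tokens.shape
    x = tokens.transpose(1, 2).reshape(b, c, h, w)   # 1-D token sequence -> 2-D token map
    x = conv(x)                                      # S = 3 convolution over the token map
    return x.flatten(2).transpose(1, 2)              # back to a 1-D sequence (query/key/value)

c = 192
conv_q = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c)   # depthwise 3x3 conv (assumed variant)
tokens = torch.randn(1, 28 * 28, c)                            # stage-2 token map of size 28 x 28
q = conv_projection(tokens, 28, 28, conv_q)                    # k and v are produced likewise
```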

Figure 4. The flowchart of the Conv-Projection structure.


By incorporating Conv-Token Embedding and Conv-Projection into each stage, we have devised the Conv-Transformer structure (Graham et al. Citation2021). This design eliminates the need for a separate position embedding module, thereby simplifying the design of visual tasks that involve variable input resolutions. The Conv-Transformer structure effectively captures local spatial context and enables the model to handle varying input sizes without the reliance on explicit position information.

Furthermore, the incorporation of multi-branch methods in CNN networks has been shown to be effective in capturing features at different scales, thereby enhancing feature richness (Shocher et al. Citation2020). This approach has found success in various computer vision tasks, including object detection and recognition (Knyaz, Kniaz, and Remondino Citation2018). For instance, Fan et al. proposed a two-branch feature extraction network architecture called bLVNet-TAM, which achieved promising results in video action recognition tasks (Fan et al. Citation2019a). While the utilization of multi-scale feature representations has been well-established in CNN models, its application in Transformers is relatively limited. Therefore, in our research, we adopt a two-branch Transformer structure to classify and analyze the scenes of damaged buildings after earthquakes, leveraging the benefits of multi-scale feature extraction.

On the other hand, the size of the token patch affects both the accuracy and the complexity of ViT. For example, when the patch size is 16, ViT performs 6% better than with a patch size of 32, but it uses more storage resources. Therefore, we take advantage of finer-grained patches while balancing complexity by introducing a two-branch Transformer. Each branch operates on a distinct scale (patch size in the patch embedding), and a simple and effective module is proposed to fuse information between the branches.

In summary, we design a two-branch model: (1) the Big-branch employs a larger patch size, more Transformer encoders, and a larger embedding size; (2) the Small-branch has a smaller patch size, fewer encoders, and a lower embedding dimension. After merging the outputs of the two branches, the CLS tokens of the two branches are employed for prediction.

Moreover, to enable the model to capture both global and local information within the image across the two branches, we propose the incorporation of global and local attention mechanisms. These mechanisms are derived from the original Self Attention model and are implemented separately in the Big and Small branches, respectively.

In Figure 5(a), the Self Attention mechanism divides the image into fixed-size patches and applies an attention mechanism to capture features between each patch. However, this approach often focuses on only a small portion of the total image area, resulting in redundant computations and potential interference from irrelevant features. To address these issues, we introduce Local Attention in the Conv-Transformer part of each stage within the Big-branch. Here, the token features after Conv-Embedding and Conv-Projection are mapped to (H/L × W/L, L × L, C) vectors, as illustrated in Figure 5(b). The token vectors are divided into L × L windows, and Self Attention is applied within each window, resulting in attention dimensions of (H/L × W/L). Additionally, to enhance feature representation, Global Attention is employed in the Small-branch. As depicted in Figure 5(c), we employ a G × G uniform grid on the token vectors (G × G, H/G × W/G, C), followed by Self Attention within this sparse global grid. By utilizing local windows and a global dilution grid (L = G = 4), our approach effectively captures information from both local and global perspectives while keeping the computation balanced between the two. Importantly, these methods exhibit linear complexity with respect to spatial size or sequence length, thereby reducing computational complexity. However, the previous two-branch model simply concatenates information from the two branches and feeds it to the subsequent classifier. This approach fails to consider the correlation and information redundancy between the branches, resulting in reduced classification efficiency and performance. To address this limitation, we introduce the Cross Attention module, which fuses information from the two branch transformers.
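The two partitioning schemes can be expressed as simple tensor reshapes; the sketch below (PyTorch) shows the window partition used by Local Attention in the Big-branch and the sparse grid partition used by Global Attention in the Small-branch, with L = G = 4 as in the text and an illustrative 16 × 16 token map.

```python
import torch

B, H, W, C, L, G = 1, 16, 16, 192, 4, 4
x = torch.randn(B, H, W, C)                      # token map reshaped to 2-D

# Local Attention: non-overlapping L x L windows -> (B*H/L*W/L, L*L, C);
# self-attention then runs inside each window only.
windows = (x.reshape(B, H // L, L, W // L, L, C)
             .permute(0, 1, 3, 2, 4, 5)
             .reshape(-1, L * L, C))

# Global Attention: a sparse G x G grid -> (B*G*G, H/G*W/G, C);
# each group gathers tokens spread evenly over the whole image.
grid = (x.reshape(B, H // G, G, W // G, G, C)
          .permute(0, 2, 4, 1, 3, 5)
          .reshape(-1, (H // G) * (W // G), C))
```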

Figure 5. Self Attention (a), Local Attention (b) and Global Attention (c). (Local Attention can only obtain the information of the image in the window, while Global attention can pay attention to the information of the entire image).


In order to effectively fuse and integrate information from the two-scale Conv-Transformer branches, we adopt a Cross Attention token fusion approach in our study (Chen, Fan, and Panda Citation2021). The underlying concept of the Cross Attention module is illustrated in Figure 6: it involves the interaction between the class token of one branch and the patch tokens of the other branch. To facilitate the integration of multi-scale features, we utilize the class token of each branch as a representative entity to exchange information with the patch tokens of the other branch, and subsequently incorporate this information back into its own branch. Given that the class token has acquired abstract knowledge shared by all patch tokens, the interaction with a patch token from another branch contributes to capturing data of diverse scales (Huang et al. Citation2020). Once the class token merges with the other branch's tokens in the subsequent Transformer encoder, it interacts with its own patch tokens. This allows the class token to pass information from the other branch to its own patch tokens, thereby enhancing the representation of each patch token.

Figure 6. Cross Attention implementation in details (Only the Class tokens are fused because the Class tokens represent all the information of the branch patches).


Figure 7 shows the details of the Cross Attention method. Specifically, the Big-branch first collects the patch tokens from the Small-branch and then concatenates them with its class token, as shown in equation (1). Let $X^{i}$ be the token sequence of branch $i$ (including patch and CLS tokens), where $i$ denotes either the Big-branch ($l$) or the Small-branch ($s$), and let $X^{i}_{cls}$ denote the class token of branch $i$.

$$X^{m} = \big[\, f^{l}(X^{l}_{cls}) \,\|\, X^{s}_{patch} \,\big] \tag{1}$$

where $f^{l}(\cdot)$ is a function used for alignment of dimensions. Then, because the information from the patch tokens is fused into the class token, Cross Attention (CA) is applied between the class token and the patch tokens, with the class token as the sole query. CA can be stated mathematically as follows:

$$q = X^{m}_{cls} W_{q}, \quad k = X^{m} W_{k}, \quad v = X^{m} W_{v} \tag{2}$$

$$A = \operatorname{softmax}\!\left(\frac{q k^{T}}{\sqrt{C/h}}\right), \qquad \mathrm{CA}(X^{m}) = A v \tag{3}$$

$$y^{l}_{cls} = f^{l}(X^{l}_{cls}) + \mathrm{MCA}\!\left(\mathrm{LN}\big(\big[\, f^{l}(X^{l}_{cls}) \,\|\, X^{s}_{patch} \,\big]\big)\right) \tag{4}$$

$$z^{l} = \big[\, g^{l}(y^{l}_{cls}) \,\|\, X^{l}_{patch} \,\big] \tag{5}$$

Figure 7. Cross Attention implementation in detail (The class token in Big-branch interacts with the patch tokens in Small-branch as query, and the class token in Small-branch also performs the same operation to complete the interaction of the feature information between the two branches).

$W_{q}$, $W_{k}$ and $W_{v}$ are learnable parameters, and $C = 192$ and $h = 6$ are the embedding dimension and the number of heads. Because only the class token is used as the query, the computational and memory complexity of generating the attention map are linear, which improves the overall efficiency of the process. In addition, just as with self-attention in the Transformer, we use multiple heads in the Cross Attention (MCA). Equation (4) expresses the Cross Attention mechanism with layer normalization and a residual connection, where $f^{l}(\cdot)$ and $g^{l}(\cdot)$ are the dimension-alignment projection and back-projection functions, respectively. After the Cross Attention module, we normalize the output and add it to the input of the previous layer to form a residual connection, yielding the final result $z^{l}$ in equation (5).
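For clarity, equations (1)-(5) can be traced in a short single-head sketch (PyTorch). The embedding dimensions, patch counts, and single-head simplification are assumptions; the paper uses C = 192 with h = 6 heads and multi-head Cross Attention.

```python
import torch
import torch.nn as nn

C_big, C_small, N = 384, 192, 196              # illustrative dimensions and patch count
f_l = nn.Linear(C_big, C_small)                # f^l(.): dimension alignment
g_l = nn.Linear(C_small, C_big)                # g^l(.): back projection
W_q, W_k, W_v = (nn.Linear(C_small, C_small, bias=False) for _ in range(3))
norm = nn.LayerNorm(C_small)

cls_big = torch.randn(1, 1, C_big)             # class token of the Big-branch
patch_small = torch.randn(1, N, C_small)       # patch tokens of the Small-branch
patch_big = torch.randn(1, N, C_big)           # patch tokens of the Big-branch

cls_proj = f_l(cls_big)                                             # f^l(X^l_cls)
x_m = norm(torch.cat([cls_proj, patch_small], dim=1))               # eq. (1): X^m, with layer normalization
q = W_q(cls_proj)                                                   # eq. (2): the class token is the sole query
k, v = W_k(x_m), W_v(x_m)
attn = (q @ k.transpose(-2, -1) / C_small ** 0.5).softmax(dim=-1)   # eq. (3), single head for brevity
y_cls = cls_proj + attn @ v                                         # eq. (4): residual cross attention
z_big = torch.cat([g_l(y_cls), patch_big], dim=1)                   # eq. (5): fused class token rejoins its branch
```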

Figure 8 presents the structure of the proposed Cross Conv-Transformer network, incorporating the modules discussed above. The network is composed of two branches: the Big-branch and the Small-branch. In the Big-branch, the input images are divided into patches of size 16 × 16 pixels, while in the Small-branch, the images are divided into patches of size 12 × 12 pixels. The original Transformer encoder is replaced with the previously proposed Conv-Transformer structure. Each branch is further divided into three stages, where each stage consists of a Conv-Embedding and a Conv-Transformer part. The Conv-Embedding operation divides the input image into small patch sequences, which are then processed by the subsequent Conv-Transformer part to extract global features. The Big-branch has 1, 4, and 16 (N4, N5, N6) Conv-Transformer parts in the three stages, respectively, while the Small-branch has 1, 2, and 10 (N1, N2, N3) Conv-Transformer parts. After each stage, a new token map is generated, gradually reducing in size and increasing in dimension. The token map obtained in the third stage is transformed into a sequence through layer normalization and then fed into the previously designed Cross Attention module to fuse the information learned by the two branches, allowing the model to capture global and local features. Finally, the fused class token is selected and passed through the MLP head for image scene classification.
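A compact, hedged sketch of this two-branch flow is given below (PyTorch). Standard Transformer encoder layers stand in for the Conv-Transformer parts, the embedding dimensions are illustrative, and the class-token fusion is reduced to a simple concatenation; only the patch sizes (16 and 12) and the stage depths (1, 4, 16 and 1, 2, 10) follow the text.

```python
import torch
import torch.nn as nn

def make_branch(patch, dim, depths):
    # stand-in encoder layers replace the Conv-Transformer parts for brevity
    embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
    blocks = nn.Sequential(*[nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
                             for d in depths for _ in range(d)])
    return embed, blocks

big = make_branch(16, 384, (1, 4, 16))     # Big-branch: larger patches, more encoders
small = make_branch(12, 192, (1, 2, 10))   # Small-branch: smaller patches, fewer encoders

def run(embed, blocks, dim, img):
    tokens = embed(img).flatten(2).transpose(1, 2)            # patch tokens
    tokens = torch.cat([torch.zeros(1, 1, dim), tokens], 1)   # prepend a class token
    return blocks(tokens)

img = torch.randn(1, 3, 224, 224)
big_out, small_out = run(*big, 384, img), run(*small, 192, img)
# Cross Attention (see the earlier sketch) would fuse the two class tokens; here they
# are simply concatenated before the MLP head to keep the sketch short.
logits = nn.Linear(384 + 192, 4)(torch.cat([big_out[:, 0], small_out[:, 0]], dim=-1))
```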

Figure 8. Cross Conv-Transformer network structure. (The network is divided into two branches, each branch is composed of a different number of Conv-Transformer. The output of the last two branches is sent to the Cross-Attention module for information fusion and filtering, and finally to the MLP for classification).


4. Experimental results and analysis

4.1. Evaluation metrics

The experimental results of the baseline model and our proposed Cross Conv-Transformer model are evaluated using the Overall Accuracy (OA) and confusion matrix. OA is calculated as the ratio of the number of correctly classified images to the total number of images in the test set after completing model training (Foody Citation2020). OA serves as the primary performance metric for characterizing the image classification performance of the model, with values ranging from 0 to 1. Higher values indicate better classification performance.

Additionally, the confusion matrix provides detailed information about the correct and misclassified instances for each class (Xu, Zhang, and Miao Citation2020). It is a tabular representation in which rows represent the actual class of the instances and columns represent the predicted class. Each element Xij in the matrix represents the number of images that actually belong to the ith category and are predicted as the jth category.
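The two metrics can be computed directly from the matrix; the small example below uses a made-up 4 × 4 confusion matrix purely to show the calculation of overall accuracy as the trace divided by the total count, not the paper's results.

```python
import numpy as np

cm = np.array([[350,  20,   5,   0],    # rows: actual class, columns: predicted class (toy values)
               [ 15, 330,  25,   0],
               [  5,  30, 340,  10],
               [  0,   0,   5, 449]])
oa = np.trace(cm) / cm.sum()            # correctly classified images / total images
print(f"Overall Accuracy = {oa:.4f}")
```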

4.2. Experiment setting

A GeForce RTX 3080 GPU was used in this experiment. The settings and parameters were adjusted gradually during the training phase. The maximum number of training epochs was set to 200, the optimizer was Adam, and the batch size was 12. The initial learning rate was set to 1e-4 and was gradually reduced during training. The momentum was set to 0.85 and the weight decay coefficient to 0.005. During training, the loss function was monitored, and training was stopped when the loss no longer improved. The trained model weights were then saved for later use during verification and evaluation. Throughout training, the evaluation metrics were recorded to assess the performance of the model.
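For reference, a hedged sketch of this training configuration in PyTorch is shown below. The model, the data pass, the learning-rate schedule, and the early-stopping patience are placeholders or assumptions; only the epoch limit, optimizer, batch size, initial learning rate, and weight decay values come from the text.

```python
import torch
import torch.nn as nn

model = nn.Linear(224 * 224 * 3, 4)      # placeholder standing in for the Cross Conv-Transformer
# Adam in PyTorch has no classic momentum term; mapping the reported 0.85 to its
# first-moment decay (beta1) is an assumption here.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.85, 0.999), weight_decay=0.005)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)    # assumed gradual lr-reduction scheme
criterion = nn.CrossEntropyLoss()
batch_size, max_epochs, patience = 12, 200, 10                       # patience value is an assumption

best_loss, bad_epochs = float("inf"), 0
for epoch in range(max_epochs):
    epoch_loss = float(torch.rand(1))    # placeholder for one pass over the training loader
    scheduler.step(epoch_loss)
    if epoch_loss < best_loss:
        best_loss, bad_epochs = epoch_loss, 0
        torch.save(model.state_dict(), "best_weights.pt")    # weights kept for later evaluation
    elif (bad_epochs := bad_epochs + 1) >= patience:          # stop when the loss no longer improves
        break
```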

4.3. Results

In order to compare the proposed method with pure CNN and pure Transformer models, ResNet, EfficientNet, and ViT are selected as benchmarks, as they are representative CNN and Transformer architectures. We evaluate the performance of the models using the loss and accuracy curves during training and the classification confusion matrices.

After multiple training iterations, the average accuracies of the ResNet, EfficientNet, ViT, and Cross Conv-Transformer models ultimately reached 91.34%, 94.89%, 93.64%, and 97.61%, respectively. The accuracy and loss curves during the training phase are depicted in Figure 9.

Figure 9. Precision and loss curve of the four models during training.


It is evident that the Cross Conv-Transformer model consistently outperforms both ViT and EfficientNet throughout the entire training process. Notably, our proposed model exhibits higher accuracy at the beginning and end of training, demonstrates a narrower fluctuation range in the accuracy curve, achieves faster accuracy improvement, and requires a shorter training period. Conversely, the ViT model exhibits higher loss values when training approaches convergence. Furthermore, it is worth noting that EfficientNet outperforms ViT, as the pure Transformer model requires a substantial amount of training data to fully leverage its advantages.

In our study, we employed 95% of the data (19,459 samples) for training and validation. To ensure balanced data categories, an equal amount of data was assigned to each category in the training set. Subsequently, 5% of the data (1584 samples) was reserved for testing. Notably, the test set consisted of a larger number of positive samples (damaged buildings) than negative samples. This decision was made based on the classification task's objective, considering the importance of identifying damaged buildings. The confusion matrices, depicting each model's performance against the ground truth, are presented in Tables 4-7. Analysis of the confusion matrices demonstrates that our model outperforms the baseline models.

Table 4. Confusion matrix of the ResNet.

Table 5. Confusion matrix of the ViT.

Table 6. Confusion matrix of the EfficientNet.

Table 7. Confusion matrix of the Cross Conv-Transformer.

In the confusion matrices of the baseline models, ResNet yielded the lowest accuracy, reaching merely 91.34%. This can be attributed to its straightforward single-branch convolutional stacking, which led the model to discard numerous essential feature details. Specifically, 33 severely damaged samples were inaccurately categorized as moderately damaged, while 27 and 25 moderately damaged samples were erroneously classified as severely damaged and lightly damaged, respectively. Additionally, 21 and 14 lightly damaged samples were incorrectly grouped into the moderately damaged and severely damaged classes.

Specifically, the ViT model exhibits classification errors primarily between severely damaged and moderately damaged samples, as well as between moderately damaged and slightly damaged samples. Among the misclassifications, 22 severely damaged samples were classified as moderately damaged, 30 and 16 moderately damaged samples were misclassified as severely damaged and slightly damaged, and 31 slightly damaged samples were misclassified as moderately damaged.

Similarly, EfficientNet performs slightly better than ViT but still exhibits some misclassifications. Notably, 18 severely damaged samples were misclassified as moderately damaged, 8 and 10 moderately damaged samples were misclassified as severely damaged and slightly damaged, and 31 slightly damaged samples were misclassified as moderately damaged.

In contrast, our proposed model (Table 7), which combines the strengths of CNN and Transformer, demonstrates superior performance. It effectively reduces misclassifications between moderate damage and severe damage, particularly improving the model's sensitivity in classifying moderate damage. This improvement is particularly meaningful in the classification of building damage scenes after an earthquake.

In summary, an analysis of the changes in the training loss and accuracy curves shows that the Cross Conv-Transformer model converges faster during the training phase. Compared to the baseline models, the Cross Conv-Transformer model achieves an earlier reduction in training loss, and when the loss curve reaches a plateau, it attains higher accuracy than ViT, ResNet, and EfficientNet. These findings highlight the lightweight nature of our proposed model, which not only facilitates convenient training but also exhibits stronger feature extraction capability.

Moreover, examining the classification confusion matrices of the models on the earthquake building damage dataset, we observe that misclassification is more prevalent between severe damage and moderate damage. This can be attributed to the limited disparity in image texture and shape between severely damaged and moderately damaged buildings, as well as the minimal contrast between the background and the color of the damaged structures. Nevertheless, the Cross Conv-Transformer model outperforms both ViT and EfficientNet by minimizing misclassifications.

As illustrated in Figure 10, integrating the recognition outcomes of building damage scenes with geographic information from image slices enables a more accurate evaluation of damage conditions in different locations following disasters. In this study, we selected the blocks within the primary urban area of Yushu as the fundamental units for earthquake disaster assessment. The seismic image slices were divided into individual blocks, with the predominant sample category in each block representing the corresponding earthquake damage category. The building damage classification standard referred to methods used in research on similar regions (Zhao et al. Citation2013). The experimental results demonstrate that, out of the 137 blocks in Yushu's main urban area, the Cross Conv-Transformer model correctly detected the building damage level in 133 blocks. The accuracy of block-level damage identification reached 97.08%, with a Kappa coefficient of 0.82, surpassing the second-best performing EfficientNet model by 4.35%. False detections mainly occurred between neighborhoods classified as moderately damaged and lightly damaged.
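The block-level aggregation described above amounts to a majority vote over the slice predictions within each block; a minimal sketch follows, with purely illustrative block identifiers and predictions.

```python
from collections import Counter

slice_predictions = {            # block id -> predicted class of each slice in the block (toy data)
    "block_001": ["severe", "severe", "moderate", "severe"],
    "block_002": ["slight", "moderate", "moderate"],
}
block_damage = {blk: Counter(preds).most_common(1)[0][0]   # predominant category labels the block
                for blk, preds in slice_predictions.items()}
print(block_damage)   # {'block_001': 'severe', 'block_002': 'moderate'}
```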

Figure 10. Block level distribution map of earthquake damage degree in Yushu.


Subsequently, we conducted ablation experiments to assess the efficacy of the two-branch architecture and the Cross Attention component within our proposed framework. Furthermore, to elucidate the impact of our designed attention and feature fusion mechanism in building damage scene recognition, we employed Grad-CAM to visualize the attention feature heat maps. The intensity of the heat map corresponds to the level of importance attributed by the model.

Table 8 presents the results of the ablation experiments. Initially, we employed a single-branch structure, which yielded a performance reduction of 2.62% compared to the two-branch structure. Subsequently, by incorporating Cross Attention as the feature merging component within the two-branch structure, the accuracy of the scene classification task improved by 0.82% compared to simply merging the features from the two branches. These findings highlight the effectiveness of the two-branch framework in providing a richer combination of local and global information, consequently enhancing the classification performance of the model. Furthermore, the use of Cross Attention facilitates better integration of features at different scales, surpassing the performance of simple stacking operations.

Table 8. Results of the ablation experiments.

Figure 11 visually demonstrates the impact of the feature attention and fusion model we designed. As a result of this design, attention is directed towards the features of damaged areas in the images. In scenes depicting completely collapsed buildings, the model places greater attention on the building debris scattered on the ground. Moreover, the attention mechanism allows the model to focus on both global and local information, enabling more precise delineation of the boundaries of the damaged areas.
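For readers unfamiliar with the visualization step, the sketch below shows a generic Grad-CAM computation in PyTorch: channel weights are obtained from the gradients of the class score, and the weighted activations form the heat map. The backbone and target layer are stand-ins, not the authors' model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()     # stand-in backbone, not the paper's network
feats, grads = {}, {}
target_layer = model.layer4                      # hypothetical choice: the last convolutional stage
target_layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

img = torch.randn(1, 3, 224, 224)                # an input image tensor
score = model(img)[0].max()                      # score of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)             # per-channel importance from gradients
cam = F.relu((weights * feats["v"]).sum(dim=1, keepdim=True))   # weighted activation map
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear")  # heat map at input resolution
```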

Figure 11. Heat map of the model’s feature.


5. Discussion

Image classification plays a crucial role in the field of computer vision and finds significant applications in assessing damaged buildings using remote sensing images. In this research, we propose the Cross Conv-Transformer model for accurately classifying post-earthquake building damage. Our experimental findings demonstrate the strong performance of our model, as well as the ResNet, EfficientNet, and ViT models, in the task of disaster building damage classification. However, despite the overall success, there are instances of misclassification that can be attributed to several factors. One prominent factor is the complex background environment in aerial images following an earthquake. Additionally, the characteristics of damaged building debris and exposed soil bear striking similarities, thereby increasing the likelihood of misclassification. Our experiments reveal that misclassification tends to occur more frequently between buildings with moderate damage and those with severe damage. Notably, our proposed method exhibits superior performance compared to the baseline models, as it effectively reduces misclassification across all damage classes.

To summarize, the Cross Conv-Transformer model demonstrates superior capability in classifying damaged buildings. Our model leverages the inherent advantages of convolutional neural networks to enhance feature extraction for building damage scenes, while also capitalizing on the strengths of Transformers, such as parallel computing, global vision, and flexible stacking. Consequently, the model effectively captures the long-range dependencies between different regions within the building damage images. Furthermore, we employ a two-branch approach, incorporating window-sized attention and grid-sized attention, along with feature fusion, to facilitate the model’s comprehensive learning of local and global features, thereby improving its robustness. As the Transformer structure continues to evolve, we anticipate the emergence of more Transformer-based networks in the near future. These networks are expected to possess enhanced feature extraction capabilities and offer more targeted remote sensing high-resolution image classification. Additionally, we aim to continuously enhance our natural disaster remote sensing dataset by expanding its scope to encompass a greater variety of data and disaster types. In future research, we will emphasize exploring the relationship between the degree of building damage and the features extracted by deep learning models, with the objective of improving the model’s proficiency in extracting fine-grained building damage features.

6. Conclusions

In this study, we have created a dataset specifically for building damage scene recognition, utilizing aerial images from the Beichuan and Yushu earthquakes. Notably, we have incorporated both Convolutional Neural Networks (CNN) and Transformers into the task of earthquake disaster scene recognition. Our proposed approach involves a two-branch Conv-Transformer model that incorporates local and global attention mechanisms. Through extensive experiments, we have observed that the Cross Conv-Transformer model outperforms the baseline models in terms of classification accuracy across different levels of building damage. Furthermore, it exhibits lower loss during the training phase. By analyzing the confusion matrices, we found that the Cross Conv-Transformer model effectively reduces the classification errors between severely damaged and moderately damaged buildings. Moreover, the accuracy of the Cross Conv-Transformer model's classification is consistently higher during the training phase. Our research demonstrates the effective application of the Cross Conv-Transformer model for extracting valuable disaster information from aerial images.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by National Key Research and Development Program of China [2019YFE0127400]; KAKENHI [19K20309].

References

  • Adriano, Bruno, J. Xia, G. Baier, N. Yokoya, and S. Koshimura. 2019. “Multi-Source Data Fusion Based on Ensemble Learning for Rapid Building Damage Mapping During the 2018 Sulawesi Earthquake and Tsunami in Palu, Indonesia.” Remote Sensing 11: 886. doi:10.3390/rs11070886.
  • Akhmadiya, Asset, N. Nabiyev, K. Moldamurat, K. Dyusekeev, and S. Atanov. 2020. “Change Detection Based Building Damage Assessment Method Using Radar Imageries with GLCM Textural Parameters.” doi:10.20944/preprints202001.0225.v1.
  • Bazi, Y., L. Bashmal, M. Rahhal, R. Dayil, and N. Ajlan. 2021. “Vision Transformers for Remote Sensing Image Classification.” Remote Sensing 13 (3): 1–20. doi:10.3390/rs13030516.
  • Bialas, J., T. Oommen, and T. Havens. 2019. “Optimal Segmentation of High Spatial Resolution Images for the Classification of Buildings Using Random Forests.” International Journal of Applied Earth Observation and Geoinformation 82: 101895. doi:10.1016/j.jag.2019.06.005.
  • Chen, C., Q. Fan, N. Mallinar, T. Sercu, and R. Feris. 2019. “Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition.” 7th International Conference on Learning Representations, ICLR, 1–20.
  • Chen, C., Q. Fan, and R. Panda. 2021. “CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification.” IEEE International Conference on Computer Vision, 347–356. doi:10.1109/ICCV48922.2021.00041.
  • Chen, S., X. Wang, and P. Xiao. 2018. “Urban Damage Level Mapping Based on Co-Polarization Coherence Pattern Using Multitemporal Polarimetric SAR Data.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11 (8): 2657–2667. doi:10.1109/JSTARS.2018.2818939.
  • Dosovitskiy, A., L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, and M. Dehghani. 2020. “An Image is Worth 16 Words: Transformers for Image Recognition at Scale.” arXiv:2010.11929. http://arxiv.org/abs/2010.11929.
  • Duarte, D., F. Nex, and N. Kerle. 2020. “Detection of Seismic Façade Damages with Multi-Temporal Oblique Aerial Imagery.” GIScience & Remote Sensing 57 (5): 670–686. doi:10.1080/15481603.2020.1768768.
  • Fan, Q., C. F. R. Chen, H. Kuehne, et al. 2019a. “More is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation.” Advances in Neural Information Processing Systems 32.
  • Fan, X., G. Nie, Y. Deng, et al. 2019b. “Estimating Earthquake-Damage Areas Using Landsat-8 OLI Surface Reflectance Data.” International Journal of Disaster Risk Reduction 33: 275–283. doi:10.1016/j.ijdrr.2018.10.013.
  • Fan, X., G. Nie, C. Xia, et al. 2021. “Estimation of Pixel-Level Seismic Vulnerability of the Building Environment Based on Mid-resolution Optical Remote Sensing Images.” International Journal of Applied Earth Observation and Geoinformation 101: 102339. doi:10.1016/j.jag.2021.102339.
  • Foody, G. M. 2020. “Explaining the Unsuitability of the Kappa Coefficient in the Assessment and Comparison of the Accuracy of Thematic Maps Obtained by Image Classification.” Remote Sensing of Environment 239: 111630. doi:10.1016/j.rse.2019.111630.
  • Gebrehiwot, A., L. Beni, G. Thompson, P. Kordjamshidi, and T. Langan. 2019. “Deep Convolutional Neural Network for Models for Identifying Damaged Buildings Aerial Vehicles Data.” Sensors 19 (7), doi:10.3390/s19071486.
  • Graham, B., A. El-Nouby, H. Touvron, et al. 2021. “Levit: A Vision Transformer in ConvNet’s Clothing for Faster Inference.” IEEE/CVF International Conference on Computer Vision, 12259–12269.
  • Gulati, A., J. Qin, C. C. Chiu, et al. 2020. “Conformer: Convolution-Augmented Transformer for Speech Recognition”. The Annual Conference of the International Speech Communication Association, INTERSPEECH, 10: 5036–5040. doi:10.21437/Interspeech.2020-3015.
  • Hong, D., Z. Han, J. Yao, et al. 2021. “SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers.” IEEE Transactions on Geoscience and Remote Sensing 60: 1–15. doi:10.1109/TGRS.2021.3130716.
  • Huang, J., J. Tao, B. Liu, et al. 2020. “Multimodal Transformer Fusion for Continuous Emotion Recognition.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 3507–3511.
  • Jia, S., and Y. Wang. 2022. “Multiscale Convolutional Transformer with Center Mask Pretraining for Hyperspectral Image Classification.” arXiv:2203.04771. http://arxiv.org/abs/2203.04771.
  • Knyaz, V. A., V. V. Kniaz, and F. Remondino. 2018. “Image-to-Voxel Model Translation with Conditional Adversarial Networks.” The European Conference on Computer Vision (ECCV) Workshops, 1: 1–19.
  • Liu, C., S. M. E. Sepasgozar, Q. Zhang, et al. 2022. “A Novel Attention-Based Deep Learning Method for Post-Disaster Building Damage Classification.” Expert Systems with Applications 202: 117268. doi:10.1016/j.eswa.2022.117268.
  • Ma, L., Y. Liu, X. Zhang, et al. 2019. “Deep Learning in Remote Sensing Applications: A Meta-Analysis and Review.” ISPRS Journal of Photogrammetry and Remote Sensing 152: 166–177. doi:10.1016/j.isprsjprs.2019.04.015.
  • Mangalathu, S., H. Sun, C. C. Nweke, et al. 2020. “Classifying Earthquake Damage to Buildings Using Machine Learning.” Earthquake Spectra 36 (1): 183–208. doi:10.1177/8755293019878137.
  • Mangan, S., and U. Alon. 2003. “Structure and Function of the Feed-Forward Loop Network Motif.” The National Academy of Sciences 100 (21): 11980–11985. doi:10.1073/pnas.2133841100.
  • Mohammadian, A., and F. Ghaderi. 2022. “SiamixFormer: A Siamese Transformer Network for Building Detection and Change Detection from Bi-Temporal Remote Sensing Images.” arXiv:2208.00657. http://arxiv.org/abs/2208.00657.
  • Naito, S., H. Tomozawa, Y. Mori, et al. 2020. “Building-Damage Detection Method Based on Machine Learning Utilizing Aerial Photographs of the Kumamoto earthquake.” Earthquake Spectra 36 (3): 1166–1187. doi:10.1177/8755293019901309.
  • Prashath, R. R., N. Priyadharshini, and C. B. Lakshmi. 2021. “Aerial Image Based Calamity Monitoring Using Deep Learning for Emergency Responsive Applications.” IOP Conference Series: Materials Science and Engineering 1055: 012094. doi:10.1088/1757-899X/1055/1/012094.
  • Radford, A., and S. Tim. 2021. “Language Understanding.” Encyclopedia of Autism Spectrum Disorders, 2640–2640. doi:10.1007/978-3-319-91280-6_300915.
  • Rosso, M. M., G. Marasco, S. Aiello, et al. 2023. “Convolutional Networks and Transformers for Intelligent Road Tunnel Investigations.” Computers & Structures 275: 106918. doi:10.1016/j.compstruc.2022.106918.
  • Roy, R., S. S. Kulkarni, V. Soni, et al. 2022. “Transformer-Based Flood Scene Segmentation for Developing Countries.” arXiv:2210.04218. http://arxiv.org/abs/2210.04218.
  • Settou, T., M. K. Kholladi, and A. Ben Ali. 2022. “Improving Damage Classification Via Hybrid Deep Learning Feature Representations Derived from Post-Earthquake Aerial Images.” International Journal of Image and Data Fusion 13 (1): 1–20. doi:10.1080/19479832.2020.1864787.
  • Shen, Y., S. Zhu, T. Yang, C. Chen, D. Pan, J. Chen, L. Xiao, and Q. Du. 2022. “BDANet: Multiscale Convolutional Neural Network with Cross-Directional Attention for Building Damage Assessment from Satellite Images.” IEEE Transactions on Geoscience and Remote Sensing 60: 1–16. doi:10.1109/TGRS.2021.3080580.
  • Shi, L., F. Zhang, J. Xia, J. Xie, Z. Zhang, Z. Du, and R. Liu. 2021. “Identifying Damaged Buildings in Aerial Images Using the Object Detection Method.” Remote Sensing 13 (21): 4213. doi:10.3390/rs13214213.
  • Shocher, A., G. Yossi, M. Inbar, Y. Michal, I. Michal, F. William, and D. Tali. 2020. “Semantic Pyramid for Image Generation.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 7455–7464. doi:10.1109/CVPR42600.2020.00748.
  • Touvron, H., C. Matthieu, D. Matthijs, M. Francisco, S. Alexandre, and J. Hervé. 2020. “Training Data-Efficient Image Transformers & Distillation Through Attention.” arXiv:2012.12877. http://arxiv.org/abs/2012.12877.
  • Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, and I. Polosukhin. 2017. “Attention is All You Need.” Advances in Neural Information Processing Systems 12: 5999–6009.
  • Voita, E., D. Talbot, F. Moiseev, R. Sennrich, and I. Titov. 2020. “Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned.” arXiv:1905.09418. doi:10.48550/arXiv.1905.09418.
  • Wang, Y., C. Alvin Wei, and L. Zhang. 2022. “Building Damage Detection from Satellite Images After Natural Disasters on Extremely Imbalanced Datasets.” Automation in Construction 140: 104328. doi:10.1016/j.autcon.2022.104328.
  • Wang, X., and P. Li. 2020. “Extraction of Urban Building Damage Using Spectral, Height and Corner Information from VHR Satellite Images and Airborne LiDAR Data.” ISPRS Journal of Photogrammetry and Remote Sensing 159: 322–336. doi:10.1016/j.isprsjprs.2019.11.028.
  • Wang, W., E. Xie, X. Li, D. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao. 2021. “Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction Without Convolutions.” arXiv:2102.12122. http://arxiv.org/abs/2102.12122.
  • Wu, H., B. Xiao, N. Codella, M. Liu, X. Dai, L. Yuan, and L. Zhang. 2021. “CvT: Introducing Convolutions to Vision Transformers.” arXiv:2103.15808. http://arxiv.org/abs/2103.15808.
  • Xiong, R., Y. Yang, D. He, K. Zheng, S. Zheng, and T. Liu. 2020. “On Layer Normalization in the Transformer Architecture.” International Conference on Machine Learning, 10524–10533. https://proceedings.mlr.press/v119/xiong20b.
  • Xu, J., Y. Zhang, and D. Miao. 2020. “Three-Way Confusion Matrix for Classification: A Measure Driven View.” Information Sciences 507: 772–794. doi:10.1016/j.ins.2019.06.064.
  • Yang, W., W. Zhang, and P. Luo. 2021. “Transferability of Convolutional Neural Network Models for Identifying Damaged Buildings Due to Earthquake.” Remote Sensing 13 (3): 1–20. doi:10.3390/rs13030504.
  • Yuan, Li, Y. Chen, T. Wang, W. Yu, Y. Shi, Z. Jiang, and S. Yan. 2021. “Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet.” arXiv:2101.11986. doi:10.48550/arXiv.2101.1198.
  • Zhang, B., Z. Hu, P. Wu, H. Huang, and J. Xiang. 2023. “Engineering Applications of Artificial Intelligence EPT : A Data-Driven Transformer Model for Earthquake Prediction.” Engineering Applications of Artificial Intelligence 4: 106176. doi:10.1016/j.engappai.2023.106176.
  • Zhao, L., J. Yang, et al. 2013. “Damage Assessment in Urban Areas Using Post-Earthquake Airborne PolSAR Imagery.” International Journal of Remote Sensing 34 (24): 8952–8966. doi:10.1080/01431161.2013.860566.
  • Zhu, X., Devis Tuia, L. Mou, G. Xia, L. Zhang, F. Xu, and F. Fraundorfer. 2017. “Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources.” IEEE Geoscience and Remote Sensing Magazine 5: 8–36. doi:10.1109/MGRS.2017.2762307.