Enhancing deep learning techniques for the diagnosis of the novel coronavirus (COVID-19) using X-ray images

Article: 2181917 | Received 02 Dec 2022, Accepted 29 Jan 2023, Published online: 08 Mar 2023

Abstract

Deep learning techniques combined with radiological imaging provide precise diagnoses that can be utilised for the classification and diagnosis of several diseases in the medical sector. Several research studies have focused on the binary classification of COVID-19, while research on its multiclass classification remains limited. The purpose of this study is to develop a model that enhances the multiclass classification of COVID-19 using raw chest X-ray images. Convolutional neural networks are used as the classifier, and five pre-trained deep learning models, VGG16, MobileNet, EfficientNetB0, NasNetMobile and ResNet50V2, are used to distinguish between COVID-19 infection and other lung diseases. Data augmentation and normalisation techniques are applied to improve the models' performance and avoid training problems. The study findings reveal that pre-trained deep learning models can distinguish between COVID-19 infection and other lung diseases. The proposed technique successfully classifies five classes (normal, COVID-19, lung opacity, viral pneumonia and bacterial pneumonia). The NasNetMobile model outperformed the rest of the models, achieving an overall accuracy, sensitivity, specificity and precision of 91%, 91%, 97.7% and 91%, respectively. The VGG16 model produced better results in detecting COVID-19 infection, with an accuracy of 95.8%. The suggested technique is more accurate than other recently developed techniques presented in the literature, providing healthcare staff with a powerful deep learning-based tool for the diagnosis of COVID-19.

1. Introduction

The new Coronavirus (COVID-19) pandemic, which originated in the Chinese city of Wuhan in December 2019, became a huge global health problem (Roosa et al., Citation2020; CitationYan et al). It spread very fast across the world within a few months, causing serious disruptions to economic and development activities in addition to the health-related impact (W. Wang et al., Citation2020). Given the contagious nature of the virus, countries across the globe were affected over a short period of time. To reduce the spread of the virus, governments put in place restrictions such as lockdowns, and this impacted almost every sector of life across the world (Ji et al., Citation2020). The World Health Organisation (WHO) moved quickly to declare the new Coronavirus (COVID-19) a pandemic. Observations made by the WHO indicated that the virus takes between 2 and 14 days to incubate inside the human body (Painuli et al., Citation2021). One of the main problems posed by the virus is that it presents different symptoms in different individuals: affected people manifest symptoms ranging from mild to severe, including critical illness. COVID-19 tends to affect the lungs, mainly leading to pneumonia in severely ill patients (Usama Khalid Bukhari et al., Citation2020). Patients who develop pneumonia have inflamed lungs, which causes the alveoli to be filled with fluid or sticky substance (Obaro & Madhi, Citation2006). This impairs the gaseous exchange between the blood circulatory system and the lungs, and the resulting reduction in the amount of oxygen and carbon dioxide exchanged makes it difficult to breathe (A. U. Ibrahim et al., Citation2021).

There are several microbial pathogens that can also infect and damage the lungs, resulting in patients developing pneumonia; however, certain radiological characteristics make it feasible to diagnose pneumonia caused by COVID-19 (Usama Khalid Bukhari et al., Citation2020). The main difference between commonly known pneumonia and pneumonia resulting from COVID-19 infection is that the one caused by the coronavirus is more harmful and badly damages the lungs over a short time. In addition, pneumonia caused by COVID-19 tends to affect the lungs as a whole, whereas typical pneumonia tends to damage selected parts of the lungs (E. Khan et al., Citation2022). Several techniques have been developed and adopted for the diagnosis of typical pneumonia, including analysis of blood gas, the sputum test, CT scan and chest X-rays, among others (E. Khan et al., Citation2022). One of the most commonly used methods across the world for the diagnosis of pneumonia is chest radiography (X-ray; A. K. Jaiswal et al., Citation2019). There are several reasons for this, including that chest X-ray is a fast and relatively cheap method (Antin et al., Citation2017) and a common clinical method (Narayan Das et al., Citation2020). Furthermore, compared to computed tomography (CT) and other methods such as magnetic resonance imaging (MRI), the chest X-ray gives a lower radiation dose to patients (Gaál et al., Citation2020). However, despite the benefits cited above, a high degree of accuracy is difficult to achieve during diagnosis. Given the short supply of test kits, reverse transcription polymerase chain reaction (RT-PCR) tests are difficult to conduct for every infected person (Ai et al., Citation2020; Hemdan et al., Citation2020). The other downside of these tests is that they are time-consuming, with results produced within a range of a few hours up to 2 days.
Furthermore, these tests produce a high rate of false-negative results (Basu et al., Citation2020; Nayak et al., Citation2021). It is against this backdrop that the search for a different screening technique that takes less time and is more reliable should be prioritised (Farooq & Hafeez, Citation2020). It is imperative to come up with a new technique that medical staff can utilise for diagnosing COVID-19. The benefit of early detection of COVID-19 is that patients' lives can be saved by ensuring the provision of on-time medical attention.

Innovative solutions to most computer science problems, such as image processing and bio-informatics, have emerged through the adoption of Artificial Intelligence (AI) and Machine Learning (ML) techniques in healthcare settings (Shuja et al., Citation2021). The structure of the brain has inspired the development of deep learning, which is part of machine learning (Narin et al., Citation2021). It has been widely employed in medical imaging, helping to produce relevant results from the datasets. Deep learning models have been applied in different areas such as classification, segmentation and detection of lesions from medical data. In addition, deep learning models have been instrumental in the analysis of data derived from medical imaging techniques such as magnetic resonance imaging (MRI), CT and X-ray. Based on such analyses, it is possible to detect and diagnose diseases such as diabetes mellitus (Yildirim et al., Citation2019), skin cancer (Dorj et al., Citation2018; Hosseinzadeh Kassani & Hosseinzadeh Kassani, Citation2019), brain tumour (Saba et al., Citation2020) and breast cancer (Ribli et al., Citation2018; Yu et al., Citation2021), as well as to diagnose pulmonary pneumonia from X-ray images (Rajpurkar et al., Citation2017) and perform lung segmentation (Gaál et al., Citation2020).

It is possible to apply the understanding of a particular problem to a different but similar scenario using the transfer learning approach (Weiss et al., Citation2016). The use of transfer learning is helpful: for instance, making use of a pre-trained network with transfer learning is faster and easier than training a network from scratch. Deep learning-based solutions have been widely used in medical imaging analysis and computer-assisted interventions (Campanella et al., Citation2019). However, despite the flexibility of the available deep learning platforms, it has been observed that these solutions are unable to provide specific functionality for medical image analysis, and a substantial implementation effort is required to adapt them to this function (Razavian, Citation2019). In addition, deep learning networks help to develop computational frameworks that comprise several processing layers to learn how data are represented at multiple levels of abstraction (Li & Li, Citation2022). The processing layers inside deep learning networks are interconnected through nodes (neurons), and information is communicated from the previous layers to each of the hidden layers. The convolutional neural networks (CNNs) are the most widely used deep learning networks (Ouchicha et al., Citation2020). The CNNs have been applied to address different problems, including the detection of pulmonary pneumonia in X-ray images (Rajpurkar et al., Citation2017) and lung segmentation (Gaál et al., Citation2020). This is because the CNNs can convert a multidimensional input image into a desired output (Lecun & Bengio, Citation1995).

The consulted literature reveals that several research studies have so far focused on the binary classification of COVID-19 (Abbas et al., Citation2021; Mahmud et al., Citation2020; Umair et al., Citation2021; S. Wang et al., Citation2021). However, there is limited research focusing on multiclass classification of COVID-19 (Barua et al., Citation2021; Hussain et al., Citation2021; Kobat et al., Citation2021; Ozturk et al., Citation2020; Tuncer et al., Citation2020, Citation2021; Xu et al., Citation2020). For instance, Tuncer et al. (Citation2021) developed a new classification method to distinguish between COVID-19 and pneumonia. Given that COVID-19 and pneumonia are very similar, three classes of data, including COVID-19, pneumonia and normal chest X-ray images, were used in the study. The results showed that the best classifier was a cubic support vector machine. The performance of existing multiclass classification approaches is not sufficient, and therefore, there is a need to improve it. Our study sought to develop a model that enhances the multiclass classification of COVID-19 using raw chest X-ray images. This study proposes five deep learning models to help distinguish between COVID-19, other lung diseases and healthy cases. The five models proposed for the automatic detection of COVID-19 using raw chest X-ray images are VGG16, MobileNet, EfficientNet, NasNetMobile and ResNet50. The proposed approach is intended to provide accurate diagnostics for five-class classification (bacterial pneumonia, COVID-19, lung opacity, normal and viral pneumonia). The data set used consists of a five-class X-ray database, divided into training, validation and test sets. To improve the performance of the deep learning models and prevent problems associated with training, such as overfitting, some assistive techniques have been used.

So far, several deep learning models have been developed for the diagnosis of COVID-19, making it necessary to develop effective methodologies to evaluate their performance and identify the optimal one. Further work has accordingly been done to evaluate deep learning models for identifying COVID-19 (D. A. Ibrahim et al., Citation2021; Mahmoudi et al., Citation2022; Mohammed et al., Citation2022; Nagi et al., Citation2022). For instance, Mohammed et al. (Citation2022) developed a methodology for the evaluation of different deep learning models. They used two data sets: the first included 746 CT images, 349 of which were confirmed COVID-19 cases and the other 397 of healthy individuals; the second was composed of unenhanced lung CT images of 632 positive COVID-19 cases, evaluated with 15 trained and pre-trained deep learning models. Their study showed that ResNet50 produced an accuracy of 91.46% and an F1 score of 90.49%, proving to be an optimal deep learning model for the diagnosis of COVID-19. Similarly, Nagi et al. (Citation2022) developed a CNN model, called the Custom Model, that facilitated the evaluation of the performance of deep learning models on a larger COVID-19 chest X-ray image data set. Using criteria such as accuracy, precision, recall and F1 score, they found that Xception was the top performer in terms of accuracy and precision, while the MobileNetV2-based model identified slightly more COVID-19 cases than Xception, showing fewer false negatives but far more false positives than the other models. In addition, the Custom CNN model was better than MobileNetV2 in terms of precision.
The Xception model produced the best accuracy, precision, recall, and F1 score out of the three models used in the experiment, and these were 94.2%, 99%, 95% and 97%, respectively. Our study adopts a similar approach but seeks to identify ways of enhancing the diagnosis of COVID-19 focusing on multiclass classification.

2. Related work

The following observations have been made from the consulted literature. Most of the research focusing on the detection of COVID-19 has been conducted using chest X-rays, and this reveals the important role of chest X-rays in the diagnosis of chest infections (A. K. Jaiswal et al., Citation2019) and, in particular, of COVID-19. This is because chest X-rays are cheap, fast and a commonly used clinical method (Antin et al., Citation2017). In addition, compared with computed tomography (CT) and magnetic resonance imaging (MRI), they give the patient a lower radiation dose (Gaál et al., Citation2020). The studies also reveal that the use of transfer learning-based CNNs is important in the development of an effective model. Despite the massive achievements made since the advent of COVID-19, this field of study continues to face some challenges. For instance, deep learning methods require large amounts of data for training; however, such data are not yet available for the COVID-19 class. To address the small-dataset issue, data augmentation techniques can be used (Akter et al., Citation2021). In addition, the literature shows that each study produces its own protocol, classes and data, and there are no unified data, classes and evaluation protocols; as a result, comparison between different methods is difficult. In our literature review, we also found that the existing studies focus more on the binary classification of COVID-19 (Abbas et al., Citation2021; Mahmud et al., Citation2020; Umair et al., Citation2021; S. Wang et al., Citation2021), and there is a dearth of research regarding COVID-19 multiclass classification (Hussain et al., Citation2021; Ozturk et al., Citation2020; Xu et al., Citation2020). The current performance of multiclass classification is inadequate; hence, an improvement is needed.
Another observation made from the existing literature is that due to inter-class similarities, multiclass classification of X-ray is a challenging task. In the following paragraphs, we provide a brief summary of different projects that have been conducted to date.

There are several projects that have been carried out with the view of developing a reliable system for COVID-19 classification. Noticeably, the amount of data used in each project varies. It also emerges that, out of all the studies that have been conducted, the convolutional neural network (CNN) appears to be the most preferred approach. It is well documented that this approach has been utilised for the diagnosis of COVID-19 in recent times (Alghamdi et al., Citation2021; Alghamdi & Dahab, Citation2022; Barua et al., Citation2021; Dong et al., Citation2021; Islam et al., Citation2021; Kobat et al., Citation2021; Mohammed et al., Citation2022; Nayak et al., Citation2021; Shi et al., Citation2021; Subramanian et al., Citation2022; Tuncer et al., Citation2020, Citation2021). In a study by El Asnaoui and Chawki (Akter et al., Citation2021), deep convolutional neural network architectures were compared for the automatic multiclass classification of images into normal, bacterial and coronavirus classes. On the other hand, a CNN based on the inception network was designed by S. Wang et al. (Citation2021) for the detection of COVID-19 disease in CT images. In the same vein, Mahmud et al. (Citation2020) came up with a deep learning-based technique capable of classifying COVID-19 and pneumonia infection. In this case, the features were extracted using a deep CNN model called CovXNet. The model generated good results, classifying non-COVID-19 pneumonia and COVID-19 pneumonia with 96% accuracy. Similarly, A. Jaiswal et al. (Citation2020) suggested another model, DenseNet201, which helps to distinguish between COVID-19-infected and non-infected patients using chest CT images. L. Wang et al. (Citation2020) developed COVID-Net, with a deep CNN architecture, for detecting COVID-19; based on their experimental results, their proposed method produced good outcomes.
In addition, they also presented the COVIDx data set, which has been adopted in the study at hand. Another technique for the binary classification of COVID-19 was developed by Umair et al. (Citation2021). Their study involved a comparison of four deep learning models, and various evaluation parameters were utilised to validate the results. Apostolopoulos & Mpesiana (Citation2020) proposed a transfer learning-based CNN model that helps to classify medical data into three groups (normal, pneumonia and COVID-19). Their study was conducted using five different pre-trained CNN models and reported an accuracy of 98.75% for the binary classification problem and 98.48% for the multiclass classification problem. Alqudah et al. (Citation2020) employed two separate methods to identify COVID-19 from chest X-ray images. The first method employed the following CNNs: AOCTNet, MobileNet and ShuffleNet. In the second method, the features of the images were extracted and classified using random forest (RF) algorithms, a softmax classifier, support vector machine (SVM) and K-nearest neighbour (kNN). A different system called CoroDet, based on a CNN, was developed by Hussain et al. (Citation2021) for the detection of COVID-19 infection. The suggested CNN network consists of 22 layers, is trained using chest X-ray and CT scan images and classifies COVID-19 and non-COVID-19 cases. In addition, it is also useful for classifying three categories: normal, pneumonia and COVID-19. The new model with 22 layers generated good classification results. In another study, Hemdan et al. (Citation2020) used 7 different deep CNN architectures to investigate COVID-19 classification. Their experimental studies involved working with COVID-19 and non-COVID-19 categories. They used a data set with a total of 50 images and achieved better results using the VGG19 and DenseNet201 models.
Ghoshal & Tucker (Citation2020) developed a Dropweight-based Bayesian Convolutional Neural Network model, which was tested with four different categories: normal, bacterial pneumonia, non-COVID-19 viral pneumonia and COVID-19.

Another CNN-based technique was developed by Abbas et al. (Citation2021), and this was useful for the classification of COVID-19 infection using chest X-ray images. The CNN model known as DeTraC (decompose, transfer and compose) was used. The study involved the use of several data sets from various hospitals throughout the world. The DeTraC model produced good results, with an accuracy of 93.1% and a sensitivity of 100%. Using some pre-trained models, Sethy et al. (Citation2020) extracted features from X-ray images. Based on their experimental outcomes, the features obtained from the ResNet50 model and then classified using SVM produced better results in comparison to the other models. Similarly, Usama Khalid Bukhari et al. (Citation2020) used ResNet-50 CNN architectures on 278 CXR images divided into three categories: COVID-19, normal and pneumonia. Their model produced good results and showed that the pulmonary changes caused by COVID-19 differ considerably from those of the other types of pneumonia.

An improved ResNet-50 CNN architecture called COVIDResNet was proposed in another study (Farooq & Hafeez, Citation2020). The experiment was conducted by progressively resizing input images to 128 x 128 x 3, 224 x 224 x 3 and 229 x 229 x 3 pixels and automatically selecting the learning rate for fine-tuning the network at each stage. The results from the work showed high accuracy and computational efficiency for multiclass classification. In another study, a 24-layer CNN model for the classification of COVID-19 and normal images was developed by Panwar et al. (Citation2020). The model was called nCOVnet, and its training involved the use of an X-ray data set. The model produced an accuracy of up to 97%. Zhang et al. (CitationZhang et al) developed a new deep learning-supported anomaly detection model for COVID-19 using X-ray images. When the threshold was set to 0.25, their model produced a sensitivity of 90% and a specificity of 87.84%. Similarly, another transfer learning-based CNN model for detecting COVID-19 was proposed by Narin et al. (Citation2021). They used the ResNet50, InceptionV3 and Inception-ResNetV2 pre-trained models for transfer learning. Their simulation results showed that the ResNet50-based model produced the best results. In the following section, we provide details of how our study, which focused on multiclass classification of COVID-19, was carried out.

3. Materials and methods

As indicated above, most of the existing research studies have focused on the binary classification of COVID-19 (Abbas et al., Citation2021; Mahmud et al., Citation2020; Umair et al., Citation2021; S. Wang et al., Citation2021). Yet, very little research has been dedicated to multiclass classification of COVID-19 (Hussain et al., Citation2021; Ozturk et al., Citation2020; Tuncer et al., Citation2021; Xu et al., Citation2020). It is still necessary to work towards improving the performance of multiclass classification. This study focuses on distinguishing between COVID-19, other lung diseases and healthy cases. Convolutional neural networks (CNNs; Krizhevsky et al., Citation2017), a deep learning method, are used as the classifier, and the following pre-trained models are used for transfer learning to detect COVID-19 automatically from raw chest X-ray images: EfficientNetB0 (Tan & Le, Citation2019), VGG16 (Simonyan & Zisserman, Citation2014), ResNet50-V2, NasNetMobile (Zoph et al., Citation2017) and MobileNetV2 (Sandler et al., Citation2018). The new model seeks to ensure accurate diagnostics for multiclass classification, that is, bacterial pneumonia, COVID-19, lung opacity, normal and viral pneumonia. To avoid the overfitting problems associated with training models, assistive techniques are utilised. This section presents the data set used for training and testing as well as the deep learning models employed in this study.

3.1. Data set

We made use of a publicly available data set on Kaggle (“Covid-19 X-ray—Two proposed Databases | Kaggle.”, Citation2022) with a five-class X-ray database, including viral pneumonia, normal, COVID-19, bacterial pneumonia and lung opacity, to distinguish between COVID-19, the other lung diseases and healthy cases. This was divided into training, validation and test sets. The selected data set was drawn from different sources (Cohen et al., Citation2020; Irvin et al., Citation2019; Jaeger et al., Citation2014; Kermany et al., Citation2018) and from the Hospital of Tolga, Algeria (Vantaggiato et al., Citation2021). Table 1 shows the number of samples for each class, and Figure 1 shows an example X-ray for each class of the five-class COVID-19 database.

Figure 1. Chest X-ray images of normal, COVID-19, bacterial pneumonia, viral pneumonia and lung opacity.

Table 1. An overview of the number of samples for each class in the data set

3.2. Data normalization and augmentation

To ensure efficiency in the training of deep learning models, a data set with a large number of samples is required. However, such data sets are not always readily available given that COVID-19 is a new disease. To resolve this issue, image augmentation is implemented to increase the size of the data set, improve the performance of the models and mitigate overfitting. Several image augmentation techniques exist, and it is therefore important to select a good technique that maintains all the information in the original data set while increasing its size. If the original data set has inter-class similarities, rotational augmentation and flipping augmentation are useful approaches to adopt. Rotational augmentation involves rotating the images either clockwise or anticlockwise at different degrees. While, from a practical perspective, images can be rotated from 1 to 359 degrees, rotations of up to 30 degrees are appropriate to preserve augmentation safety. Flipping can be horizontal or vertical around the axis. In addition, the data set is normalised to a range of 0 to 1: each pixel of the images in the data set is multiplied by a factor of 1/255. This ensures that the pixel intensities in the data set are consistent, which makes calculations faster during training, thereby reducing the training time as well as improving performance. Figure 2 shows some examples of augmentation.

Figure 2. Sample of augmented images.

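The normalisation and flipping steps described above can be sketched in NumPy. The toy batch and its shape are illustrative assumptions, not the study's data; arbitrary-angle rotation (such as the 30-degree rotations discussed above) would additionally need an image library such as SciPy:

```python
import numpy as np

# Hypothetical 8-bit grayscale chest X-ray batch: (N, H, W), values 0-255.
rng = np.random.default_rng(0)
batch = rng.integers(0, 256, size=(4, 64, 64), dtype=np.uint8)

# Normalisation: scale every pixel by 1/255 so intensities lie in [0, 1].
normalised = batch.astype(np.float32) / 255.0

# Flipping augmentation: horizontal (axis=2) and vertical (axis=1) mirrors.
h_flipped = np.flip(normalised, axis=2)
v_flipped = np.flip(normalised, axis=1)

# Pooling the originals with the flipped copies triples the data set size.
# np.rot90 only covers 90-degree steps, so rotations within the +/-30-degree
# "safe" range would use e.g. scipy.ndimage.rotate in a real pipeline.
augmented = np.concatenate([normalised, h_flipped, v_flipped], axis=0)
print(augmented.shape)  # the data set grows from 4 to 12 samples
```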

3.3. Convolutional Neural Networks (CNNs) for detecting COVID-19

The CNNs constitute one of the main groups of deep learning techniques used for the identification and classification of images, and they have been used successfully to detect and classify medical images and videos (Loey et al., Citation2020). Several AI systems based on deep learning, such as the CNNs, have been suggested for detecting COVID-19. Such techniques have demonstrated great potential in COVID-19 diagnosis, producing more accurate diagnoses of COVID-19 patients from chest X-ray images. These results are linked to the ability of deep CNNs to learn the features in the data set automatically (Lecun et al., Citation2015).

The CNNs are more powerful than conventional or traditional networks because of their capability to detect features automatically rather than manually (Sarker, Citation2021). The CNN architecture consists of several layers, or multi-building blocks: the convolutional and pooling layers, followed by one or more fully connected layers at the end (A. Khan et al., Citation2020). Feature extraction is carried out in the convolutional and pooling layers, and the classification process takes place in the fully connected layers (Narin et al., Citation2021). An example of a CNN architecture is shown in Figure 3.

Figure 3. A CNN architecture and the different layers.

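To illustrate the layer roles described above, the following is a minimal NumPy sketch of a forward pass through one convolutional layer, one pooling layer and one fully connected layer. The input size, kernel and random weights are illustrative assumptions, not the models used in this study:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with kernel k."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    oh, ow = x.shape[0] // size, x.shape[1] // size
    return x[:oh*size, :ow*size].reshape(oh, size, ow, size).max(axis=(1, 3))

rng = np.random.default_rng(1)
image = rng.random((28, 28))   # toy single-channel input
kernel = rng.random((3, 3))    # one 3x3 filter (learned, in a trained CNN)

features = np.maximum(conv2d(image, kernel), 0)         # convolution + ReLU
pooled = max_pool(features)                             # spatial down-sampling
logits = pooled.ravel() @ rng.random((pooled.size, 5))  # dense layer, 5 classes
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                    # softmax over classes
print(features.shape, pooled.shape, probs.shape)
```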

The training of CNNs can be achieved using two different strategies (Abbas et al., Citation2021). One of the strategies involves using them as an end-to-end network where a large number of annotated images must be made available (and this is practically impossible in medical imaging). The second strategy involves the use of transfer learning, which is usually an effective solution especially when there is a limited availability of annotated images. This strategy makes use of a pre-trained CNN model, and this removes the challenge of training a CNN from scratch, which requires large, labelled data sets and a massive amount of computing resources (Al Hadhrami et al., Citation2018).

3.4. Pre-trained model with deep transfer learning

Training a model from scratch requires high computing power and is a time-consuming process (Shin et al., Citation2016), and the training has to be done on data that resemble the context of the application. Pre-trained models with transfer learning help to speed up convergence and improve network generalisation (Alom et al., Citation2019). The majority of pre-trained models used in transfer learning are large CNNs. Different types of pre-trained models are used for the diagnosis of COVID-19; examples include AlexNet (Krizhevsky et al., Citation2017), Inception/GoogleNet (Szegedy et al., Citation2015, Citation2016), ResNet (He et al., Citation2016), the visual geometry group network (VGG; Liu & Deng, Citation2016; Simonyan & Zisserman, Citation2014), Xception (Chollet, Citation2017), DenseNet (Huang et al., Citation2017) and MobileNet (Howard et al., Citation2017). These models can be applied to a new task without requiring comprehensive training from beginning to end, and they can be used in several areas depending on their learning capabilities. Two strategies are normally used to harness the full potential of a pre-trained CNN (Apostolopoulos & Mpesiana, Citation2020):

  1. The first approach is feature extraction through the transfer learning strategy: the pre-trained model retains its original architecture and learned weights (Huh et al., Citation2016) and is used only to extract features. The extracted features are then fed into a new network that performs the classification task. This is an important approach because it avoids the computational cost of comprehensive training from scratch.

  2. The second approach is fine-tuning: this is a more complex procedure that aims for optimal results by making targeted changes to the pre-trained model, such as modifying the architecture and tuning parameters. In doing so, the weights of the pre-trained CNN model are kept the same in some layers and changed in others. The first layers tend to keep their weights because the features derived from these layers are general and can be applied to new tasks. The last layers, on the other hand, learn task-specific features and are likely to benefit from fine-tuning in line with changes made for the targeted data set. Figure 4 shows the overall architecture of the transfer learning technique used for COVID-19 diagnosis.

    Figure 4. The overall architecture of the transfer learning technique for COVID-19 diagnosis.

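The feature-extraction strategy can be illustrated with a deliberately simplified NumPy sketch: a frozen random projection stands in for the pre-trained convolutional base (in practice this would be a network such as VGG16 with learned weights), and only the new classification head is trained. All data, sizes and the training loop are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a pre-trained base: a frozen projection from raw inputs to a
# feature vector. Its weights are never updated (strategy 1 above).
W_base = rng.standard_normal((784, 32)) * 0.05

def extract_features(x):
    return np.maximum(x @ W_base, 0.0)   # frozen "base" + ReLU

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# New trainable classification head for the five classes in this study.
W_head = np.zeros((32, 5))

# Toy training loop: only W_head receives gradient updates.
x = rng.random((64, 784))                 # fake flattened images
y = rng.integers(0, 5, size=64)           # fake labels
y_onehot = np.eye(5)[y]
for _ in range(200):
    feats = extract_features(x)
    probs = softmax(feats @ W_head)
    grad = feats.T @ (probs - y_onehot) / len(x)   # cross-entropy gradient
    W_head -= 0.1 * grad                           # head-only update

acc = (softmax(extract_features(x) @ W_head).argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Fine-tuning (strategy 2) would instead also update some of the base weights, typically only in the last layers.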

3.5. Proposed methodology

With the view of distinguishing between COVID-19, other lung diseases and healthy cases, this study makes use of CNNs (Krizhevsky et al., Citation2017). The pre-trained models EfficientNetB0 (Tan & Le, Citation2019), VGG16 (Liu & Deng, Citation2016; Simonyan & Zisserman, Citation2014), ResNet50-V2 (He et al., Citation2016), NasNetMobile (Zoph et al., Citation2017) and MobileNetV2 (Sandler et al., Citation2018) are used for transfer learning to detect COVID-19 automatically from raw chest X-ray images. The proposed model is designed to provide an accurate multiclass diagnosis (bacterial pneumonia, COVID-19, lung opacity, normal and viral pneumonia). To enhance the performance of the deep learning models and to prevent problems associated with training models, such as overfitting, assistive techniques are used. The proposed workflow for multiclass classification developed in this study is shown in Figure 5.

Figure 5. The proposed workflow for multiclass classifying the COVID-19 status in X-Ray images.


3.5.1. MobileNetV2

MobileNetV2 (Sandler et al., Citation2018) is the updated version of MobileNet and introduces the two new features listed below:

  • Linear bottlenecks: located between the layers, these prevent non-linearities from destroying too much information. Using non-linear layers in bottlenecks can severely damage performance.

  • Shortcut connections between the bottlenecks: the main building block is the bottleneck depthwise-separable convolution with residuals. The MobileNetV2 architecture begins with a fully convolutional layer of 32 filters, followed by 19 residual bottleneck layers, together with ReLU6 and batch normalization layers.
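The inverted residual bottleneck described above can be sketched with Keras layers. This is an illustrative reconstruction following Sandler et al. (2018), not code from the paper: the expansion factor of 6 and the 56 x 56 x 24 input are assumptions made for the example.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, filters, expansion=6, stride=1):
    """MobileNetV2 bottleneck: expand -> depthwise conv -> linear project."""
    in_channels = x.shape[-1]
    # 1x1 expansion with batch normalization and ReLU6
    y = layers.Conv2D(in_channels * expansion, 1, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(max_value=6.0)(y)
    # 3x3 depthwise convolution
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(max_value=6.0)(y)
    # 1x1 linear bottleneck: no non-linearity, so information is preserved
    y = layers.Conv2D(filters, 1, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Shortcut connection between bottlenecks when shapes match
    if stride == 1 and in_channels == filters:
        y = layers.Add()([x, y])
    return y

inputs = tf.keras.Input(shape=(56, 56, 24))
outputs = inverted_residual(inputs, filters=24)
```

Note that the final 1x1 projection has no activation: that is the "linear bottleneck", and the residual add is the "shortcut connection" from the bullet list.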

3.5.2. NasNetMobile

NasNetMobile (Zoph et al., Citation2017) is the mobile version of the Neural Architecture Search Network (NASNet), a type of convolutional neural network. Its building blocks consist of

  • Normal cells: convolutional cells that return a feature map of the same dimension as their input.

  • Reduction cells: convolutional cells that reduce the height and width of the feature map by a factor of two.

Initially, NASNet performs its architecture search on a small data set before transferring the learned blocks to a large data set, producing a higher mAP.

3.5.3. EfficientNetB0

EfficientNetB0 (Tan & Le, Citation2019) is the smallest version of EfficientNet, a CNN architecture and scaling method that uses a compound coefficient to uniformly scale all dimensions of depth, width and resolution. The core of EfficientNetB0 is the MBConv block, an inverted residual block used to reduce the number of trainable parameters. The MBConv block includes a squeeze-and-excitation block, which helps with feature extraction.

3.5.4. The VGG

The VGG (Liu & Deng, Citation2016; Simonyan & Zisserman, Citation2014) model is used to investigate the effect of network depth using a very small convolutional filter size (3 x 3) on large-scale images. The VGG16 network has a depth of 16 layers (13 convolutional layers and 3 dense layers) and can classify images into multiple classes. The primary concept of the VGG architecture is to keep the convolution filter size small and constant while designing an extremely deep network.

3.5.5. ResNet50

ResNet50 (He et al., Citation2016) is another type of ResNet model. It is 50 layers deep, comprising 48 convolution layers, one max-pooling layer and one average-pooling layer. Subsequently, the five models are fine-tuned in line with the objectives of the study at hand. The classifier head, which consists of fully connected layers and global average pooling layers, is removed from all five networks. Two additional layers, Dense Layer 1 and Dense Layer 2, of size 1000 and 5, respectively, are added. The last fully connected layer is activated using the softmax function, and the model is trained with the Adam optimiser. To normalise the output of previous layers, a batch normalisation layer is used prior to the fully connected layers. The introduction of the batch normalisation layer reduces the convergence time and improves the accuracy. A dropout layer is used after the first fully connected layer to ensure that the model generalises and that overfitting is avoided.
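The shared classifier head described here can be sketched as follows. This is an illustrative sketch, not the authors' code: ResNet50V2 stands in for any of the five backbones, `weights=None` avoids the ImageNet download used in practice, and the ReLU activation of Dense Layer 1 and the dropout rate of 0.5 are assumptions not stated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Backbone without its original classifier head (include_top=False).
# The paper loads pre-trained weights; weights=None here only avoids a download.
base = tf.keras.applications.ResNet50V2(weights=None, include_top=False,
                                        input_shape=(224, 224, 3))

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.BatchNormalization()(x)              # normalise before the dense layers
x = layers.Dense(1000, activation="relu")(x)    # Dense Layer 1 (size 1000)
x = layers.Dropout(0.5)(x)                      # dropout after the first FC layer
outputs = layers.Dense(5, activation="softmax")(x)  # Dense Layer 2: 5 classes

model = tf.keras.Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

The softmax output gives one probability per class (bacterial, COVID-19, lung opacity, normal, viral), which is exactly what the cross-entropy cost function below consumes.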

The cost function is the cross-entropy function, shown in the following equation:

H(P, Q) = −∑_{c=1}^{N} P(o, c) log Q(o, c)

where N is the total number of classes, c indexes the classes, P(o, c) is the true probability of observation o belonging to class c, and Q(o, c) is the predicted probability of observation o for class c.
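As a minimal numeric illustration of this cost function (pure NumPy; the class ordering and probabilities below are invented for the example):

```python
import numpy as np

def cross_entropy(p_true, q_pred, eps=1e-12):
    """H(P, Q) = -sum_c P(o, c) * log Q(o, c), averaged over observations."""
    q_pred = np.clip(q_pred, eps, 1.0)  # guard against log(0)
    return float(np.mean(-np.sum(p_true * np.log(q_pred), axis=1)))

# One observation whose true class is, say, COVID-19 (index 1 of 5 classes)
p = np.array([[0.0, 1.0, 0.0, 0.0, 0.0]])
q = np.array([[0.05, 0.80, 0.05, 0.05, 0.05]])
loss = cross_entropy(p, q)  # equals -log(0.80), about 0.223
```

With one-hot true labels the sum collapses to minus the log of the probability assigned to the correct class, so confident correct predictions are penalised least.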

Hyperparameter tuning plays an important role when training a deep learning model, and one of the most important hyperparameters is the learning rate. In our study, a learning-rate scheduler is used rather than a single fixed learning rate throughout training. The scheduler divides the learning rate by a factor of 2 whenever the validation loss stops decreasing, starting from an initial learning rate of 0.0001. Through experimentation, small learning rates have been shown to work better with pre-trained models, as they retain much of the information from the previously trained model. Higher learning rates make training faster, but they can cause the weights to explode during training, which adversely affects the training process.
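The scheduling rule described above can be sketched in plain Python. A framework callback such as Keras' `ReduceLROnPlateau` with `factor=0.5` implements the same idea; this minimal sketch, with an invented loss history, just recomputes the schedule from a list of validation losses.

```python
def schedule_lr(val_losses, initial_lr=1e-4, factor=0.5):
    """Halve the learning rate whenever validation loss stops decreasing."""
    lr = initial_lr
    lrs = [lr]                            # learning rate used at epoch 0
    for prev, curr in zip(val_losses, val_losses[1:]):
        if curr >= prev:                  # validation loss stopped reducing
            lr *= factor                  # divide the learning rate by 2
        lrs.append(lr)
    return lrs

history = [0.90, 0.70, 0.72, 0.65, 0.66]  # invented validation losses
lrs = schedule_lr(history)                # halves at epochs 2 and 4
```

Starting from 0.0001 and only ever shrinking keeps the updates small enough to preserve the pre-trained weights.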

4. Performance metrics

A useful way to visualise the performance of a prediction model is the confusion matrix. Each entry in a confusion matrix depicts the number of predictions for which the model classified a class correctly or incorrectly. “TN” stands for True Negative, the number of negative examples classified correctly. Similarly, “TP” stands for True Positive, the number of positive examples classified correctly. “FP” is the False Positive value, the number of actual negative examples classified as positive; “FN” is the False Negative value, the number of actual positive examples classified as negative.
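In the multiclass setting, the four counts are derived per class from the confusion matrix. A NumPy sketch, using an invented 3-class matrix for illustration:

```python
import numpy as np

def per_class_counts(cm):
    """Derive TP, FP, FN, TN for every class from a multiclass confusion
    matrix whose rows are true labels and columns are predicted labels."""
    total = cm.sum()
    tp = np.diag(cm)                 # correct predictions per class
    fp = cm.sum(axis=0) - tp         # predicted as the class, actually another
    fn = cm.sum(axis=1) - tp         # actually the class, predicted as another
    tn = total - tp - fp - fn        # everything else
    return tp, fp, fn, tn

# Toy 3-class confusion matrix (rows: true class, columns: predicted class)
cm = np.array([[50,  2,  3],
               [ 4, 45,  1],
               [ 2,  0, 48]])
tp, fp, fn, tn = per_class_counts(cm)
```

This one-vs-rest reduction is how Tables 2 to 6 below tabulate TP, TN, FP and FN for each of the five classes.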

5. Experimental results

Following the training of the deep learning models for 50 epochs, the results obtained are shown in Figures 6–10. Based on these results, the performance of the models is evaluated.

Figure 6. The confusion matrix of the VGG16 Model.


Figure 7. The confusion matrix of the MobileNet model.


Figure 8. The confusion matrix of the EfficientNet B0 model.


Figure 9. The confusion matrix of the NasNetMobile model.


Figure 10. The confusion matrix of the ResNet50-V2 model.


The values of TP, TN, FN and FP for each class in the different models were calculated and are presented in Tables 2–6.

Table 2. Confusion matrix values for the VGG16 model

Table 3. Confusion matrix values for MobileNet model

Table 4. Confusion matrix values for the EfficientNet model

Table 5. Confusion matrix values for the NasNetMobile model

Table 6. Confusion matrix values for the ResNet50 model

Based on these values, the performance metrics, which include the overall accuracy, the accuracy for each class, sensitivity (recall), specificity and precision, are calculated using Eqs. 1–5, respectively.

  1. Overall Accuracy = correct predictions/total predictions.

  2. Accuracy for each class = (TP + TN)/(TP + FP + TN + FN)

  3. Sensitivity = TP/(TP + FN)

  4. Specificity = TN/(TN + FP)

  5. Precision = TP/(TP + FP)
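Eqs. 2 to 5 translate directly into code. A NumPy sketch, with invented per-class counts for illustration:

```python
def metrics(tp, tn, fp, fn):
    """Per-class accuracy, sensitivity (recall), specificity and precision,
    following Eqs. 2-5."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # Eq. 2
        "sensitivity": tp / (tp + fn),                   # Eq. 3
        "specificity": tn / (tn + fp),                   # Eq. 4
        "precision":   tp / (tp + fp),                   # Eq. 5
    }

# Toy counts for a single class out of 1000 test images
m = metrics(tp=90.0, tn=880.0, fp=20.0, fn=10.0)
# accuracy 0.97, sensitivity 0.90, specificity ~0.978, precision ~0.818
```

Note that with imbalanced classes the per-class accuracy (Eq. 2) can be high even when precision is modest, which is why all four metrics are reported side by side in the tables below.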

The results for the models are shown in Tables 7–11.

Table 7. Overall accuracy of deep learning models

Table 8. Accuracy for each class of deep learning models

Table 9. Sensitivity of deep learning models

Table 10. Specificity of deep learning models

Table 11. Precision of deep learning models

According to the results shown in Tables 7–11, the NasNetMobile model performed best among all the models. It achieved an overall accuracy of 91%, with sensitivity, specificity and precision of 91%, 97.7% and 91%, respectively. From the same tables, the EfficientNet B0 model achieved the highest accuracy for the bacterial class, at 96%. The VGG16 and NasNetMobile models both produced an accuracy of 95.8% for the COVID-19 class, the highest accuracy for this class. The VGG16 model was distinguished by achieving the highest accuracy for the normal class, at 98.4%. We also noted that the NasNetMobile model outperformed the rest of the models on many values: it achieved high performance in the accuracy of the viral and COVID-19 classes, the specificity of the viral and COVID-19 classes, and the sensitivity of the lung opacity, normal and viral classes, with an accuracy of 96.2% for the viral class. Finally, the ResNet50V2 model performed poorly compared to the rest of the models, except for the specificity of the normal class, where its value of 99.2% matched the value obtained from the VGG16 model for the same class.

6. Discussion

As highlighted earlier, this study sought to identify a reliable deep learning system that can be used for the multiclass classification of COVID-19. We observed from the onset of the study that most of the currently available research focuses on binary classification of COVID-19, while there is limited research on multiclass classification. We also found that the performance of multiclass classification is not yet adequate and merits improvement. Understandably, multiclass classification of X-rays is a challenging task, for several reasons: (1) inter-class similarities exist; (2) data sets are difficult to find, yet deep learning methods require large amounts of labelled data for training, which is not readily available for the novel COVID-19 class; (3) the data sets available in the public domain are highly class imbalanced, a major challenge when working on multiclass classification; and (4) each project defines its own protocol, classes and data, and there are no unified data, classes or evaluation protocols, making it difficult to compare different methods.

Our comprehensive literature review indicated that most of the research so far has focused on the use of chest X-rays. This reflects the important role of chest X-rays in the diagnosis of chest infections and, in particular, of COVID-19. The use of chest X-rays is common because it is a fast, cheap and widely used clinical method that gives the patient a lower radiation dose than other approaches such as CT and MRI. The results from the current study show that it is possible to distinguish between COVID-19 infection and other lung diseases using pre-trained deep learning models, namely VGG16, MobileNet, EfficientNet B0, NasNetMobile and ResNet50V2. These models are developed to provide accurate diagnostics for multiclass classification. Data augmentation and normalization techniques have been used to improve the models' performance and avoid training problems. The proposed technique successfully classifies five categories (normal, COVID-19, lung opacity, viral pneumonia and bacterial pneumonia). The NasNetMobile model outperformed the rest of the models and achieved the highest results, with an overall accuracy, sensitivity, specificity and precision of 91%, 91%, 97.7% and 91%, respectively. The VGG16 model produced better results in detecting COVID-19 infection, with an accuracy of 95.8%. The suggested technique demonstrates better accuracy than the recent techniques discussed in the literature. Based on the results, distinguishing the pulmonary changes caused by COVID-19 from those of other types of pneumonia using digital chest X-ray images is very effective, and this approach is therefore a very useful adjunct strategy for identifying the changes in the lungs caused by COVID-19 infection.
We feel that these results provide healthcare staff with a powerful tool for the diagnosis of COVID-19 and open opportunities for further research focusing on the development of more reliable and efficient deep learning systems.

7. Conclusion

The findings from this study demonstrate that the distinction between COVID-19, other lung diseases and healthy cases can be achieved using convolutional neural networks (CNNs). This deep learning approach is used as a classifier, while the pre-trained models EfficientNetB0, VGG16, ResNet50-V2, NasNetMobile and MobileNetV2 are used for transfer learning to detect COVID-19 automatically from raw chest X-ray images. The suggested model produces an accurate diagnosis for multiclass classification (bacterial, COVID-19, lung opacity, normal and viral). To enhance the performance of the deep learning models and to prevent training problems such as overfitting, assistive techniques are used. The dataset is normalized to the range 0 to 1 so that it is consistent in terms of pixel intensity; this speeds up the calculations performed during training, reducing training time and enhancing performance. In addition, the data augmentation technique was used to increase the number of training images, thereby improving the performance of the models and avoiding overfitting. Following the training and evaluation of the models on the test set, the highest accuracy for multiclass classification, 91%, was achieved by the NasNetMobile model. In addition, as captured in Table , this model produced the best results compared with all the other models. The robust architecture of the NasNetMobile model, which is designed to work with small data sets, helps it to outperform the rest of the models. In future work, we will compare all the deep learning models using the dataset employed in this study.
We will conduct experiments with algorithms and techniques for feature extraction from X-ray images with a view to training and testing machine learning algorithms including random forest, decision tree and nearest neighbour. In addition, we will seek to collect more X-ray images of people with COVID-19 and make these available to researchers and students.

Acknowledgements

This research work was funded by the Institutional Fund Projects under the grant number IFPIP: 641-611-1443. The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Ministry of Education and King Abdulaziz University [IFPIP:641-611-1443].

References

  • Abbas, A., Abdelsamea, M. M., & Gaber, M. M. (2021). Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Applied Intelligence, 51(2), 854–25. https://doi.org/10.1007/s10489-020-01829-7
  • Ai, T., Yang, Z., Hou, H., Zhan, C., Chen, C., Lv, W., Tao, Q., Sun, Z., & Xia, L. (2020). Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology, 296(2), E32–E40. https://doi.org/10.1148/radiol.2020200642
  • Akter, S., Shamrat, F. M. J. M., Chakraborty, S., Karim, A., & Azam, S. (2021, November). COVID-19 Detection Using Deep Learning Algorithm on Chest X-ray Images. Biology (Basel), 10(11), 1174. https://doi.org/10.3390/BIOLOGY10111174
  • Alghamdi, H. S., Amoudi, G., Elhag, S., Saeedi, K., & Nasser, J. (2021). Deep learning approaches for detecting COVID-19 from chest X-ray images: A survey. IEEE Access, 9, 20235–20254. https://doi.org/10.1109/ACCESS.2021.3054484
  • Alghamdi, M. M. M., & Dahab, M. Y. H. (2022). Diagnosis of COVID-19 from X-ray images using deep learning techniques. Cogent Engineering, 9(1). https://doi.org/10.1080/23311916.2124635
  • Al Hadhrami, E., Al Mufti, M., Taha, B., & Werghi, N. (2018). Transfer learning with convolutional neural networks for moving target classification with micro-Doppler radar spectrograms. 2018 Int. Conf. Artif. Intell. Big Data, ICAIBD 2018, (May), 148–154. https://doi.org/10.1109/ICAIBD.2018.8396184
  • Alom, M. Z., Taha, T.M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, S., Hasan, M., Van Essen, B.C., Awwal, A.A.S., & Asari, V.K., (2019). A state-of-the-art survey on deep learning theory and architectures. Electronics, 8(3), Mar. https://doi.org/10.3390/electronics8030292
  • Alqudah, A. M., Qazan, S., & Alqudah, A., (2020). “automated systems for detection of COVID-19 using chest X-ray images and lightweight convolutional neural networks,”. Research Square. https://doi.org/10.21203/rs.3.rs-24305/v1
  • Antin, B., Kravitz, J., & Martayan, E., (2017). “detecting pneumonia in chest X-rays with supervised learning.”. SemanticScholar.
  • Apostolopoulos, I. D., & Mpesiana, T. A. (2020). Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine, 43(2), 635–640. https://doi.org/10.1007/s13246-020-00865-4
  • Barua, P. D., Gowdh, N. F. M., Rahmat, K., Ramli, N., Ng, W. L., Chan, W. Y., Kuluozturk, M., Dogan, S., Baygin, M., Yaman, O., Tuncer, T., Wen, T., Cheong, K. H., & Acharya, U. R. (2021). Automatic COVID-19 Detection Using Exemplar Hybrid Deep Features with X-ray Images. International Journal of Environmental Research and Public Health, 18(15), 8052. https://doi.org/10.3390/ijerph18158052
  • Basu, S., Mitra, S., & Saha, N., (2020). “Deep learning for screening COVID-19 using chest X-ray images,”. IEEE Symp. Ser. Comput. Intell. SSCI 2020, 2020, (pp. 2521–2527). Symposium Series on Computational Intelligence (SSCI). IEEE. https://doi.org/10.1109/SSCI47803.2020.9308571
  • Campanella, G., Hanna, M.G., Geneslaw, L., Miraflor, A., Silva, V.M.K., Busam, K.J., Reuter, V.E., Klimstra, D.S., & Fuchs, T.J. (2019, July). Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature Medicine. 2019 258, 25 (8), 1301–1309. https://doi.org/10.1038/s41591-019-0508-1
  • Chollet, F., (2017). “Xception: Deep learning with depthwise separable convolutions,”. Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017 Janua, (pp. 1800–1807). IEEE. https://doi.org/10.1109/CVPR.2017.195
  • Cohen, J. P., Morrison, P., Dao, L., Roth, K., Duong, T. Q., & Ghassemi, M. (2020, June). COVID-19 image data collection: prospective predictions are the future. The Journal of Machine Learning for Biomedical Imaging, 2020, 2–3. https://doi.org/10.48550/arxiv.2006.11988
  • Covid-19 X-ray - Two proposed Databases. (n.d.). Kaggle. https://www.kaggle.com/datasets/edoardovantaggiato/covid19-xray-two-proposed-databases (accessed August 9, 2022).
  • Dong, D., Tang, Z., Wang, S., Hui, H., Gong, L., Lu, Y., Xue, Z., Liao, H., Chen, F., Yang, F., Jin, R., Wang, K., Liu, Z., Wei, J., Mu, W., Zhang, H., Jiang, J., Tian, J., & Li, H. (2021). The role of imaging in the detection and management of COVID-19: A Review. IEEE Reviews in Biomedical Engineering, 14, 16–29. https://doi.org/10.1109/RBME.2020.2990959
  • Dorj, U. O., Lee, K. K., Choi, J. Y., & Lee, M. (2018). The skin cancer classification using deep convolutional neural network. Multimedia Tools and Applications, 77(8), 9909–9924. https://doi.org/10.1007/s11042-018-5714-1
  • Farooq, M., & Hafeez, A., (2020). “COVID-ResNet: A Deep Learning Framework for Screening of COVID19 from Radiographs,”. ArXiv: http://arxiv.org/abs/2003.14395
  • Gaál, G., Maga, B., & Lukács, A. (2020, March). Attention U-net based adversarial architectures for chest X-ray lung segmentation. CEUR Workshop Proc, 2692. https://doi.org/10.48550/arxiv.2003.10304
  • Ghoshal, B., & Tucker, A., (March, 2020). “Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) Detection,”. ArXiv. https://doi.org/10.48550/arxiv.2003.10769
  • Hemdan, E. E.-D., Shouman, M. A., & Karar, M. E., (2020). “COVIDX-Net: A Framework of Deep Learning Classifiers to Diagnose COVID-19 in X-Ray Images,”. ArXiv: http://arxiv.org/abs/2003.11055
  • He, K., Zhang, X., Ren, S., & Sun, J., (2016). “Deep residual learning for image recognition,”. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016, pp. 770–778. IEEE. https://doi.org/10.1109/CVPR.2016.90
  • Hosseinzadeh Kassani, S., & Hosseinzadeh Kassani, P. (2019). A comparative study of deep learning architectures on melanoma detection. Tissue and Cell, 58(April), 76–83. https://doi.org/10.1016/j.tice.2019.04.009
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W, Weyand, T., Andreetto, M., & Adam, H.(October 2017). “MobileNets: Efficient convolutional neural networks for mobile vision applications,”. ArXiv: http://arxiv.org/abs/1704.04861
  • Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q., (2017). “Densely Connected Convolutional Networks,”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
  • Huh, M., Agrawal, P., & Efros, A. A., (2016). “What makes imagenet good for transfer learning?,”. ArXiv, (pp. 1–10). http://arxiv.org/abs/1608.08614
  • Hussain, E., Hasan, M., Rahman, M. A., Lee, I., Tamanna, T., & Parvez, M. Z. (2021, January). CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images. Chaos, Solitons & Fractals, 142, 110495. https://doi.org/10.1016/J.CHAOS.2020.110495
  • Ibrahim, A. U., Ozsoz, M., Serte, S., Al-Turjman, F., & Yakoi, P. S. (2021, January). Pneumonia classification using deep learning from chest X-ray images during COVID-19. Cognitive Computation, 1, 1–13. https://doi.org/10.1007/S12559-020-09787-5/TABLES/9
  • Ibrahim, D. A., Zebari, D. A., Mohammed, H. J., & Mohammed, M. A. (2021). Effective hybrid deep learning model for COVID-19 patterns identification using CT images. Expert Systems. https://doi.org/10.1111/exsy.13010
  • Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., Seekins, J., Mong , D.A., Halabi , S.S., Sandberg, J.K., Jones, R., Larson, D.B., Langlotz, C.P., Patel, B.N., Lungren, M.P., & Ng, A.Y. (2019, July). CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. Proc. AAAI Conf. Artif. Intell, 33 (1), 590–597. AAAI. https://doi.org/10.1609/AAAI.V33I01.3301590
  • Islam, M. M., Karray, F., Alhajj, R., & Zeng, J. (2021). A review on deep learning techniques for the diagnosis of novel coronavirus (COVID-19). IEEE Access, 9, 30551–30572. https://doi.org/10.1109/ACCESS.2021.3058537
  • Jaeger, S., Candemir, S., Antani, S., Wáng, Y.-X. J., Lu, P.-X., & Thoma, G. (2014). Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quantitative Imaging in Medicine and Surgery, 4(6), 475–477. https://doi.org/10.3978/j.2223-4292.2014.11.20
  • Jaiswal, A., Gianchandani, N., Singh, D., Kumar, V., & Kaur, M. (2020). Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. Journal of Biomolecular Structure & Dynamics, 39(15), 5682–5689. https://doi.org/10.1080/07391102.2020.1788642
  • Jaiswal, A. K., Tiwari, P., Kumar, S., Gupta, D., Khanna, A., & Rodrigues, J. J. P. C. (2019, October). Identifying pneumonia in chest X-rays: A deep learning approach. Measurement, 145, 511–518. https://doi.org/10.1016/J.MEASUREMENT.2019.05.076
  • Ji, T., Liu, Z., Wang, G., Guo, X., Akbar Khan, S., Lai, C., Chen, H., Huang, S., Xia, S., Chen, B., Jia, H., Chen, Y., & Zhou, Q. (2020). Detection of COVID-19: A review of the current literature and future perspectives. Biosensors and Bioelectronics, 166, 956–5663. https://doi.org/10.1016/j.bios.2020.112455
  • Kermany, D. S., Goldbaum, M., Cai, W., Valentim, C. C. S., Liang, H., Baxter, S. L., McKeown, A., Yang, G., Wu, X., Yan, F., Dong, J., Prasadha, M. K., Pei, J., Ting, M. Y. L., Zhu, J., Li, C., Hewett, S., Dong, J., Ziyar, I., … Zhang, K. (2018). Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell, 172(5), 1122–1131.e9. https://doi.org/10.1016/j.cell.2018.02.010
  • Khan, E., Rehman, M. Z. U., Ahmed, F., Alfouzan, F. A., Alzahrani, N. M., & Ahmad, J. (2022, February). Chest X-ray classification for the detection of COVID-19 using deep learning techniques. Sensors 2022, 22(3), 1211. https://doi.org/10.3390/S22031211
  • Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review, 53(8), 5455–5516. https://doi.org/10.1007/s10462-020-09825-6
  • Kobat, M. A., Kivrak, T., Barua, P. D., Tuncer, T., Dogan, S., Ru-San, T., Ciaccio, E. J., & Acharya, U. R. (2021). Automated COVID-19 and heart failure detection using DNA pattern technique with cough sounds. Diagnostics, 11(11), 1962. https://doi.org/10.3390/diagnostics11111962
  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (6, 2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60. https://doi.org/10.1145/3065386
  • Lecun, Y., & Bengio, Y., (1995). Convolutional networks for images, speech, and time-series. The handbook of brain theory & neural networks.
  • Lecun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
  • Li, D., & Li, S. (2022, April). An artificial intelligence deep learning platform achieves high diagnostic accuracy for Covid-19 pneumonia by reading chest X-ray images. iScience, 25(4), 104031. https://doi.org/10.1016/J.ISCI.2022.104031
  • Liu, S., & Deng, W. (2016). Very deep convolutional neural network based image classification using small training sample size. Proc. - 3rd IAPR Asian Conf. Pattern Recognition, ACPR 2015, 730–734. https://doi.org/10.1109/ACPR.2015.7486599
  • Loey, M., Smarandache, F., & Khalifa, N. E. M. (4, 2020). Within the lack of chest COVID-19 X-ray dataset: A novel detection model based on GAN and deep transfer learning. Symmetry (Basel), 12. https://doi.org/10.3390/SYM12040651
  • Mahmoudi, R., Benameur, N., Mabrouk, R., Mohammed, M. A., Garcia-Zapirain, B., & Bedoui, H. (2022). A deep learning-based diagnosis system for COVID-19 detection and pneumonia screening using CT Imaging. Applied Sciences, 12(10), 4825. https://doi.org/10.3390/app12104825
  • Mahmud, T., Rahman, M. A., & Fattah, S. A. (2020, July). CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Computers in Biology and Medicine, 122, 103869. https://doi.org/10.1016/J.COMPBIOMED.2020.103869
  • Mohammed, M. A., Al-Khateeb, B., Yousif, M., Mostafa, S. A., Kadry, S., Abdulkareem, K. H., & Garcia-Zapirain, B (2022). Novel crow swarm optimisation algorithm and selection approach for optimal deep learning COVID-19 diagnostic model. Computational Intelligence and Neuroscience, 22 Article ID 1307944, 22. https://doi.org/10.1155/2022/1307944
  • Nagi, A. T., Awan, M. J., Mohammed, M. A., Mahmoud, A., Majumdar, A., & Thinnukool, O. (2022). Performance analysis for COVID-19 diagnosis using custom and state-of-the-art deep learning models. Applied Sciences, 12(13), 6364. https://doi.org/10.3390/app12136364
  • Narayan Das, N., Kumar, N., Kaur, M., Kumar, V., & Singh, D. (2020). Automated deep transfer learning-based approach for detection of COVID-19 infection in chest X-rays. Irbm, 1, 1–6. https://doi.org/10.1016/j.irbm.2020.07.001
  • Narin, A., Kaya, C., & Pamuk, Z. (2021). “Automatic detection of coronavirus disease (COVID ‑ 19) using X ‑ ray images and deep convolutional neural networks,”. Pattern Anal. Appl, (123456789). https://doi.org/10.1007/s10044-021-00984-y
  • Nayak, J., Naik, B., Dinesh, P., Vakula, K., Dash, P. B., & Pelusi, D. (2021). Significance of deep learning for Covid-19: State-of-the-art review. Res. Biomed. Eng https://doi.org/10.1007/s42600-021-00135-6
  • Obaro, S. K., & Madhi, S. A. (2006, March). Bacterial pneumonia vaccines and childhood pneumonia: Are we winning, refining, or redefining? The Lancet Infectious Diseases, 6(3), 150–161. https://doi.org/10.1016/S1473-3099(06)70411-X
  • Ouchicha, C., Ammor, O., & Meknassi, M. (2020). CVDNet: A novel deep learning architecture for detection of coronavirus (Covid-19) from chest x-ray images. Chaos, Solitons and Fractals, 140, 110245. https://doi.org/10.1016/j.chaos.2020.110245
  • Ozturk, T., Talo, M., Yildirim, E. A., Baloglu, U. B., Yildirim, O., & Rajendra Acharya, U. (2020). Automated detection of COVID-19 cases using deep neural networks with X-ray images. Computers in Biology and Medicine, 121(April), 103792. https://doi.org/10.1016/j.compbiomed.2020.103792
  • Painuli, D., Mishra, D., Bhardwaj, S., & Aggarwal, M. (2021, January). Forecast and prediction of COVID-19 using machine learning. Data Sci. COVID-19 Vol. 1 Comput. Perspect, 381–397. https://doi.org/10.1016/B978-0-12-824536-1.00027-7
  • Panwar, H., Gupta, P. K., Siddiqui, M. K., Morales-Menendez, R., & Singh, V. (2020, September). Application of deep learning for fast detection of COVID-19 in X-Rays using nCOVnet. Chaos, Solitons & Fractals, 138(p), 109944. https://doi.org/10.1016/J.CHAOS.2020.109944
  • Rajpurkar, P., Irvin , J., Zhu , K., Yang, B., Mehta, H., Duan , T., Ding , D., Bagul, A., Langlotz, C., Shpanskaya, K., Lungren, M.P., & Ng, A.Y. (November 2017). “CheXNet: Radiologist-Level pneumonia detection on chest X-rays with deep learning,”. ArXiv. https://doi.org/10.48550/arxiv.1711.05225
  • Razavian, N. (2019, September). Augmented reality microscopes for cancer histopathology. Nat. Med. 2019 259, 25(9), 1334–1336. https://doi.org/10.1038/s41591-019-0574-4
  • Ribli, D., Horváth, A., Unger, Z., Pollner, P., & Csabai, I. (2018). Detecting and classifying lesions in mammograms with Deep Learning. Scientific Reports, 8(1), 1–7. https://doi.org/10.1038/s41598-018-22437-z
  • Roosa, K., Lee, Y., Luo, R., Kirpich, A., Rothenberg, R., Hyman, J. M., Yan, P., & Chowell, G. (2020, January). Real-time forecasts of the COVID-19 epidemic in China from February 5th to February 24th, 2020. Infectious Disease Modelling, 5, 256–263. https://doi.org/10.1016/J.IDM.2020.02.002
  • Saba, T., Sameh Mohamed, A., El-Affendi, M., Amin, J., & Sharif, M. (2020). Brain tumor detection using fusion of hand crafted and deep learning features. Cognitive Systems Research, 59, 221–230. https://doi.org/10.1016/j.cogsys.2019.09.007
  • Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018, January). MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit, 4510–4520. https://doi.org/10.48550/arxiv.1801.04381
  • Sarker, I. H. (2021). Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions. SN Computer Science, 2(6), 1–20. https://doi.org/10.1007/s42979-021-00815-1
  • Sethy, P. K., Behera, S. K., Ratha, P. K., & Biswas, P. (2020). Detection of coronavirus disease (COVID-19) based on deep features and support vector machine. The journal International Journal of Mathematical, Engineering and science, 5(4), 643–651. https://doi.org/10.33889/IJMEMS.2020.5.4.052
  • Shin, H. C., Hoo-chang, H.R., Mingchen, G., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D., & Summers, R.M. (2016). Deep convolutional neural networks for computer-aided detection: cnn architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging, 35(5), 1285–1298. https://doi.org/10.1109/TMI.2016.2528162
  • Shi, F., Wang, J., Shi, J., Wu, Z., Wang, Q., Tang, Z., He, K., Shi, Y., & Shen, D. (2021). Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. IEEE Reviews in Biomedical Engineering, 14, 4–15. https://doi.org/10.1109/RBME.2020.2987975
  • Shuja, J., Alanazi, E., Alasmary, W., & Alashaikh, A. (2021). COVID-19 open source data sets: A comprehensive survey. Applied Intelligence, 51(3), 1296–1325. https://doi.org/10.1007/s10489-020-01862-6
  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv. https://arxiv.org/abs/1409.1556v6
  • Subramanian, N., Elharrouss, O., Al-Maadeed, S., & Chowdhury, M. (2022). A review of deep learning-based detection methods for COVID-19. Computers in Biology and Medicine, 143, 105233. https://doi.org/10.1016/j.compbiomed.2022.105233
  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–9. https://doi.org/10.1109/CVPR.2015.7298594
  • Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the Inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818–2826. https://doi.org/10.1109/CVPR.2016.308
  • Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. Proceedings of the 36th International Conference on Machine Learning (ICML), 10691–10700. https://doi.org/10.48550/arxiv.1905.11946
  • Tuncer, T., Dogan, S., & Ozyurt, F. (2020). An automated residual exemplar local binary pattern and iterative reliefF based COVID-19 detection method using chest X-ray image. Chemometrics and Intelligent Laboratory Systems, 203, 104054. https://doi.org/10.1016/j.chemolab.2020.104054
  • Tuncer, T., Ozyurt, F., Dogan, S., & Subasi, A. (2021). A novel COVID-19 and pneumonia classification method based on F-transform. Chemometrics and Intelligent Laboratory Systems, 210, 104256. https://doi.org/10.1016/j.chemolab.2021.104256
  • Umair, M., Khan, M. S., Ahmed, F., Baothman, F., Alqahtani, F., Alian, M., & Ahmad, J. (2021, September). Detection of COVID-19 Using Transfer Learning and Grad-CAM Visualization on Indigenously Collected X-ray Dataset. Sensors (Basel), 21(17), 5813. https://doi.org/10.3390/S21175813
  • Bukhari, S. U. K., Bukhari, S. S. K., Syed, A., & Shah, S. S. H. (2020). The diagnostic evaluation of Convolutional Neural Network (CNN) for the assessment of chest X-ray of patients infected with COVID-19. medRxiv. https://doi.org/10.1101/2020.03.26.20044610
  • Vantaggiato, E., Paladini, E., Bougourzi, F., Distante, C., Hadid, A., & Taleb-Ahmed, A. (2021, March). COVID-19 recognition using ensemble-CNNs in two new chest X-ray Databases. Sensors, 21(5), 1742. https://doi.org/10.3390/S21051742
  • Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X., & Xu, B. (2021, August). A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19). European Radiology, 31(8), 6096. https://doi.org/10.1007/S00330-021-07715-1
  • Wang, L., Lin, Z. Q., & Wong, A. (2020). COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Scientific Reports, 10(1), 1–12. https://doi.org/10.1038/s41598-020-76550-z
  • Wang, W., Xu, Y., Gao, R., Lu, R., Han, K., Wu, G., & Tan, W. (2020, May). Detection of SARS-CoV-2 in different types of clinical specimens. JAMA, 323(18), 1843–1844. https://doi.org/10.1001/JAMA.2020.3786
  • Weiss, K., Khoshgoftaar, T. M., & Wang, D. D. (2016). A survey of transfer learning. Journal of Big Data, 3(1), 1–40. https://doi.org/10.1186/s40537-016-0043-6
  • Xu, X., Jiang, X., Ma, C., Du, P., Li, X., Lv, S., Yu, L., Ni, Q., Chen, Y., Su, J., & Lang, G. (2020). A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering, 6(10), 1122–1129. https://doi.org/10.1016/J.ENG.2020.04.010
  • Yan, L., et al. (2020). Prediction of criticality in patients with severe COVID-19 infection using three clinical features: A machine learning-based prognostic model with clinical data in Wuhan. medRxiv. https://doi.org/10.1101/2020.02.27.20028027
  • Yildirim, O., Talo, M., Ay, B., Baloglu, U. B., Aydin, G., & Acharya, U. R. (2019). Automated detection of diabetic subject using pre-trained 2D-CNN models with frequency spectrum images extracted from heart rate signals. Computers in Biology and Medicine, 113(June), 103387. https://doi.org/10.1016/j.compbiomed.2019.103387
  • Yu, X., Kang, C., Guttery, D. S., Kadry, S., Chen, Y., & Zhang, Y. D. (2021). ResNet-SCDA-50 for Breast Abnormality Classification. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 18(1), 94–102. https://doi.org/10.1109/TCBB.2020.2986544
  • Zhang, J., Xie, Y., Li, Y., Shen, C., & Xia, Y. (2020). COVID-19 screening on chest X-ray images using deep learning based anomaly detection. arXiv preprint arXiv:2003.12338.
  • Zoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2017). Learning transferable architectures for scalable image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 8697–8710. https://doi.org/10.48550/arxiv.1707.07012