Soil & Crop Sciences

POA optimized VGG16-SVM architecture for severity level classification of Ascochyta blight of chickpea

Article: 2336002 | Received 12 Jan 2024, Accepted 25 Mar 2024, Published online: 08 Apr 2024

Abstract

Chickpeas (Cicer arietinum L.) are a nutritious legume crop farmed on 17.8 million hectares in 56 countries worldwide, with an estimated annual yield of 14.78 million tonnes. Ethiopia is the leading chickpea producer on the African continent and the sixth-largest producer globally. However, Ascochyta rabiei remains a serious disease of chickpeas. If Ascochyta rabiei is not managed, its effects on chickpea output can range from partial to complete loss under favourable environmental conditions. Knowing the severity level of this disease in chickpea farmlands affects the rates of yield and quality losses. Currently, Ethiopian farmers and field pathologists use traditional procedures to estimate the severity of Ascochyta blight, which can lead to improper fungicide treatment. In this work, we created a customized version of the VGGNet model to identify the severity level of Ascochyta blight. For noise reduction, we combined Gaussian and adaptive median filters; for optimization, we employed the Pelican Optimization Algorithm (POA). The model categorizes chickpea images into five groups according to the severity of the disease: Asymptomatic, Resistant, Moderately Resistant, Susceptible, and Highly Susceptible. The study’s findings indicate that the customized VGGNet outperformed the other models, achieving an accuracy of 96%.

1. Introduction

Agriculture employed 40% and 27% of the world’s workforce in 2000 and 2021, respectively (FAOSTAT, Citation2022). Chickpeas (Cicer arietinum L.) are a nutritious legume crop cultivated on 17.8 million hectares in 56 countries worldwide, with an estimated annual yield of 14.78 million tonnes (FAOSTAT, Citation2020, Citation2021). Chickpeas are ranked third globally in terms of both consumption and importance, and they are a key source of protein in developing countries (Dangmei et al., Citation2023). Chickpea seeds are abundant in minerals and typically contain 23% protein, 64% carbohydrates, 5% fat, 6% crude fibre, and 3% ash (Kumar et al., Citation2023).

The majority of Ethiopia’s income and employment opportunities come from agriculture, which is the main driver of the country’s economy. It makes up an enormous share of the national economy, contributing 35.8% of GDP, generating substantial export earnings, and creating many jobs (Worku et al., Citation2023). Ethiopia is the leader in chickpea cultivation on the African continent and the sixth-largest producer globally (Rawal & Navarro, Citation2019). After faba and haricot beans, chickpeas are Ethiopia’s third-most valuable export legume, bringing in roughly US$61 million annually (CSA, Citation2019). Over 900,000 households cultivate it on an estimated 239,786 hectares of land. In 2018, total production reached 459,173 t, with the smaller desi type yielding an average of 2.25 t ha−1 and the lighter-coloured Kabuli type yielding an average of 1.68 t ha−1 (CSA, Citation2019).

Global chickpea yield is negatively impacted by a number of abiotic and biotic stresses (Caballo et al., Citation2019). In chickpeas, Ascochyta rabiei causes a dangerous disease that is frequently seen in damp, cool, and humid environments (Benzohra et al., Citation2012). All plant parts (seedlings, leaves, flowers, pods, stems) are susceptible to Ascochyta blight and may develop lesions, and outbreaks may result in a partial or complete loss of yield under favourable environmental conditions (Mahmood et al., Citation2019; Pastor et al., Citation2022); yield loss and quality reduction can occur if the disease is not controlled.

In the agricultural process, disease severity measurement is an important concern that needs to be addressed by all areas of agriculture in order to safeguard crops from infection. Understanding the severity of the disease is crucial for managing and protecting crops from agricultural diseases (Haque et al., Citation2022). The ratio of plant units having noticeable disease signs to the entirety of plant units is known as the crop disease severity (Shi et al., Citation2023). In chickpea farming areas, the severity of the disease affects the rates of losses in quality and yield (Hayit et al., Citation2023). Currently, Ethiopian farmers and field pathologists use traditional procedures to estimate the severity of Ascochyta blight. This manual prediction of disease severity can lead to errors, and failure to promptly detect and classify plant diseases may lead to crop destruction. A decrease in chickpea production and a failure to implement the appropriate preventative measures could arise from an inaccurate assessment of the disease’s severity in the field. In Ethiopia, Ascochyta blight has been controlled through efficient fungicide management (Amin & Melkamu, Citation2014). Therefore, the ultimate objective of this research was to determine the disease’s severity level so that the most suitable and efficient fungicide treatment can be selected.

Efficient methods for tracking and regulating Ascochyta blight under field settings are essential to minimize economic losses, as the disease affects not only the production but also the quality of chickpeas. To ensure efficient monitoring and management of the plant production process, it is necessary to determine both the type of disease and its severity. In recent decades, computer vision and artificial intelligence have become widely used in agricultural diagnosis, particularly in plant species classification, leaf disease identification, and plant disease severity estimation. This is because computer imaging technology is developing quickly and the hardware performance of related electronic equipment is continually improving (Shi et al., Citation2023). A Convolutional Neural Network (CNN) is a regularized feed-forward neural network that learns features on its own by means of filter (kernel) optimization (Venkatesan & Li, Citation2017). For image identification and other tasks involving the processing of pixel data, this form of deep learning architecture has been preferred over others (Chauhan et al., Citation2018).

The power of CNNs is further improved by transferring knowledge to specific tasks from pre-trained models such as AlexNet (Krizhevsky & Ilya Sutskever, Citation2012), VGG16 (Simonyan & Zisserman, Citation2014), InceptionNet (Szegedy et al., Citation2016), LeNet (Lecun, Citation1998), and other similar models trained on the large ImageNet dataset. Reusing previously learned model knowledge for another task is known as transfer learning, and its goal is to minimize the duration and expense of the training process (Iman et al., Citation2023; Smith & Wilson, Citation2019).

Detecting plant diseases is a crucial and challenging task in agriculture, partly because of image noise; the initial step in enhancing image quality is removing this noise. The Gaussian filter is a basic image-processing filter whose behaviour varies with the size of its kernel (Singh, Citation2016). The adaptive median filter applies spatial processing to identify the pixels in an image that have been corrupted by noise (Jubair, Citation2011).
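As an illustration, the two filters can be combined in a short pipeline. The adaptive median filter below is a minimal sketch of the standard window-growing algorithm, not the authors' exact implementation, and `scipy.ndimage.gaussian_filter` stands in for the Gaussian step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_median(img, max_size=7):
    """Minimal adaptive median filter: the window around each pixel grows
    until its median is not an impulse, then impulse pixels are replaced."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    pad = max_size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            size = 3
            while size <= max_size:
                k = size // 2
                win = padded[i + pad - k:i + pad + k + 1,
                             j + pad - k:j + pad + k + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:                 # median is not an impulse
                    if not (zmin < img[i, j] < zmax):  # pixel is an impulse
                        out[i, j] = zmed
                    break
                size += 2                              # grow the window and retry
            else:
                out[i, j] = zmed                       # max window reached
    return out

def denoise(img, sigma=1.0):
    """Combined pipeline: adaptive median for impulse noise,
    then a Gaussian filter for the remaining noise."""
    return gaussian_filter(adaptive_median(img), sigma=sigma)
```

A salt pixel in an otherwise uniform patch is replaced by the local median, while the Gaussian pass smooths residual noise.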

Zhang and colleagues (Zhang et al., Citation2023) proposed a model that grades cotton verticillium wilt severity into five categories using a support vector machine (SVM) combined with genetic algorithm (GA), grid search (GS), particle swarm optimization (PSO), and grey wolf optimizer (GWO) optimization methods. They employed multiplicative scatter correction (MSC) and MSC-continuous wavelet analysis techniques. In their experiments, the MSC-db3(23)-GWO-SVM model performed best, with classification rates of 100%, 88%, 84%, 84%, and 100% for classes 1 to 5, respectively.

Kundu and colleagues (Kundu et al., Citation2022) suggested a deep learning algorithm for maize crop disease detection, severity prediction, and crop loss estimation. Grad-CAM is used for feature visualization after the region of interest is extracted using the K-Means clustering approach. The model reports a best accuracy of 98.50%.

The authors (Hayit et al., Citation2023) examined the suitability of deep learning-based pre-trained models, which may aid in identifying the kind of infection at an early stage of Fusarium wilt of chickpea. These models classify images of chickpea plants into five classes according to the severity of the disease. Based on the study’s findings, DenseNet201 outperformed the other models with an accuracy of 90%.

The authors (Lamba et al., Citation2023) proposed a feasible approach for detecting the severity of bacterial blight, blast, and leaf smut infections in rice crops. The model has a 98.43% accuracy rate and a 41.25% loss rate in predicting the type and intensity of paddy disease.

Even though research using deep learning approaches to classify the severity of various plant diseases has made great improvements, in Ethiopia, Ascochyta blight is still a major chickpea disease that degrades its production. As such, more research is necessary to determine its severity levels, just as it is with other plants. This work uses a POA-optimized deep learning model to identify the severity level of Ascochyta blight in chickpeas.

2. Materials and methods

2.1. Image acquisition

The dataset was constructed entirely within the objectives of the study through image acquisition, labelling, and data augmentation. A Tecno Spark 10 mobile camera with an HD 1280x720 pixel resolution was used to acquire the images in JPG (Joint Photographic Experts Group) format. With the assistance of experts, we collected images of all parts of diseased chickpea crops from four Ethiopian chickpea harvesting areas: Ebinat, Maksegnit, Belesa, and Dembiya. Table 1 shows how many images were taken for each class both before and after augmentation.

Table 1. Number of samples for each class before and after augmentation.

2.2. Labeling

Four highly experienced experts in plant pathology rated each image of a chickpea plant on a 1-to-9 scale: 1 = no observable symptoms; 2 = small, prominent lesions on the apical stem; 3 = lesions up to 5 mm in size and slight drooping of the apical stem; 4 = lesions obvious on all plant parts and clear drooping of the apical stem; 5 = lesions on all plant parts, defoliation initiated, slight-to-moderate breaking and drying of branches; 6 = lesions as in 5, defoliation and broken, dry branches common, some plants killed; 7 = lesions as in 5, defoliation and broken, dry branches very common, up to 25% of plants killed; 8 = symptoms as in 7 but up to 50% of plants killed; 9 = symptoms as in 7 but up to 100% of plants killed. In this investigation, Ascochyta blight was classified into 5 primary severity categories based on how each level responded to infection: 1 indicates Asymptomatic (A), 1.1–3 indicates Resistant (R), 3.1–5 indicates Moderately Resistant (MR), 5.1–7 indicates Susceptible (S), and 7.1–9 indicates Highly Susceptible (HS) (Khan et al., Citation2020; Pande et al., Citation2011).
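The mapping from the 1–9 rating to the five severity classes can be stated directly; this helper is a sketch for illustration, not part of the authors' pipeline.

```python
def severity_class(score):
    """Map a 1-9 Ascochyta blight rating to one of the five severity
    classes used in this study (Khan et al., 2020; Pande et al., 2011)."""
    if score <= 1:
        return "A"    # Asymptomatic
    if score <= 3:
        return "R"    # Resistant
    if score <= 5:
        return "MR"   # Moderately Resistant
    if score <= 7:
        return "S"    # Susceptible
    return "HS"       # Highly Susceptible
```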

2.3. Image resizing

For further processing, the images in the dataset should be of a consistent size, so each original image must be scaled before being fed into the model. Accordingly, all images were resized to 64x64 pixels.
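In practice a library resize (e.g. from OpenCV or Pillow) would be used; the nearest-neighbour sketch below only illustrates the operation of mapping an arbitrary image to the fixed 64x64 input size.

```python
import numpy as np

def resize_nearest(img, size=(64, 64)):
    """Nearest-neighbour resize to a fixed input size
    (a stand-in for the library resize used in practice)."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    return img[rows[:, None], cols]
```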

2.4. Noise filtering

The acquired images are most likely tainted by noise, so eliminating this noise has an impact on the suggested model’s classification accuracy. In order to determine which filtering technique would best enhance the proposed model’s classification performance, we conducted a comparison between the Gaussian filter, the adaptive median filter, and a combination of the two.

2.5. Image augmentation

We employed image augmentation techniques such as rotation (45°, 90°, 180°), random flips along the vertical and horizontal axes, scaling (0.90, 0.75), and random shifts along the vertical and horizontal axes. To prevent the deep learning algorithm from overfitting, which occurs when data are imbalanced or scarce, the amount of data was raised while ensuring a proper balance between the classes. As illustrated in Table 1, the sample size of our dataset was increased from 1564 to 7820 by applying the data augmentation technique.
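The fivefold increase (1564 → 7820) is consistent with keeping each original image plus four augmented variants. The numpy sketch below illustrates a deterministic subset of the listed transforms (rotations and flips); in practice shifts and scaling would also be drawn randomly.

```python
import numpy as np

def augment_variants(img):
    """Four variants of one image: 90 and 180 degree rotations plus
    vertical and horizontal flips (a subset of the transforms above)."""
    return [
        np.rot90(img, 1),   # 90 degree rotation
        np.rot90(img, 2),   # 180 degree rotation
        np.flipud(img),     # vertical flip
        np.fliplr(img),     # horizontal flip
    ]

def augment_dataset(images):
    """Each original contributes itself plus four variants: a 5x dataset."""
    out = []
    for img in images:
        out.append(img)
        out.extend(augment_variants(img))
    return out
```

With 1564 originals this yields 1564 x 5 = 7820 images, matching Table 1.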

2.6. The proposed approach

In this study, we used VGG16 (Simonyan & Zisserman, Citation2014) as the base model. It contains 16 trainable layers arranged in 5 convolutional blocks and 3 fully connected layers. It achieves 92.7% top-5 test accuracy on the 14 million images of the ImageNet dataset, which spans 1000 classes. We chose VGGNet due to its strong performance in the 2014 ImageNet challenge as well as in many plant disease classification tasks. The customized version of the VGG16 network was applied to the dataset to solve the challenge of determining the severity of Ascochyta blight of chickpea (ABC). Images are resized and augmented. The dataset was split into three groups both before and after augmentation: the training set made up 70% of the dataset, the validation set 10%, and the test set 20%, as presented in Table 2. The training and validation steps, which fit the model, are completed before the testing phase. After training and validation, the testing phase assesses how well the learned model works on the new image dataset.

Table 2. Number of images used for training, validation, and testing before and after augmentation.
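The 70/10/20 split can be produced with a shuffled index split; the helper below is a sketch (the seed and ordering are arbitrary assumptions, not the authors' procedure).

```python
import numpy as np

def split_indices(n, train=0.7, val=0.1, seed=0):
    """Shuffle n sample indices and split them 70/10/20 into
    training, validation, and test index sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = round(n * train)
    n_val = round(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For n = 7820 augmented images this yields 5474 training, 782 validation, and 1564 test samples.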

The architecture of the optimized VGGNet model is used to extract the most significant features from the images. The extracted features are fed to Softmax and multiclass Support Vector Machine (SVM) (Williams, Citation2003) classifiers to assign each input chickpea image to one of the five severity levels of Ascochyta blight and to determine which classifier performs best. Within the test set, the features from the VGGNet model are fed to the previously trained Softmax and SVM models. Using 7820 images of chickpea plants, the customized model, which includes ten convolution layers, three batch normalizations, four MaxPoolings, four dropout layers, and two fully connected layers, was trained. Following the methodology of Vinothini et al. (Citation2023), we employed the POA optimization algorithm. All epochs were shuffled, and hyperparameters were adjusted until the model reached its optimal and robust state. The model was built at the end of the training process and performs best with a momentum of 0.9, a learning rate of 0.001, 125 epochs, and a batch size of 32. Figure 1 shows the main steps in the model-creation process.

Figure 1. Architecture of the proposed model.

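The final classification stage can be sketched with scikit-learn: a multiclass SVM fitted on feature vectors. Here random, class-shifted vectors stand in for the features the customized VGG16 would extract, so only the SVM head, not the network itself, is shown; the feature dimension and sample counts are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

CLASSES = ["A", "R", "MR", "S", "HS"]

# Stand-in features: in the full pipeline each 512-dim vector would come
# from the last fully connected layer of the customized VGG16.
rng = np.random.default_rng(0)
y_train = np.repeat(np.arange(5), 40)                  # 40 samples per class
X_train = rng.normal(size=(200, 512)) + y_train[:, None]

svm_head = SVC(kernel="rbf", decision_function_shape="ovr")  # multiclass SVM
svm_head.fit(X_train, y_train)

label = CLASSES[int(svm_head.predict(X_train[:1])[0])]
```

In the reported pipeline the Softmax head is built the same way, and the two classifiers are compared on the held-out features.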

The model was trained, validated, and tested in the Anaconda environment using the Python programming language, the free TensorFlow library as the backend, and the Keras API. The Python code was written in Jupyter Notebook. The Google Colab environment was utilized to develop the suggested model. We tested the generated model on a 64-bit Windows 10 HP Pavilion laptop with an Intel(R) Core(TM) i3-8145U 2.30 GHz CPU, 8 GB of RAM, and a 700 GB hard drive. For training, we used a 12 GB NVIDIA GeForce RTX 3090 Ti GPU.

2.7. Performance metrics

Metrics like accuracy, precision, recall, and the F1-Score, shown in Table 3, were used to assess the CNN models’ performance. Precision, recall, and F1-Score offer valuable perspectives on different facets of the model’s performance and help in evaluating its effectiveness in precisely identifying plant diseases (Belay et al., Citation2022; Hayit et al., Citation2023; Sanderson, Citation2010).

Table 3. Performance metrics.
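The metrics in Table 3 can be computed per class directly from a confusion matrix; the helper below is a generic sketch of the standard definitions, not the authors' evaluation code.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Per-class precision, recall, and F1 from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # TP / (TP + FP)
    recall = tp / cm.sum(axis=1)      # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return precision, recall, f1, accuracy
```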

2.8. Results

Under the specified experimental settings, the suggested model achieved 96% accuracy in the Ascochyta blight severity classification.

As shown in Table 4, we carried out a total of eight experiments to obtain these findings. The experiments cover three phases: noise filtering, optimization techniques, and augmentation. Before augmentation, the accuracy of the pre-trained model was 82%, that of the end-to-end customized VGGNet model was 83%, and that of VGGNet + SVM was 84.5%. The customized VGGNet model with SVM reached 85% accuracy after augmentation, which increased to 88% after applying the POA optimization procedure. Next, we applied Gaussian and adaptive median noise filtering strategies, separately and in combination, to the optimized customized VGGNet model with SVM. For the Gaussian, adaptive median, and combined (Gaussian-adaptive median) filtering strategies, the model recorded accuracies of 92%, 95%, and 96%, respectively.

Table 4. Experimental results.

Figure 2 displays both the training and validation accuracy. Throughout training, both training and validation accuracy increased. As illustrated in Figure 3, the training and validation losses show that our model is effectively learning from the data and making accurate predictions on unseen data as well.

Figure 2. Training and validation accuracy.


Figure 3. Training and validation loss.


As depicted in Figure 4, out of a total of 1250 images of A, 1190 were accurately identified as A, while only 60 were misclassified: 5 as R, 15 as MR, and 40 as S. Out of 1450 images of R, 1404 were accurately identified, while only 46 were misclassified: 28 as MR and 18 as S. Out of 1620 images of MR, 1530 were accurately identified, while 90 were misclassified: 20 as A, 46 as R, and 24 as S. Similarly, out of 1547 images of S, 1500 were accurately identified, while only 47 were misclassified: 15 as A and 32 as MR. Finally, out of 2000 images of HS, 1930 were accurately identified, while only 70 were misclassified: 10 as A, 16 as R, 34 as MR, and 10 as S. We conclude that the model consistently classifies almost all images into the proper class, even if some confusions remain.

Figure 4. Confusion matrix of the proposed model.

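The per-class rates reported for Table 5 follow from the confusion-matrix counts above. The sketch below re-enters those counts (rows = true class, columns = predicted class, with cells not mentioned in the text set to zero) and recomputes the per-class recall.

```python
import numpy as np

classes = ["A", "R", "MR", "S", "HS"]
# Counts reported for Figure 4; unreported cells are assumed zero.
cm = np.array([
    [1190,    5,   15,   40,    0],   # A  (1250 total)
    [   0, 1404,   28,   18,    0],   # R  (1450 total)
    [  20,   46, 1530,   24,    0],   # MR (1620 total)
    [  15,    0,   32, 1500,    0],   # S  (1547 total)
    [  10,   16,   34,   10, 1930],   # HS (2000 total)
])
recall = np.diag(cm) / cm.sum(axis=1)
# roughly [0.95, 0.97, 0.94, 0.97, 0.96], matching the percentages in the text
```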

Table 5 shows that 95% of A images were correctly identified, with 5% misclassified into the remaining classes. Likewise, 97% of R images were correctly identified (3% misclassified), 94% of MR images (6% misclassified), 97% of S images (3% misclassified), and 96% of HS images (4% misclassified). As a result, the classifier model correctly classified approximately 96% of the images.

Table 5. The results of precision, recall, and f1 score of the proposed model for each class.

2.9. Discussion

Chickpea is highly susceptible to Ascochyta blight, with severe impacts on yields. Pathogenesis and plant defense mechanisms against the pathogen remain poorly understood despite molecular and pathological studies, due to its high variability (Foresto et al., Citation2023). The proposed model produced significant results with the integration of the SVM classifier, the POA optimization algorithm, and the Gaussian-adaptive median noise filtering technique, as demonstrated by the experimental results in Table 4. Multiclass SVMs can differentiate Ascochyta blight severity levels that show minimal variation when the model’s features are supplied to the Softmax and SVM classifiers. Because of this, the classification accuracy of the recommended model with SVM remained greater than that with Softmax, as presented in Table 4. Our model also outperforms other related work on chickpea disease classification, as presented in Table 6. Since the dataset was gathered from agricultural fields and is therefore affected by noise, utilizing noise filtering techniques improves the model’s performance.

Table 6. Summary of related studies on chickpea plant diseases classification.

The POA optimization process also contributes to improved accuracy by fitting the model with optimal hyperparameter values. The model was trained using images of resolutions 28x28, 32x32, and 64x64. Although accuracy improved with larger image sizes, hardware limitations capped the maximum image resolution at 64x64 before feeding it into the model. After attempting many epoch settings, we found that 125 epochs yielded the best results.

The final goal of this research was to create a model that could be used in the real world by pathologists or farmers. The model would be integrated with hardware components and would classify Ascochyta blight images according to severity. In this way, there is a chance that this study will increase chickpea production.

2.10. Conclusion

There are a number of computer vision methods for classifying and detecting plant diseases, but research in this specific area is still lacking. This work investigated the applicability of a modified VGGNet model for a five-way classification task on a newly gathered dataset in order to assess its potential for determining the severity levels of Ascochyta blight of chickpea. The proposed customized VGGNet model achieved an approximate average test accuracy of 96%, based on the results of the conducted experiments. The success of the suggested model was supported not only by the average test accuracy but also by criteria like the precision, recall, and F1-Score presented in Table 5. The experiments showed that the model’s performance was enhanced by the combination of the POA optimization method and the Gaussian with adaptive median filter. We propose that Ascochyta blight of chickpea can be detected and classified according to severity using the suggested customized VGGNet CNN model with the POA optimization algorithm and SVM classifier. For future work, increasing the image size and the number of images in the dataset may improve the accuracy and robustness of the model.

Authors’ contributions

Melaku Bitew Haile and Abebech Jenber Belay were responsible for the Study conception and design, data collection, analysis and interpretation of results, and draft manuscript preparation. Melaku Bitew Haile was the supervisor of the research. All authors reviewed the results and approved the final version of the manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The authors declare that the data can be submitted at any time at the publisher’s request. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Additional information

Funding

The authors received no funding for this research.

Notes on contributors

Melaku Bitew Haile

Mr. Melaku Bitew Haile received the B.Sc. degree in July 2017 and the M.Sc. degree in July 2021 in Information Technology from the University of Gondar, Gondar, Ethiopia. His research interests include cyber security, deep learning, image processing, machine learning, and artificial intelligence. Mr. Melaku serves as a reviewer for an international journal. His research has been published in major international journals. Presently, Mr. Melaku works at the University of Gondar in the department of Information Technology.

Abebech Jenber Belay

Abebech Jenber Belay received the B.Sc. degree in June 2016 from Debre Markos University, Ethiopia, in Information Technology and the M.Sc. degree in January 2021 in Information Technology from the University of Gondar, Ethiopia. Her research interests include data science, deep learning, image processing, machine learning, cyber security, and artificial intelligence. Currently, she is a lecturer at the University of Gondar in the department of Information Technology.

References

  • Amin, M., & Melkamu, F. (2014). Management of Ascochyta blight (Ascochyta rabiei) in Chickpea Using a New Fungicide. Research in Plant Sciences, 2(1), 1–10.
  • Belay, A. J., Salau, A. O., Ashagrie, M., & Haile, M. B. (2022). Development of a chickpea disease detection and classification model using deep learning. Informatics in Medicine Unlocked, 31, 100970. https://doi.org/10.1016/j.imu.2022.100970
  • Benzohra, I. E., Bendahmane, B. S., Labdi, M., & Benkada, M. Y. (2012). Determination of pathotypes and physiological races in Ascochyta rabiei, the agent of Ascochyta blight in chickpea (Cicer arietinum L.) in Algeria. African Journal of Agricultural Research, 7(7), 1214–1219. https://doi.org/10.5897/AJAR11.1810
  • Caballo, C., Castro, P., Gil, J., Millan, T., Rubio, J., & Die, J. V. (2019). Candidate genes expression profiling during wilting in chickpea caused by Fusarium oxysporum f. Sp. Ciceris race 5. PloS One, 14(10), e0224212. https://doi.org/10.1371/journal.pone.0224212
  • Chauhan, R., Ghanshala, K. K., & Joshi, R. (2018). Convolutional Neural Network (CNN) for Image Detection and Recognition [Paper presentation].2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC). https://www.academia.edu/44667910/Convolutional_Neural_Network_CNN_for_Image_Detection_and_Recognition https://doi.org/10.1109/ICSCCC.2018.8703316
  • CSA. (2019). Agricultural sample survey, Area and production of major Crops, private holdings for 2011/12 mehere season, vol I, pp. 1–54.
  • Dangmei, G., Singh, V. K., & Sharma, A. (2023). Effect of organic source of nutrients on growth, yield and quality of chickpea Effect of organic source of nutrients on growth, yield and quality of chickpea (Cicer arietinum L.). April 2023.
  • FAO. (2020). World food and agriculture - statistical yearbook, 1–366. Rome. https://doi.org/10.4060/cb1329en
  • FAO. (2021). World food and agriculture - statistical yearbook 2021, 1–368.  Rome. https://doi.org/10.4060/cb4477en
  • FAO. (2022). World food and agriculture – statistical yearbook, 1–382. Rome. https://doi.org/10.4060/cc2211en
  • Foresto, E., Carezzano, M. E., Giordano, W., & Bogino, P. (2023). Ascochyta blight in Chickpea: An update. Journal of Fungi (Basel, Switzerland), 9(2), 203. https://doi.org/10.3390/jof9020203
  • Haque, M. A., Marwaha, S., Arora, A., Deb, C. K., Misra, T., Nigam, S., & Hooda, K. S. (2022). A lightweight convolutional neural network for recognition of severity stages of maydis leaf blight disease of maize. Frontiers in Plant Science, 13, 1077568. https://doi.org/10.3389/fpls.2022.1077568
  • Hayit, T., Endes, A., & Hayit, F. (2023). The severity level classification of Fusarium wilt of chickpea by pre ‑ trained deep learning models. Journal of Plant Pathology, 106(1), 93–105. https://doi.org/10.1007/s42161-023-01520-z
  • Iman, M., Arabnia, H. R., & Rasheed, K. (2023). A review of deep transfer learning and recent advancements. Technologies, 11(2), 40. https://doi.org/10.3390/technologies11020040
  • Jubair. (2011). An enhanced decision based adaptive median filtering technique to remove salt and pepper noise in digital images. Proceedings of the 14th International Conference on Computer and Information Technology (ICCIT 2011), 22–24.
  • Khan, M. A., Khan, M. A., Ahmed, F., Mittal, M., Goyal, L. M., Jude Hemanth, D., & Satapathy, S. C. (2020). Gastrointestinal diseases segmentation and classification based on duo-deep architectures. Pattern Recognition Letters, 131, 193–204. https://doi.org/10.1016/j.patrec.2019.12.024
  • Krizhevsky, A., & Ilya Sutskever, G. E. H. (2012). Imagenet classifcation with deep convolutional neural networks. Advances in Neural Information Processing Systems. Handbook of Approximation Algorithms and Metaheuristics, 25, 1–9.
  • Kumar, S., Bhambri, M. C., Porte, S. S., & Saxena, R. R. (2023). Evaluation of chickpea (Cicer arietinum L.). Cultivars under Organic Production System, 12(9), 603–607.
  • Kundu, N., Rani, G., Singh, V., Gupta, K., Chandra, S., Vocaturo, E., & Zumpano, E. (2022). Disease detection, severity prediction, and crop loss estimation in MaizeCrop using deep learning. Artificial Intelligence in Agriculture, 6, 276–291. https://doi.org/10.1016/j.aiia.2022.11.002
  • Lamba, S., Kukreja, V., Rashid, J., Gadekallu, T. R., Kim, J., Baliyan, A., Gupta, D., & Saini, S. (2023). A novel fine-tuned deep-learning-based multi-class classifier for severity of paddy leaf diseases. Front. Plant Sci. 14:123406, 01–18. https://doi.org/10.3389/fpls.2023.1234067
  • Lecun. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.47. https://doi.org/10.1080/0144929X.2020.1856928
  • Mahmood, M. T., Ahmad, M., & Ali, I. (2019). Chickpea blight : former efforts on pathogenicity, resistant germplasm and disease management 2 regional agricultural research institute, Bahawalpur. Pakistan Gomal University Journal of Research, 35(1), 1–10.
  • Pande, S., Sharma, M., Gaur, P. M., Tripathi, S., Kaur, L., Basandrai, A., Khan, T., Gowda, C. L. L., & Siddique, K. H. M. (2011). Development of screening techniques and identification of new sources of resistance to Ascochyta blight disease of chickpea. Australasian Plant Pathology, 40(2), 149–156. https://doi.org/10.1007/S13313-010-0024-8/TABLES/2
  • Pastor, S., Crociara, C., Valetti, L., Peña Malavera, A., Fekete, A., Allende, M. J., & Carreras, J. (2022). Screening of chickpea germplasm for Ascochyta blight resistance under controlled environment. Euphytica, 218, 12, 1–11. https://doi.org/10.1007/s10681-021-02963-0
  • Rawal, V., & Navarro, D. K. (2019). The Global Economy of Pulses. Rome, FAO. 1–190. http://www.fao.org/3/i7108en/I7108EN.pdf
  • Sanderson, M. (2010). Christopher D. Manning, prabhakar raghavan, hinrich schütze, introduction to information retrieval, cambridge university press. Natural Language Engineering, 16(1), 100–103. https://doi.org/10.1017/S1351324909005129
  • Shi, T., Liu, Y., Zheng, X., Hu, K., Huang, H., Liu, H., & Huang, H. (2023). Recent advances in plant disease severity assessment using convolutional neural networks. Scientific Reports, 13(1), 2336. https://doi.org/10.1038/s41598-023-29230-7
  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 1–14.
  • Singh, A. (2016). Comparative analysis of gaussian filter with wavelet denoising for various noises present in images. Indian Journal of Science and Technology, 9(1), 1–8. https://doi.org/10.17485/ijst/2016/v9i47/106843
  • Smith, R. A., & Wilson, M. S. (2019). Transfer learning in deep neural networks: A review. Medical Image Analysis, 30, 1–18.
  • Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the Inception Architecture for Computer Vision [Paper presentation]. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016-Decem, 2818–2826. https://doi.org/10.1109/CVPR.2016.308
  • Venkatesan, R., & Li, B. (2017). Convolutional neural networks in visual computing : A concise guide (1st ed.). CRC Press. https://doi.org/10.4324/9781315154282
  • Vinothini, R., Niranjana, G., & Yakub, F. (2023). A novel classification model using optimal long short ‑ term memory for classification of COVID ‑ 19 from CT images. Journal of Digital Imaging, 36(6), 2480–2493. https://doi.org/10.1007/s10278-023-00852-7
  • Williams, C. K. I. (2003). Learning with kernels: Support vector machines, regularization, optimization, and beyond. Journal of the American Statistical Association, 98(462), 489–489. https://doi.org/10.1198/jasa.2003.s269
  • Worku, C., Adugna, M., Mussa, E. C., & Mengstu, M. (2023). Analysis market outlet choice of smallholder chickpea producers in Northwest part of Ethiopia. Cogent Food & Agriculture, 9(2), 1–12. https://doi.org/10.1080/23311932.2023.2285226
  • Zhang, N., Zhang, X., Shang, P., Ma, R., Yuan, X., Li, L., & Bai, T. (2023). Detection of cotton verticillium wilt disease severity based on hyperspectrum and GWO-SVM.