Computers, Materials & Continua
DOI:10.32604/cmc.2022.026542
Article

Feature Extraction and Classification of Plant Leaf Diseases Using Deep Learning Techniques

K. Anitha1 and S. Srinivasan2,*

1Department of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Thandalam, Chennai, 602105, India
2Institute of Bio-Medical Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Thandalam, Chennai, 602105, India
*Corresponding Author: S. Srinivasan. Email: srinivasan.me.03@gmail.com
Received: 29 December 2021; Accepted: 02 March 2022

Abstract: Agriculture has long been the most significant contributor to India’s economy. Although its share is declining as the population grows, it remains the largest source of employment by a small margin. There is therefore a pressing need to accelerate progress toward competitive, productive, diverse, and sustainable agriculture. Misdiagnosis of plant diseases can lead to the incorrect application of pesticides and consequent crop damage, so early detection of infections is both critical and cost-effective for farmers. Diagnosing a disease at an early stage requires accurate segmentation of the diseased region from the leaf. However, noise in digitally captured images, together with variations in background, shape, and brightness across diseased photographs, makes effective recognition difficult. In the proposed work, Leaf smut, Bacterial blight, and Brown spot diseases are segmented and classified using diseased Apple (20), Cercospora (60), Rice (100), Grape (140), and Wheat (180) leaf images. In addition, an improved technique for segmenting the region of interest (ROI) from diseased leaves with natural backgrounds is presented. After segmentation, textural features of the segmented ROI are computed as 1st- and 2nd-order WPCA features: 1st-order features such as kurtosis, skewness, mean, and variance, and 2nd-order features such as smoothness, energy, correlation, homogeneity, contrast, and entropy. Finally, the textural features of the segmented region of interest are fed into four different classifiers, of which the Enhanced Deep Convolutional Neural Network proves the most precise, with 96.1% accuracy.

Keywords: Convolutional neural network; wavelet-based PCA features; leaf disease detection; agriculture disease remedies; bat algorithm

1  Introduction

Agronomy has a significant economic impact; agriculture employs around 75 percent of rural households. Several vegetables, fruits, and cereals are cultivated and exported in our country, so quality yields at optimum quantity are needed. Plant diseases are unavoidable due to environmental causes, necessitating the deployment of a preventative system in agriculture [1]. Each part of a plant, such as the fruit, leaf, and stem, can be attacked by common bacterial, fungal, and viral diseases. Pathogens such as Canker, Anthracnose, Alternaria, and Bacterial Spot damage the plant’s growth, and fungal disease in leaves is driven by environmental factors. Leaf disease is traditionally recognized by observing symptoms on plant leaves with the naked eye, but the complexity of disease is increasing rapidly [2]. Phytopathological issues arise from the complexity and density of crops; even professionals in agriculture and plant pathology are often unable to determine the origin of a disease, leading to wrong remedies and findings [3]. Symptoms on plant leaves help predict the etiology of a disease, and this prediction is strengthened by automating disease detection. Such an automated approach is proposed here.

Machine learning classification is used to estimate a mapping function from sample inputs to target classes/labels [4]. Both single-label and multi-label categorization problems must be addressed. In classical classification, a sample input corresponds to a single target class; in multi-label classification, a sample input is associated with multiple target labels. Multi-label classification is a growing area of machine learning, widely applied to predicting plant diseases. Several image processing approaches for disease detection are already available. [5] describes a method for identifying brinjal leaf diseases using artificial neural networks and image processing. [6] uses K-means clustering for segmentation and a neural network to classify and detect diseases in leaves: the image is segmented by color using the K-means clustering method, the texture characteristics of the infected regions are computed after segmentation, and the features of a pre-trained neural network are extracted to predict the infected ones. In [7], disease categorization is supported by feature extraction using a deep Convolutional Neural Network (CNN) for rice blast recognition; LBPH and HWT features are extracted, delivering improved accuracy with a high true positive rate. It does not, however, consider multi-label categorization, which is needed to reach multiple stages of disease classification.

2  Literature Review

Identification of diverse plant leaf, fruit, and vegetable illnesses requires early diagnosis of the underlying plant diseases. Efforts have been made to build algorithms that can consistently and swiftly segment the sick area of a diseased plant image and extract various attributes, allowing early diagnosis using various classification methodologies. The results of recent studies are being used to detect disorders and monitor parameters. Ali Hussam et al. (2017) used Principal Component Analysis (PCA) and K-Nearest Neighbor (KNN) to detect diseases on cotton plants. Prior to segmentation, a Gaussian filter was applied to remove noise from the images. Along with shape elements, a color descriptor was used as a characteristic for a variety of visualization and similarity-based retrieval tasks. Myrothecium, bacterial blight, and Alternaria were chosen as the illnesses to study [8]. Khan Muhammad Attique et al. (2018) examined strategies for identifying and classifying plant disease from digitally acquired photographs in the visible band using image processing algorithms. The selected proposals fell into three categories: recognition, severity measurement, and cataloguing; each class was then divided into groups based on the system’s basic technical solution. K-means clustering was utilised to segment the required ROI and locate sick sections of the unhealthy leaves [9]. The features gathered from the segmented ROI were then utilised to classify the data; for disease categorization, the SVM technique outperformed ANNs. Padol et al. (2016) used a Support Vector Machine (SVM) classifier to recognize grape leaf disease [10]. The data is collected as digital photographs of leaves, which are then utilized to train and test the system, and the visual quality is first improved. After the image is resized to 300 × 300 pixels, thresholding is used to extract the green color components, and noise is removed using Gaussian filtering. The diseased region is identified using K-means clustering, and color and texture features are extracted. The kind of leaf infection is then determined using a Linear Support Vector Machine (LSVM) classifier with 88.89 percent accuracy [11].

Bashish et al. (2010) established a method for classifying plant stem infections. In this image-processing-based process, the K-means approach performs segmentation [12] and sends the result to a pre-trained neural network. In a test bed, data from Jordan’s Al-Ghor area is used as input. High precision is obtained, and leaf diseases are detected automatically. The proposed neural network classifier’s statistical classification-based technique achieves 93 percent precision in detection and classification [13].

3  System Design

This proposed research project uses an Enhanced Deep Convolutional Neural Network (EDCNN) to classify multi-label leaf diseases derived from leaf photographs. The Rice Leaf Diseases and Plant Village datasets are used as inputs. A median filtering strategy is used to preprocess the input photographs. To reflect the sick zones more precisely, the pre-processed pictures are segmented using an ROI-based bat optimization technique. Wavelet decomposition-based Principal Component Analysis (WPCA) features are constructed from the ROI image region. Lastly, the EDCNN is employed to classify multi-label leaf illness.

3.1 Preprocessing Using Median Filtering

The Rice Leaf Diseases and Plant Village datasets are used as inputs. The median filtering strategy is used to pre-process the input leaf images. To improve segmentation accuracy, median filtering removes lighting, contrast, and brightness effects; as a result, noise is efficiently eliminated from photographs [13] while edges are retained. The median filter replaces each pixel with the median of the values of its adjacent pixels: the median of all surrounding pixel values is calculated, and that middle value replaces the pixel. The color space parameters are then calculated from the RGB channels of the filtered image, and Hue Saturation Value (HSV) is used to model color perception.
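As a concrete illustration of the filtering step, the sketch below applies a 3 × 3 median filter in pure Python. The paper’s implementation is in MATLAB; the window size and border handling here are assumptions for illustration.

```python
from statistics import median

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighbourhood.

    Border pixels are left unchanged for simplicity; `img` is a list of
    equal-length rows of grey-level values.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = median(window)
    return out

# A single salt-noise pixel (255) in a flat region is removed,
# while a step edge between flat regions would be preserved:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter_3x3(noisy)[1][1])  # -> 10
```

Unlike mean filtering, the median discards the outlier entirely instead of smearing it into the neighbourhood, which is why edges survive.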

3.2 Segmentation Using ROI Based Segmentation

First, images that have been cleared of noise are used as segmentation input. In this proposed study, an automatic Region of Interest (ROI)-based bat optimization method recognizes diseased patches in photographs of numerous leaves. The Bat Algorithm is a meta-heuristic optimization process built on basic facts about bat behavior: echolocation can be used to track bats and their prey/food. The model is built on the following assumptions [14,15]:

1)   Echolocation is used by all bats to sense distance and to discriminate prey/food from background obstacles.

2)   A bat b_i flies randomly with velocity v_i at position x_i, searching for food with a fixed frequency f_min, varying wavelength, and loudness A_0. The wavelength is automatically adjusted based on the pulses emitted, and the pulse emission rate r ∈ [0, 1] is altered according to the proximity of the target.

3)   Although the loudness is dynamic, it is assumed to decrease from a large (positive) value A_0 to a preset low value A_min. The position of a bat in the Plant Village and Rice Leaf Diseases datasets {d_1j, d_2j, …, d_nj} is represented by the pixels of the leaf images, and the fitness function f_i represents the ROI segmentation accuracy of the multi-label leaf disease classification that the bats seek to maximize. The virtual bat movement is determined by the velocity and position at the next iteration j, with T the maximum number of iterations, as follows:

f_i = f_min + (f_max − f_min) β    (1)

Random numbers β in the range [0, 1] are generated. The value of decision variable j for bat i at time step t is denoted x_ij(t). The spacing and movement of the bats in the search range is regulated by the frequency f_i. For decision variable j, the global optimal placement for the impacted region is designated by the variable x̂_j, which is discovered and analysed among the solutions of all m bats. Random walks are employed dynamically to improve the candidate solutions: a single solution is picked from the available alternatives, and a random walk is used to locate a new bat solution that fits the conditions.

x_new = x_old + ε Ā(t)    (2)

where ε ∈ [−1, 1] is a random number and Ā(t) is the average loudness of all bats at time step t.

Bacterial blight, Brown spot, and Leaf smut are classified in the post-processing stage using different classification algorithms. Three distinct techniques, SVM, CNN, and DCNN, were used, with DCNN proving the most efficient.

A_i(t+1) = α A_i(t)    (3)

r_i(t+1) = r_i(0) [1 − exp(−γ t)]    (4)

Finally, based on the precision of the segmentation, the disease-affected areas are separated. The technologies used in this system combine soft computing and image processing methods applied to a large number of diseased plants. The experimental results suggest that the technique can correctly categorise the condition, and the strategy establishes a theoretical basis for perceiving a sick leaf.

The typical bat method has numerous advantages, one of which is very fast convergence at an early, important stage by switching from exploration to exploitation. This makes it an effective algorithm for tasks such as classification when a quick solution is required. However, if the algorithm is allowed to jump to the exploitation step too soon, it may stagnate after the initial stage. Several approaches and procedures have already been investigated to increase solution diversity and thereby enhance performance, and these yielded superior results.
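Eqs. (1)–(4) can be assembled into a toy bat-algorithm loop. The sketch below minimises a simple sphere objective rather than the paper’s ROI-segmentation fitness; the population size, search bounds, step scale, and parameter values are illustrative assumptions.

```python
import math
import random

def bat_search(objective, dim=2, n_bats=10, iters=100,
               f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=1):
    """Toy bat-algorithm sketch following Eqs. (1)-(4): frequency draw,
    velocity/position updates toward the best-so-far solution, a local
    random walk scaled by the mean loudness, loudness decay
    A_i <- alpha * A_i, and pulse-rate growth r_i(t) = r_i(0)[1 - exp(-gamma t)].
    Minimises `objective` over [-5, 5]^dim; illustrative only."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    A = [1.0] * n_bats           # loudness A_i
    r0 = [0.5] * n_bats          # initial pulse rate r_i(0)
    best = min(x, key=objective)[:]
    for t in range(1, iters + 1):
        A_mean = sum(A) / n_bats
        for i in range(n_bats):
            beta = rng.random()
            f_i = f_min + (f_max - f_min) * beta          # Eq. (1)
            for d in range(dim):
                v[i][d] += (x[i][d] - best[d]) * f_i
                x[i][d] += v[i][d]
            r_i = r0[i] * (1 - math.exp(-gamma * t))      # Eq. (4)
            if rng.random() > r_i:                        # local walk, Eq. (2)
                x[i] = [best[d] + 0.01 * rng.gauss(0, 1) * A_mean
                        for d in range(dim)]
            if objective(x[i]) < objective(best) and rng.random() < A[i]:
                best = x[i][:]
                A[i] *= alpha                             # Eq. (3)
    return best

sphere = lambda p: sum(c * c for c in p)
best = bat_search(sphere)
print(sphere(best))  # best-so-far is monotone: never worse than the initial population
```

Note how the pulse rate r_i starts low (frequent local walks, i.e. exploration around the current best) and grows toward r_i(0), while the loudness decays only when a better solution is accepted, which is the exploration-to-exploitation switch discussed above.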


3.3 Feature Extraction

When segmentation is finished, feature extraction is performed. Plant diseases are detected using texture and shape features. The texture features of Wavelet decomposition-based Principal Component Analysis (WPCA) are calculated using correlation, energy, contrast, and homogeneity.

3.3.1 Wavelet Decomposition Based Principal Component Analysis (WPCA)

The WPCA feature extraction method describes an image in terms of the occurrence frequency of pairs of pixels with specified intensities at distance d and angular orientation θ. The angular direction is taken from four sides at 45° intervals: 0°, 45°, 90°, and 135°. These grayscale features distinguish an image from other objects; the contrast, correlation, energy, and homogeneity features are extracted.

3.3.1.1 Contrast

The contrast feature measures the degree of grey-level difference in an image; a high contrast value means a large grey-level difference. Contrast is defined over the entries p(i, j) of the WPCA matrix as:

Contrast = Σ_i Σ_j (i − j)² p(i, j)    (5)

3.3.1.2 Correlation

Correlation describes the relationship between a reference pixel and its neighbour over the image, and is written as:

Correlation = [Σ_i Σ_j i·j·p_d(i, j) − μ_x μ_y] / (σ_x σ_y)    (6)

where μ_x, μ_y are the means and σ_x, σ_y the standard deviations of the WPCA probability matrix, computed over rows x and columns y respectively.

3.3.1.3 Energy

The Energy value characterises the grey-level distribution of an image and is determined as follows:

Energy = Σ_i Σ_j p²(i, j)    (7)

3.3.1.4 Homogeneity

The homogeneity feature measures the degree of grey-level uniformity of the image: the more uniform the grey levels, the higher the homogeneity value. It is computed as:

Homogeneity = Σ_i Σ_j p(i, j) / (1 + |i − j|)    (8)
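Assuming a normalised co-occurrence matrix p(i, j) whose entries sum to 1, Eqs. (5)–(8) can be computed directly. The sketch below is a minimal pure-Python version, not the paper’s MATLAB code.

```python
def glcm_features(p):
    """Texture features of Eqs. (5)-(8) from a normalised grey-level
    co-occurrence matrix `p` (square, entries summing to 1)."""
    n = len(p)
    idx = range(n)
    mu_x = sum(i * p[i][j] for i in idx for j in idx)
    mu_y = sum(j * p[i][j] for i in idx for j in idx)
    var_x = sum((i - mu_x) ** 2 * p[i][j] for i in idx for j in idx)
    var_y = sum((j - mu_y) ** 2 * p[i][j] for i in idx for j in idx)
    contrast = sum((i - j) ** 2 * p[i][j] for i in idx for j in idx)         # Eq. (5)
    energy = sum(p[i][j] ** 2 for i in idx for j in idx)                     # Eq. (7)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in idx for j in idx)  # Eq. (8)
    sx, sy = var_x ** 0.5, var_y ** 0.5
    cov = sum(i * j * p[i][j] for i in idx for j in idx) - mu_x * mu_y
    correlation = cov / (sx * sy) if sx and sy else 0.0                      # Eq. (6)
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "correlation": correlation}

# A purely diagonal GLCM (identical pixel pairs only): zero contrast,
# maximal homogeneity, and perfect correlation.
p = [[0.5, 0.0], [0.0, 0.5]]
print(glcm_features(p))
```

The diagonal example confirms the intuition behind the definitions: with no grey-level differences, contrast is 0, homogeneity is 1, and the pixel pairs are perfectly correlated.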

3.3.1.5 Shape Features Extraction

The suggested algorithm is divided into two phases. First, the plant is identified based on its leaf characteristics: the leaf images are pre-processed and the features are extracted, followed by shape-feature-based training for leaf cataloguing. In the second stage, as indicated in Tab. 1, the diseased leaf is classified: the sick zone is segmented using k-means, features are extracted from the segmented region, and the disease is classified.

Ed(i, j) = 1 if the connectivity of b(i, j) == 2, and 0 otherwise    (9)

Th = ½ [max f(i, j) + min f(i, j)]    (10)


where,

Ed (i, j) – edge map computed over the picture pixels of the region

Th – threshold value
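Under the reading of Eq. (10) as the mid-point of the image’s grey range, thresholding can be sketched as follows; the ≥ comparison and the toy image values are assumptions for illustration.

```python
def midrange_threshold(img):
    """Eq. (10): threshold at the mid-point of the image's grey range."""
    flat = [v for row in img for v in row]
    return 0.5 * (max(flat) + min(flat))

def binarize(img, th):
    """Binary image: 1 where the pixel reaches the threshold, else 0."""
    return [[1 if v >= th else 0 for v in row] for row in img]

img = [[10, 200],
       [30, 180]]
th = midrange_threshold(img)   # (200 + 10) / 2 = 105.0
print(binarize(img, th))       # [[0, 1], [0, 1]]
```

This global threshold separates the bright and dark halves of the grey range; the segmented foreground is then passed on for feature extraction.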

The texture and shape aspects of leaf photographs are used to determine whether they are healthy or unhealthy. Textural features of 1st and 2nd order are determined, with Area (A) and Circularity (C) as 1st-order features and smoothness, energy, correlation, homogeneity, contrast, and entropy as 2nd-order texture features. These traits are used to characterize three diseased-leaf classes: Bacterial blight, Brown spot, and Leaf smut, all of which are addressed in the detection section below.

3.4 Classification Approach Using Enhanced Deep Convolutional Neural Networks (EDCNN)

For multi-label illness classification, the collected features are fed into the Enhanced Deep Convolutional Neural Network (EDCNN) algorithm. In the Enhanced CNN, a weighted mean is used to reduce the error value. Leaf image properties form the input layer of the network, the learned output forms the output layer, and the hidden layers lie in between. As previously mentioned, the network contains convolution and sub-sampling layers [16,17]. By connecting neurons only to nearby neurons in adjacent layers, CNNs exploit spatially local correlation through a local connectivity pattern. As illustrated in Fig. 2, the neurons of layer m are connected to a local subset of neurons in layer (m−1) with spatially contiguous receptive fields.


Figure 1: Flow diagram of the proposed system

Every sparse filter in the CNN algorithm is replicated across the entire image. Fig. 2 shows an identical feature map with three concealed units; identical means they share the same weights, indicated by the same colour. The gradient of the shared weights is the sum of the gradients of the shared parameters. Replication allows features to be detected in the visual field regardless of their position, and sharing weights reduces the number of free learning parameters, as shown in Fig. 3. CNNs thereby achieve better generalization on vision challenges. CNN implements the max-pooling concept using non-linear down-sampling: the input image is partitioned into non-overlapping rectangles, and each sub-region outputs its maximum value.


Figure 2: Flow of network layers connectivity to other layers


Figure 3: Sharing of weights in graphical flow of layers

3.4.1 Convolution Layer

The convolution layer is the first layer of the CNN; its structure is depicted in Fig. 4. It contains an activation function, bias terms, and a convolution mask. At a convolution layer, learnable kernels convolve the feature maps of the preceding layer, and the output feature map is formed by applying the activation function. Each output map combines several input maps.


Figure 4: Convolution layer working

M_j refers to a collection of input maps, and b is an additive bias given to each output map. The input maps are convolved with a kernel that is unique to each input-output pair: when output maps j and k are both computed over input map i, the kernels applied to map i for j and for k are distinct. In the diagram below, a 5 × 5 mask performs convolution on a 32 × 32 input feature map. WPCA-based features were used to extract texture features, while mean values of the resulting 28 × 28 matrix were used to extract colour features.
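The 28 × 28 output size follows from "valid" convolution arithmetic: an H × W input with a k × k mask yields an (H − k + 1) × (W − k + 1) map. A minimal sketch (implemented as cross-correlation, as in most CNN layers; not the paper’s implementation):

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as CNN layers use it):
    an H x W input and a k x k kernel give an (H-k+1) x (W-k+1) output."""
    H, W, k = len(img), len(img[0]), len(kernel)
    out = []
    for i in range(H - k + 1):
        row = []
        for j in range(W - k + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(k) for b in range(k)))
        out.append(row)
    return out

img = [[1.0] * 32 for _ in range(32)]            # 32 x 32 input feature map
kernel = [[1.0 / 25] * 5 for _ in range(5)]      # 5 x 5 averaging mask
out = conv2d_valid(img, kernel)
print(len(out), len(out[0]))  # -> 28 28
```

With the 5 × 5 averaging mask on a constant image, every output value is 1.0, and the spatial size shrinks from 32 × 32 to 28 × 28 exactly as stated above.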

3.4.2 Sub-Sampling Layer

Features were then extracted from the segmented areas using standard feature extraction methods, and WPCA features were used to categorise them into illness categories. Finally, texture and colour features were retrieved, and the EDCNN was utilised to generate the convolutional layers and maps for illness recognition.

The sub-sampling layer creates down-sampled versions of the input maps, as shown in Fig. 5. From N input maps, N (smaller) output maps are created.


Figure 5: Sub sampling layer working

X_j^l = f(β_j^l · down(X_j^(l−1)) + b_j^l)    (11)

down(•) denotes the sub-sampling function. Each n-by-n block of the input image is summed, producing an output image whose spatial dimensions are n times smaller; the pixels sampled out are simply discarded. Each output map has its own multiplicative β and additive bias. This proposed study recommends the best vision-based approaches for identifying and observing external illness signs. The weights are proportional to accuracy on the testing set, and the weighted mean is used to determine the best weights.
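The block-sum down-sampling of Eq. (11), omitting the activation f(•), can be sketched as follows (assuming the map dimensions are multiples of n):

```python
def subsample(x, n, beta, b):
    """Eq. (11) without the activation: sum each n x n block of the input
    map, scale by beta and add bias b, shrinking both dimensions n-fold.
    Assumes len(x) and len(x[0]) are multiples of n."""
    H, W = len(x), len(x[0])
    return [[beta * sum(x[i + a][j + c] for a in range(n) for c in range(n)) + b
             for j in range(0, W, n)]
            for i in range(0, H, n)]

x = [[1, 2],
     [3, 4]]
print(subsample(x, 2, 1.0, 0.0))  # -> [[10.0]]
```

A 2 × 2 map collapses to a single value (the block sum 1 + 2 + 3 + 4 = 10), scaled and shifted by the map’s own β and bias.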

α_k = A_k / Σ_{i=1}^{n} A_i    (12)

where A_k is the accuracy of the testing feature set for network k, and n denotes the number of participating networks.
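Eq. (12) simply normalises each network’s test accuracy by the total accuracy of all participating networks; for example (the accuracy values are hypothetical):

```python
def network_weights(accuracies):
    """Eq. (12): weight alpha_k for network k is its test accuracy A_k
    divided by the sum of accuracies of all participating networks."""
    total = sum(accuracies)
    return [a / total for a in accuracies]

# Hypothetical test accuracies for three networks:
w = network_weights([0.90, 0.85, 0.96])
print(w, sum(w))  # the weights always sum to 1
```

The most accurate network receives the largest weight in the weighted mean, which is how the EDCNN biases the combined decision toward its best-performing member.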

The training set for this proposed study is derived from multi-label leaf diseases. Various classifiers, such as the convolutional neural network (CNN), the support vector machine (SVM), and the suggested Enhanced Deep Convolutional Neural Network (EDCNN), are used to make classification decisions. Leaf photographs are thus divided into healthy and multi-label unhealthy categories such as cedar apple rust, apple scab, apple leaf, and black rot.

4  Results and Discussion

The Rice Leaf Diseases and Plant Village datasets are used to validate the newly developed diseased-leaf identification method. The Plant Village collection contains 500 photographs of healthy leaves and 500 photographs of diseased leaves such as cedar apple rust, apple scab, apple leaf, and black rot. Leaf photographs of rice, grape, wheat, apple, and Cercospora are used to classify the various illnesses. Sick leaves are also classified using the Rice Leaf Diseases dataset; its training set includes 40 photographs of each of the three classes/diseases: Brown spot, Leaf smut, and Bacterial blight. The proposed research employs MATLAB to assess the effectiveness of each classifier. In terms of sensitivity and accuracy metrics, the newly constructed Enhanced CNN is compared to the existing SVM-based and deep CNN-based classification.

The photographs of diseased and non-diseased leaves were first pre-processed. Fig. 6 shows the Cercospora leaf image used as input; the median technique was used to preprocess it, and Fig. 7 depicts the preprocessed image.


Figure 6: Input image


Figure 7: Preprocessed image

Fig. 8 depicts the contrast-enhanced image and the binary image. Segmentation was followed by the extraction of colour, shape, and texture information, which was then used for classification with the suggested approach. A combination of features was evaluated to find the most appropriate and distinguishing ones, and enhanced photographs were used to accomplish reliable diseased-leaf identification. An intent-search process was also provided, which proved useful in locating the user's intent. Performance evaluation produced the best results out of the three retrieved feature sets. The accuracy comparison is shown in Fig. 9.


Figure 8: Enhanced image


Figure 9: Accuracy comparison

4.1 Performance Analysis

The disease is classified according to whether the plant’s afflicted leaf shows Bacterial blight, Brown spot, or Leaf smut disease. Four events can occur: two pixel classifications, True Positive (TP) and True Negative (TN), and two pixel misclassifications, False Positive (FP) and False Negative (FN). If a diseased pixel is accurately identified as diseased, the event is classed as TP; if a non-diseased pixel is correctly identified as non-diseased, it is classed as TN. If the predicted pixel indicates a non-diseased pixel but the actual pixel is diseased, the event is classed as FN; if the predicted pixel is diseased but the actual pixel is non-diseased, the event is classed as FP. Using these possible events, performance metrics such as the false positive rate (FPR) and true positive rate (TPR) are calculated for all of the classification methods, as shown in the equations below.

There are four classes of grape leaf: Black rot, Black measles, Leaf blight, and Healthy. The terms “True Positive (TP)”, “True Negative (TN)”, “False Negative (FN)”, and “False Positive (FP)” apply in this context as well: a true positive is a diseased leaf correctly identified as diseased, a true negative is a healthy leaf correctly identified as healthy, a false positive is a healthy leaf incorrectly identified as diseased, and a false negative is a diseased leaf incorrectly identified as healthy.

4.1.1 Accuracy

Accuracy is a performance measure that compares the number of correct predictions to the total number of observations. It is depicted as

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (13)

Tab. 2 calculates and compares the accuracy parameter among the newly enhanced DCNN-based multi-label illness classification strategy and the current SVM- and CNN-based classification approaches. Using leaf photographs of damaged and healthy plants, deep learning was utilised to build a DCNN system for recognizing plant diseases. The system was trained on an open database of thousands of images featuring twenty-five distinct plants across fifty-eight different plant/disease combinations, including non-diseased plants. The various kinds of plant diseases are shown in Tab. 3.



The accuracy of the newly proposed Enhanced Deep Convolutional Neural Network (EDCNN)-based multi-label illness classification system is compared to that of the existing CNN- and SVM-based classification methods. Images are plotted along the x-axis and accuracy along the y-axis. This work employs a bat optimization approach based on the Region of Interest (ROI) to reflect the sick regions of leaf images more precisely; the accuracy rate improves with proper segmentation of the sick ROI region. According to the validated results, the newly developed Enhanced Deep CNN-based categorization scheme outperforms earlier systems in terms of accuracy.

4.1.2 Sensitivity (Recall)

Recall is the proportion of correctly predicted positive observations to all of the actual positive observations:

Sensitivity = TP / (TP + FN)    (14)
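Eqs. (13) and (14) follow directly from the four pixel counts; a minimal sketch with hypothetical counts:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy (Eq. 13) and sensitivity/recall (Eq. 14) from the
    TP/TN/FP/FN pixel counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    return accuracy, sensitivity

# Hypothetical counts for one classifier on one disease class:
acc, sens = confusion_metrics(tp=90, tn=85, fp=15, fn=10)
print(acc, sens)  # -> 0.875 0.9
```

Accuracy rewards both kinds of correct decisions, while sensitivity measures only how many truly diseased pixels are caught, which is why both are reported for the classifier comparison.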

Tab. 4 compares the sensitivity parameter of the newly created Enhanced CNN with the currently used methods.


Fig. 10 depicts the sensitivity performance of the suggested enhanced DCNN-based multi-label defect classification, together with a comparison against the existing SVM- and DCNN-based classification algorithms. In this proposed study, WPCA features are retrieved from segmented diseased regions, and the multiclass leaf diseases are predicted using an upgraded deep CNN based on the retrieved features. Computing the weighted-mean function improves the enhanced CNN’s performance, and the designed classification approach achieves a higher true positive rate for multiclass disease prediction. Compared to current systems, the enhanced deep CNN achieves higher sensitivity. Finally, the input image was catalogued into the previously mentioned classes using a feature-vector layer. The results show that the suggested technology outperforms existing methods for detecting plant illnesses.


Figure 10: Sensitivity comparison

5  Conclusion

The developed work incorporates ROI-based bat optimization and EDCNN, an enhanced DCNN model that improves multi-label leaf disease prediction. The proposed ROI-based bat optimization technique uses a biologically driven selection model to distinguish disease patches from leaf images more precisely, enhancing segmentation accuracy. WPCA and shape parameters are extracted from the segmented region and given to the EDCNN classifier, which conducts the multi-label classification task. The proposed EDCNN classifier makes particular use of the weighted mean to increase classification performance and accuracy. Compared to existing techniques, the proposed system achieves a high accuracy of 96 percent on 20 photographs. The results of the studies on leaf image datasets reveal that the proposed EDCNN classifier is successful and greatly outperforms other current systems in multi-label leaf disease classification.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

 1.  R. M. Prakash, G. P. Saraswathy, G. Ramalakshmi, K. H. Mangaleswari and T. Kaviya, “Detection of leaf diseases and classification using digital image processing,” in  Proc. 2017 Int. Conf. on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, no. 17558798, pp. 1–4, 2017. [Google Scholar]

 2.  S. Phadikar, J. Sil and A. K. Das, “Classification of rice leaf diseases based on morphological changes,” International Journal of Information and Electronics Engineering, vol. 2, no. 3, pp. 460–463, 2012. [Google Scholar]

 3.  S. Zhang, X. Wu, Z. You and L. Zhang, “Leaf image based cucumber disease recognition using sparse representation classification,” Computers and Electronics in Agriculture, vol. 134, pp. 135–141, 2017. [Google Scholar]

 4.  M. J, R. Venkatesan and N. Wang, “An online universal classifier for binary, multi-class and multi-label classification,” in IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, pp. 003701–003706, 2016. [Google Scholar]

 5.  S. Naikwadi and N. Amoda, “Advances in image processing for detection of plant disease,” International Journal of Application or Innovation in Engineering & Management, vol. 2, no. 11, pp. 1–14, 2013. [Google Scholar]

 6.  D. Al Bashish, M. Braik and S. Bani-Ahmad, “Detection and classification of leaf diseases using K-means-based segmentation,” Information Technology Journal, vol. 10, no. 2, pp. 267–275, 2011. [Google Scholar]

 7.  W. J. Liang, H. Zhang, G. F. Zhang and H. X. Cao, “Rice blast disease recognition using a deep convolutional neural network,” Scientific Reports, vol. 9, no. 1, pp. 2869, 2019. [Google Scholar]

 8.  H. Ali, M. I. Lali, M. Z. Nawaz, M. Sharif and B. A. Saleem, “Symptom based automated detection of citrus diseases using color histogram and textural descriptors,” Computers and Electronics in Agriculture, vol. 138, pp. 92–104, 2017. [Google Scholar]

 9.  A. Rastogi, R. Arora and S. Sharma, “Leaf disease detection and grading using computer vision technology & fuzzy logic,” in 2015 2nd Int. Conf. on Signal Processing and Integrated Networks (SPIN), Noida, India, pp. 500–505, 2015. [Google Scholar]

10. P. B. Padol and A. A. Yadav, “SVM classifier based grape leaf disease detection,” in 2016 Conf. on Advances in Signal Processing (CASP), Pune, India, pp. 175–179, 2016. [Google Scholar]

11. M. A. Khan, T. Akram, M. Sharif, M. Awais, K. Javed et al., “CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features,” Computers and Electronics in Agriculture, vol. 155, pp. 220–236, 2018. [Google Scholar]

12. D. Al Bashish, M. Braik and S. Bani-Ahmad, “A framework for detection and classification of plant leaf and stem diseases,” in 2010 Int. Conf. on Signal and Image Processing, Chennai, India, IEEE, pp. 113–118, 2010. [Google Scholar]

13. D. Maheswari and V. Radha, “Noise removal in compound image using median filter,” International Journal on Computer Science and Engineering, vol. 2, no. 4, pp. 1359–1362, 2010. [Google Scholar]

14. T. Jayabarathi, T. Raghunathan and A. H. Gandomi, “The bat algorithm, variants and some practical engineering applications: A review,” in Nature-Inspired Algorithms and Applied Optimization, United States: Springer, pp. 313–330, 2018. [Google Scholar]

15. D. Oliva, M. Elaziz and S. Hinojosa, “Multilevel thresholding for image segmentation based on metaheuristic algorithms,” in Metaheuristic Algorithms for Image Segmentation: Theory and Applications, United States: Springer, pp. 59–69, 2019. [Google Scholar]

16. S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk and D. Stefanovic, “Deep neural networks based recognition of plant diseases by leaf image classification,” Computational Intelligence and Neuroscience, vol. 2016, pp. 1–12, 2016. [Google Scholar]

17. M. Dyrmann, H. Karstoft and H. S. Midtiby, “Plant species classification using deep convolutional neural network,” Biosystems Engineering, vol. 151, pp. 72–80, 2016. [Google Scholar]

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.