Computer Systems Science & Engineering
DOI:10.32604/csse.2022.023016
Article

CNN Based Automated Weed Detection System Using UAV Imagery

Mohd Anul Haq*

Department of Computer Science, College of Computer and Information Sciences, Majmaah University, AL-Majmaah, 11952, Saudi Arabia
*Corresponding Author: Mohd Anul Haq. Email: m.anul@mu.edu.sa
Received: 25 August 2021; Accepted: 27 September 2021

Abstract: Weeds in crops are a persistent problem for farmers. Machine Learning (ML), Deep Learning (DL), and Unmanned Aerial Vehicles (UAVs) are advanced technologies that can reduce the use of pesticides while protecting the environment and ensuring the safety of crops. Deep Learning-based crop and weed identification systems can save money while also reducing environmental stress. The accuracy of ML/DL models has been limited in the past by a variety of factors, including the selection of an efficient wavelength, the spatial resolution, and the selection and tuning of hyperparameters. The purpose of the current research is to develop a new automated weed detection system that uses Convolutional Neural Network (CNN) classification on a real dataset of 400 UAV images with 15336 segments. Snapshots were used to choose the optimal parameters for the proposed CNN-LVQ model. The soil class achieved a user accuracy of 100% with the proposed CNN-LVQ model, followed by soybean (99.79%), grass (98.58%), and broadleaf (98.32%). After rigorous hyperparameter tuning, the developed CNN-LVQ model showed an overall accuracy of 99.44% for weed detection, significantly higher than previously reported studies.

Keywords: CNN; weed; detection; classification; UAV

1  Introduction

Sustainable agriculture is one of the priority areas of the Kingdom of Saudi Arabia. Several crops, such as sorghum, maize, and coffee, have been cultivated there for more than 5000 years [1]. The utilization of modern technologies (satellite remote sensing, UAVs, etc.) and equipment (such as drip irrigation and IoT) has started only in recent times. The issues and challenges of weeds in cultivation are therefore equally historical, although the magnitude of weed problems today is significantly higher than in previous times.

An agricultural weed is unwanted vegetation that competes with the intended crops for space, nutrients, and light, decreasing the quality and quantity of agricultural production and increasing the costs of limiting its spread. Weeds are generally not crop-specific; like crops, they may be annual, biennial, or perennial. Annual weeds, which typically sprout from seed, grow for a single year, and then die, are more manageable to suppress than perennials.

Across the globe, weed suppression is one of the significant issues that farmers face at present. In an arid or semi-arid region like Saudi Arabia, the encroachment of native annual and perennial weeds such as Datura innoxia, Cynodon dactylon, and Cenchrus ciliaris is also common. The best way to get rid of a weed is early detection and suppression, especially before it flowers. Traditionally, weed detection is carried out manually. Monitoring invasive weeds requires efficient detection in near real-time and integrated assessment to allow proper investigation and effort to suppress them.

The availability of efficient UAVs and advances in computer vision and artificial intelligence make it possible to detect weeds over broader coverage with less time and effort based on UAV imagery. Weed classification gained attention with the advancement of ML and DL techniques, such as the ML-based Support Vector Machine (SVM), Random Forest (RF), and Artificial Neural Network (ANN), and the DL-based Convolutional Neural Network (CNN). ML has been utilized for weed detection in several studies. Reference [2] compared ML-based feature-selection methods for classifying an area infested by weed and concluded that the Relief method outperformed the other methods based on f-score values; [2] also applied SVM, DT, and RF models to map parthenium weed, achieving accuracies between 70.3% and 82.3%. The significant limitations of that study were the low spatial resolution of the images used (10 m) and the static data-splitting ratios of 1:3 and 3:1. Another study [3] used SVM with a radial basis function (RBF) kernel to classify broadleaf weeds using UAV images and achieved an overall accuracy of 93%; its limitation was the imbalanced dataset used in the investigation, with 254 images for mid-tillering but only six images for end-tillering and stem extension. A study [4] applied SVM for weed classification using a ground-based camera and achieved an accuracy of 97.3%. Images taken from a ground-based camera have limited coverage, and the camera height cannot be fixed equally for taller and shorter weed types; the method of [4] also suffers from segmentation errors due to plant holes and noisy backgrounds. The automatic feature extraction of DL-based models such as CNN, which is otherwise difficult to define manually, makes them a popular choice for classification [5,6]. CNN has recently been used in several applications: [7] applied a single-shot detector and Faster R-CNN to classify weeds in UAV imagery and achieved 84% and 85% accuracy, respectively; however, the models developed in [7] require extensive training, limiting them to near-real-time rather than real-time use. [8] used the pre-trained CNN models ResNet-50, Xception, and VGG16 for weed identification in potato, sunflower, and maize crops. [9] used a modified U-Net for weed density mapping and calculation using UAV images. [10] used CNN for weed mapping using Phantom 4 UAV images and obtained an accuracy of 93.50%; that approach is limited by the large amount of labeled training data it requires, so unsupervised learning was applied, which added interpretation subjectivity. Another limitation of [10] was that the crops and weeds existed in different fields, so the method was not tested on weeds associated with crops.

A fully convolutional network has limited accuracy for detailed segmentation [11]. Reference [12] used CNN and other ML models (RF, SVM, AdaBoost) to detect weeds in a soybean crop using DJI Phantom 3 data; CNN demonstrated a high accuracy of 99.50% for weed classification at the cost of a high training time of 30 minutes on an unbalanced dataset. Previous studies thus leave gaps in handling unbalanced data, parameter selection, data splitting, the time complexity of models for real-time systems, and the quality of images used for classification. Moreover, plants and weeds mixed in an image make it complex against its background, so it is essential to study UAV-based images for weed classification in the associated environment of crops. In this study, the objective was to develop and utilize a CNN-LVQ model to identify broadleaf weeds in soybean and classify them as grass or broadleaf. We developed a CNN model assimilated with the Learning Vector Quantization (LVQ) algorithm for classification, based on its merits of adaptation and topology [13].

The present investigation contributes a novel CNN-LVQ model that detects weeds in complex crop scenes such as soybean crop images and discriminates between grass and broadleaf weeds. The dataset used in the proposed study was composed of 400 UAV images of soybean, soil, grass, and broadleaf. The novelty of the present investigation lies in the rigorous hyperparameter optimization, extending the utility of LVQ for better training, and the utilization of the rich UAV weed dataset.

The rest of the paper is organized as follows: Section 2 introduces the CNN and LVQ components. Section 3 describes the dataset, followed by the methodology used to develop the CNN-LVQ model in Section 4. Section 5 describes the experimental setup, including the hardware and software resources and the performance metrics of the proposed CNN-LVQ model developed in the present investigation. Section 6 presents the results obtained in the present work, including a comparison with other studies and the limitations of the current study. Finally, Section 7 draws the main conclusions about the current investigation.

2  Convolutional Neural Network

DL is a variation of ML-based algorithms consisting of sequential layers. DL's primary advantage is that it selects features automatically, unlike ML methods where feature extraction must be done manually. The CNN is a type of DL model that extracts suitable features from the input data, leading to the identification and classification of elements/pixels with little pre-processing. CNN thrives especially in image analysis, where its multiple layers of architecture can easily extract features. A CNN model generally uses four main layers: a convolutional layer, an activation function layer, a pooling layer, and finally a fully connected layer (FCN) used for classification.

2.1 Convolution Layer

In the convolution layer, an array operation is performed on the input data: a filter is shifted step by step across all elements, and at each step the neighboring elements under the filter are multiplied element-wise by the filter values and the results are summed. These weighted sums become the output of the convolution layer, a new matrix smaller than the input image.
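For illustration, a minimal NumPy sketch of this sliding-window weighted sum follows; the toy input and filter values are made up for the example and are not from the paper.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution: slide the filter over the image and
    take the weighted sum of each neighborhood."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output is smaller than the input
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # multiply the window element-wise by the filter, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4 x 4 input
kernel = np.array([[1., 0.], [0., -1.]])           # toy 2 x 2 filter
print(convolve2d(image, kernel))                   # 3 x 3 output matrix
```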

2.2 Pooling Layer

Generally, the pooling layer is applied after the convolution layer. Like the convolution layer, the pooling layer operates on local sub-windows of its input, but it has no trainable weights; it reduces the spatial size of the feature map. Filters of different sizes can be applied in the pooling layer, e.g., 3 × 3, and different pooling functions can be used, such as min, max, or average pooling. Fig. 1 shows the operation of the max-pooling layer, which chooses the highest value in each sub-window and transfers it to form the max-pool matrix.


Figure 1: Max pooling using a 2 × 2 filter and stride of 2
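A small NumPy sketch of the max-pooling operation in Fig. 1, with a 2 × 2 window and stride of 2 (toy values, for illustration only):

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    """Max pooling as in Fig. 1: keep the largest value in each
    non-overlapping size x size sub-window."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            r, c = i * stride, j * stride
            out[i, j] = x[r:r + size, c:c + size].max()
    return out

x = np.array([[1, 3, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]], dtype=float)
print(max_pool(x))   # [[6. 8.] [3. 4.]]
```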

2.3 Activation Layer

A neural network needs an activation function in the output layer to make the prediction. The rectified linear unit (ReLU) is one of the default activation functions for deep learning applications; it adds nonlinearity to the network, outputting 0 for negative values and the input value itself for non-negative values. Another activation function is the sigmoid or logistic function, which maps its input to an S-shaped curve with output values between 0 and 1. The sigmoid is an ideal approach for binary classification, where the result is based on a binomial probability distribution. However, the sigmoid function is unsuitable for multiclass classification environments, which need a multinomial probability distribution over mutually exclusive classes. Instead, softmax is used as the activation function in the output layer of a neural network to deal with a multiclass classification problem; it predicts a multinomial probability distribution over more than two classes.

Suppose we have an input of [1, 2, 3]. The max function outputs the largest number, 3; argmax outputs the index of the largest number, 2; and a hard, one-hot version of argmax would output [0, 0, 1], giving all the weight to the unit with the largest input. The softmax function is the probabilistic or "softer" version of argmax: it assigns the highest probability to the largest input while still giving the other units non-zero probabilities, here approximately [0.09, 0.24, 0.67].
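The following snippet illustrates the difference on this example input (the softmax values are approximate):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])

print(np.max(x))      # max    -> 3.0
print(np.argmax(x))   # argmax -> 2 (index of the largest input)

# softmax: exponentiate and normalize to a probability distribution
softmax = np.exp(x) / np.sum(np.exp(x))
print(softmax)        # approx [0.090, 0.245, 0.665], sums to 1
```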

2.4 Fully Connected Layer

After applying the convolution, pooling, and activation operations, the final matrix is fed to the FCN layer to perform the classification. In the current study, the Learning Vector Quantization (LVQ) algorithm, a well-established heuristic technique, was utilized in assimilation with the CNN to train on the soybean, soil, grass, and broadleaf weed classes. The LVQ layer was added as a second FCN layer in the proposed CNN-LVQ model. LVQ is a three-layer neural network that utilizes competitive and supervised learning to solve classification problems. The three-layer architecture shown in Fig. 2 includes an input layer (orange), a Kohonen layer (or competition layer, white), and an output layer (green). Learning takes place in the Kohonen layer, and the results are then transferred to the output layer.


Figure 2: Architecture of LVQ

In the current research, the weighting parameters for classification were selected based on the LVQ technique. In LVQ, the first step is to set the initial synaptic weights to random values. Then the learning rate is chosen, and the input vector is initialized with random values. If the class label of an input vector matches that of the nearest weight vector, the weight vector moves toward the input; if the labels differ, it moves away. The weight vector $V_W(t)$ closest to the input vector $a_i$ is selected as in Eq. (1):

$\Delta V_W(t) = \arg\min_{a_i} \lVert a_i - V_W(t) \rVert^2$ (1)

Let the class of $V_W(t)$ be denoted $C_{V_W}$ and the class of $a_i$ be denoted $C_{a_i}$. The weight vector $V_W(t)$ is adjusted as follows: if both classes are the same, $C_{V_W} = C_{a_i}$, the update is given by Eq. (2). The algorithm thus selects the model's parameters itself, optimizing them while learning and outputting the parameters that minimize the error.

$V_W(t+1) = V_W(t) + \eta(t)\,(a_i - V_W(t))$ (2)

where $\eta(t)$ denotes the learning rate of the adaptation procedure. If the classes differ, $C_{V_W} \neq C_{a_i}$, the LVQ update is given by Eq. (3).

$V_W(t+1) = V_W(t) - \eta(t)\,(a_i - V_W(t))$ (3)

Based on this condition, either Eq. (2) or Eq. (3) is used to update the weighting function in the ANN. The output of the predicted value is determined using the weighting function given in Eq. (4):

$b_i = f(a_i \times V_W(t+1))$ (4)

The output $b_i$ classifies the weeds into the four classes: soybean, soil, grass, and broadleaf weed.
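For illustration, a minimal NumPy sketch of the LVQ update rules in Eqs. (1)-(3) is given below; the function names and the vectorized implementation are our own, not from the paper's code.

```python
import numpy as np

def lvq_train(X, y, prototypes, proto_labels, eta=0.01, epochs=30):
    """Sketch of Eqs. (1)-(3): pull the winning prototype toward
    same-class inputs, push it away from different-class inputs."""
    for _ in range(epochs):
        for a_i, c_ai in zip(X, y):
            # Eq. (1): winner = prototype closest to the input vector
            w = np.argmin(np.sum((prototypes - a_i) ** 2, axis=1))
            if proto_labels[w] == c_ai:       # Eq. (2): same class
                prototypes[w] += eta * (a_i - prototypes[w])
            else:                             # Eq. (3): different class
                prototypes[w] -= eta * (a_i - prototypes[w])
    return prototypes

def lvq_predict(X, prototypes, proto_labels):
    """Eq. (4) analogue: assign each sample the label of its
    nearest prototype."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```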

3  Dataset

The dataset used in the current investigation was taken from [12]. The 400 UAV images were captured by a DJI Phantom 3 Professional from a 4 m altitude above the surface, with a ground sampling distance of 1 cm (see Fig. 3). The UAV images were cropped to a length × width × number-of-channels size of 220 × 200 × 3 (see Fig. 4) and were processed for segmentation using the SLIC algorithm. The image dataset was segmented into 15336 segments: 7376 for soybean, 3520 for grass, 3249 for soil, and 1191 for broadleaf weeds. For more information on the dataset, please refer to [12].

4  Methodology

We randomly extracted 15000 of the 15336 segment images. The dataset was split in a 70:10:20 ratio, i.e., 10500 images for training, 1500 images for validation, and 3000 images for the final testing, as sketched below. An 18-layer CNN was developed in the current study for classification (see Fig. 5). Python 3.8 and the Keras 2.3.0 API with a TensorFlow 2.0 backend were used in this research. First, the data pre-processing was performed.
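A minimal sketch of such a two-stage 70:10:20 split using sklearn's train_test_split; the variables X and y (holding the segment images and labels) and the stratification are assumptions, not details given in the paper.

```python
from sklearn.model_selection import train_test_split

# First pass: 70% train, 30% remainder (assumed stratified by class)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.70, random_state=42, stratify=y)
# Second pass: split the remaining 30% into 10% validation, 20% test
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=2 / 3, random_state=42, stratify=y_rest)
# -> 10500 train, 1500 validation, 3000 test (of 15000 segments)
```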

We used four hidden convolutional layers that operate on the UAV image segments (Fig. 5). The rectified linear unit (ReLU) activation function was used in each convolution layer, and batch normalization and dropout layers followed the convolution layers. Batch normalization standardizes the inputs of each mini-batch to a mean of 0 and a standard deviation of 1; its role is to stabilize the training process and decrease the number of epochs needed to train deep CNN networks. A dropout layer is added between two convolution layers, and the outputs of the preceding layer are fed through it to the subsequent layer to prevent overfitting. Dropout works by probabilistically removing ("dropping out") inputs to a layer, which may be input variables or activations from a previous layer. A rate of 0.5 was chosen for the two dropout layers.

Two max-pooling layers were used, one between the second and third convolution layers and one between the third and fourth. A flattening layer was added as the 10th layer; it is required in order to use fully connected layers after the convolutional/max-pool layers, as it combines all the local features observed by the previous convolutional layers. After the flattening layer, a first fully connected dense layer was added, followed by the LVQ layer as the second fully connected layer; the LVQ layer served as the output layer, making the predictions and specifying the transform and structure of the output. In our proposed method, the weighting parameters are selected using the LVQ algorithm for classification. The Kohonen layer consists of 40 neurons, i.e., ten neurons for each class. The number of epochs for LVQ was set to 30 after trying 10, 20, 30, and 40. A learning rate of 0.01 was used, and the input vector was initialized with random values. The 29th epoch gave better training accuracy but lower validation accuracy than the 28th; training therefore terminated at the 30th epoch, the maximum number of epochs set.
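Below is a hedged Keras sketch of the feature-extraction part of the architecture described above; the filter counts, kernel sizes, and exact layer ordering are assumptions, since the paper does not list them, and the LVQ output layer is indicated only in a comment because it is not a stock Keras layer.

```python
from tensorflow.keras import layers, models

# Sketch under assumed hyperparameters: 4 conv layers with ReLU and
# batch normalization, 2 dropout layers (0.5), 2 max-pool layers,
# flatten, then a dense layer before the LVQ classifier.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu',
                  input_shape=(220, 200, 3)),   # segment size from Sec. 3
    layers.BatchNormalization(),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.MaxPooling2D((2, 2)),                # between conv 2 and 3
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),                # between conv 3 and 4
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Flatten(),                           # combine local features
    layers.Dense(128, activation='relu'),       # first fully connected layer
    # The LVQ output layer is not a stock Keras layer; in this sketch the
    # flattened features would be passed to an LVQ classifier such as the
    # lvq_train/lvq_predict sketch in Section 2.4.
])
model.summary()
```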


Figure 3: Raw UAV images belonging to broadleaf, grass, soil, and soybean


Figure 4: Segmented dataset for all four categories


Figure 5: Architecture of the proposed CNN-LVQ method

In CNN-LVQ training, parameters such as the number of iterations, learning rate, batch size, and dropout rate were obtained after empirical attempts with different combinations. DL models such as CNN can become very complex while tuning parameters. One approach is to use a mean ensemble of different models to achieve a lower generalization error than single models; however, this can be complex to develop given the computational cost of training every single model. The alternative is model snapshots, which can be taken during a single training run and whose predictions can be combined to obtain an ensemble prediction. Different parameter branches were tested using snapshots to select the best combination and validate the results, with each weight optimization becoming a snapshot of the model. Specifically, the number of iterations (10, 20, 30, 40, 50), learning rates (0.00001, 0.0001, 0.001, 0.01, 0.1), batch sizes (50, 100, 150, 200), and dropout rates (0.1, 0.2, 0.3, 0.4, 0.5) were assessed to select the best architecture.
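A sketch of such a parameter sweep is shown below; train_and_validate is a hypothetical helper standing in for one training run that returns validation accuracy, not a function from the paper's code.

```python
import itertools

# Candidate values from the text above
iterations = [10, 20, 30, 40, 50]
learning_rates = [0.00001, 0.0001, 0.001, 0.01, 0.1]
batch_sizes = [50, 100, 150, 200]
dropout_rates = [0.1, 0.2, 0.3, 0.4, 0.5]

best = None
for it, lr, bs, dr in itertools.product(
        iterations, learning_rates, batch_sizes, dropout_rates):
    # train_and_validate is hypothetical: one training run -> val accuracy
    val_acc = train_and_validate(iters=it, lr=lr,
                                 batch_size=bs, dropout=dr)
    if best is None or val_acc > best[0]:
        best = (val_acc, it, lr, bs, dr)   # keep the best snapshot

print("best configuration:", best)
```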

5  Experimental Setup

The proposed CNN-LVQ model was initially implemented on a Core i5 processor at 2.5 GHz with a 1 GB graphics card. The model was then tested on the Google Cloud platform with a 12 GB NVIDIA Tesla K80 for a computational-time assessment with GPU support. Python 3.8, the Keras 2.3.0 API, and a TensorFlow 2.0 backend were used in this research with the NumPy, matplotlib, cv2, sklearn, and glob libraries. The complete methodology is given in Fig. 6.


Figure 6: Flowchart of the proposed methodology

5.1 Performance Metrics

5.1.1 Accuracy Assessment of the CNN Model Performance

We evaluated our model's performance based on loss and accuracy (Fig. 7). The accuracy of a method on a test dataset is the percentage of test instances it identifies correctly, computed as

$\text{Accuracy} = (TP + TN)/(TP + FP + TN + FN)$ (5)


Figure 7: CNN model performance (a) Training loss and validation loss for CNN-LVQ classification, (b) Training accuracy and validation accuracy for CNN-LVQ classification

An attempt was made to check whether the model was overfitted. Overfitting can be detected when the training loss is substantially lower than the validation loss, i.e., when there is a significant gap between the two. The variance between validation loss and training loss was observed to be very low, indicating that overfitting did not exist (see Fig. 7). Dropout was also utilized to prevent overfitting; its main feature is to disable neurons, so some information is lost for each sample, and the successive layers must construct their representations from incomplete ones. The training loss was therefore somewhat higher, since it is more challenging for the network to build the correct representation. During validation, however, all of the units are available, so the network can utilize its full computational power and may therefore perform better than in training. The training and validation accuracies for weed classification are significantly promising.

5.1.2 Accuracy Assessment of the CNN Classification

Overall accuracy is the proportion of all reference sites that were mapped correctly. Errors of omission are the reference sites that were omitted from the correct class in the classified map; errors of commission were calculated by counting the misclassifications among the classified images. The producer's accuracy was also calculated; it is the complement of the omission error, i.e., 100% minus the omission error. The user's accuracy, also referred to as reliability, is the complement of the commission error. The producer's and user's accuracies take the points of view of the mapmaker and the map user, respectively. Cohen's kappa coefficient was also calculated to assess the classification against the random chance of assigning positive and negative values. The errors and accuracies were calculated following [14]. These metrics can all be derived from the confusion matrix, as sketched below.
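The sketch below computes these quantities with sklearn; y_true and y_pred are hypothetical arrays holding the reference and predicted labels of the test segments.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# y_true / y_pred: reference and predicted class labels (hypothetical)
cm = confusion_matrix(y_true, y_pred)   # rows = reference, cols = predicted

overall_accuracy = np.trace(cm) / cm.sum()
producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # 100% minus omission error
users_accuracy = np.diag(cm) / cm.sum(axis=0)      # 100% minus commission error
kappa = cohen_kappa_score(y_true, y_pred)          # agreement vs. chance
```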

6  Results and Discussion

A confusion matrix was used to verify the CNN-LVQ weed classification performance (Tab. 1). A total of 3000 test images (20% of all images) were used across the four classes. As seen in Tab. 1, the soil class has 100% accuracy due to its easily distinguishable tone and texture in the surface reflectance, which lead to efficient classification. The broadleaf class, classified as weed, achieved a UA of 97.50% and a PA of 98.32%; these values are somewhat lower than those of the soil, soybean, and grass classes. Only a few images were misclassified in each class. In the broadleaf weed images, the presence of grass induced misclassification; similarly, some soybean images were associated with broadleaf, causing classification errors. Additionally, soybeans captured in their early stages are quite similar to broadleaf weed images. It can be observed from Tab. 1 that, of the 3000 test images, 2987 were correctly classified using the CNN-LVQ model, which leads to an overall accuracy of 99.44%.


6.1 Comparison with Other Studies

Comparing the current results with other existing studies is somewhat subjective because of the different boundary conditions of every study. Nevertheless, we selected 17 other recent studies on weed classification using ML and DL (Tab. 2). In Tab. 2, the best accuracy of 99.50% was obtained by [12] using a CNN model; the reason might be the higher number of iterations (15000) used, which adds to the computational time required [12]. The second-best accuracy, 99.44%, was achieved by the proposed novel CNN-LVQ model with only 150 iterations. Tab. 2 compares the proposed CNN-LVQ method and the 17 recent AI/ML-based weed-classification studies. The proposed model achieves a significant classification accuracy of 99.44% with a total of 17 misclassifications. The significance of the proposed model is due to the conjunction of LVQ with the selection of optimal parameters in the CNN model to find the best model.


6.2 Limitations and Computational Complexity of Proposed Model

The 400-image UAV dataset used in the current study has 15336 segments, of which 1191 (8%) belong to broadleaf weeds; soybean, grass, and soil account for 7376 (48%), 3520 (23%), and 3249 (21%) segments, respectively. Although the CNN model used in the current investigation yields a good accuracy of 99.44%, class imbalance might be an issue. Another limitation of the proposed algorithm is computation time, a crucial issue in evaluating the performance of a real-time system. The proposed model was implemented on a Core i5 processor at 2.5 GHz; the average computation time to classify the 15000 segment images was 2 hours, whereas [12] took around 30 minutes. The reason for this difference might be the difference in system configuration and model complexity. The proposed CNN-LVQ was also tested on the Google Cloud platform for a computational-time assessment with GPU support, where it took 13.75 minutes to complete execution, only 11.45% of the time needed on the i5 processor. ML-based models such as SVM took 4.31 and 0.72 seconds in [4] and [12], respectively. Less computation time is required for real-time decision-making [18-23]. The future scope of the present investigation is to utilize more image datasets and reduce computation time by optimizing parameters and utilizing TPU-based cloud computing.

7  Conclusion

In the proposed work, the novel CNN-LVQ model detects weeds in soybean crop images and discriminates between grass and broadleaf weeds. The dataset used in the proposed study was composed of 15336 segment images of soybean, soil, grass, and broadleaf. The results were compared with recent ML and DL applications for weed classification. The developed CNN-LVQ demonstrates promising classification results, with an overall accuracy of 99.44%. The novelty of the present investigation was the development of the novel CNN-LVQ model, rigorous hyperparameter optimization, and the utilization of a real dataset. The future scope of the present research will be to collect more images for different areas [24-27]. Generalization testing of the current high-accuracy model will be of further interest for applying crop-wise weed detection in different areas. Although the current model achieves higher accuracy than available studies for weed classification, actual field evaluation will be the way forward, since the current model used processed data in a controlled environment.

Funding Statement: The author extends their appreciation to the deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through the Project Number (IFP-2020-14).

Conflicts of Interest: The author declares that they have no conflicts of interest to report regarding the present study.

References

  1. B. Reilly, “Traditional Arabian agriculture,” in Slavery, Agriculture, and Malaria in the Arabian Peninsula, 1st ed., vol. 1. Ohio, USA: Ohio University Press, pp. 22–48, 2019.
  2. Z. Kiala, O. Mutanga, J. Odindi and K. Peerbhay, “Feature selection on Sentinel-2 multispectral imagery for mapping a landscape infested by Parthenium weed,” Remote Sensing, vol. 11, no. 16, pp. 1892, 2019.
  3. C. Gée and E. Denimal, “RGB image-derived indicators for spatial assessment of the impact of broadleaf weeds on wheat biomass,” Remote Sensing, vol. 12, no. 18, pp. 1–19, 2020.
  4. F. Ahmed, H. A. A. Mamun, A. S. M. H. Bari, E. Hossain and P. Kwan, “Classification of crops and weeds from digital images: A support vector machine approach,” Crop Protection, vol. 40, no. 12, pp. 98–104, 2012.
  5. S. A. Hussain, A. Tahir, J. A. Khan and A. Salman, “Pixel-based classification of hyperspectral images using convolutional neural networks,” PFG–Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol. 87, no. 1-2, pp. 33–45, 2019.
  6. B. Alotaibi and M. A. Alotaibi, “Hybrid deep resnet and inception model for hyperspectral image classification,” PFG–Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol. 88, no. 6, pp. 463–476, 2020.
  7. A. N. V. Sivakumar, J. Li, S. Scott, E. Psota, A. J. Jhala et al., “Comparison of object detection and patch-based classification deep learning models on mid-to late-season weed detection in UAV imagery,” Remote Sensing, vol. 12, no. 13, pp. 1–22, 2020.
  8. G. G. Peteinatos, P. Reichel, J. Karouta, D. Andújar and R. Gerhards, “Weed Identification in maize, sunflower, and potatoes with the aid of convolutional neural networks,” Remote Sensing, vol. 12, no. 24, pp. 4185, 2020.
  9. K. Zou, X. Chen, F. Zhang, H. Zhou and C. Zhang, “A field weed density evaluation method based on uav imaging and modified u-net,” Remote Sensing, vol. 13, no. 2, pp. 1–19, 2020.
  10. H. Huang, J. Deng, Y. Lan, A. Yang, X. Deng et al., “A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery,” PLOS One, vol. 13, no. 4, pp. e0196302, 2018.
  11. H. Wang, N. Gao, Y. Xiao and Y. Tang, “Image feature extraction based on improved FCN for UUV side-scan sonar,” Marine Geophysical Research, vol. 41, no. 4, pp. 8002, 2020.
  12. A. D. S. Ferreira, D. M. Freitas, G. G. D. Silva, H. Pistori and M. T. Theophilo, “Weed detection in soybean crops using ConvNets,” Computers and Electronics in Agriculture, vol. 143, no. 11, pp. 314–324, 2017.
  13. M. Sardogan, A. Tuncer and Y. Ozen, “Plant leaf disease detection and classification based on cnn with lvq algorithm,” in Proc. IEEE UBMK, 2018-3rd Int. Conf. on Computer Science and Engineering, Sarajevo, Bosnia and Herzegovina, pp. 382–385, 2018.
  14. A. J. T. Ballesteros and J. C. Riquelme, “Data mining methods applied to a digital forensics task for supervised machine learning,” Studies in Computational Intelligence, vol. 555, no. Jan, pp. 413–428, 20
  15.  W. Deng, Y. Huang, C. Zhao and X. Wang, “Sensors and transducers discrimination of crop and weeds on visible and visible/near-infrared spectrums using support vector machine, artificial neural network and decision tree,” Sensors & Transducers, vol. 26, no. March, pp. 26–34, 2014.
  16.  A. Olsen, D. A. Konovalov, B. Philippa, P. Ridd, W. C. Jake et al., “DeepWeeds: A multiclass weed species image dataset for deep learning,” Scientific Reports, vol. 9, no. 1, pp. 574, 2019.
  17.  K. A. Rangarajan and R. Purushothaman, “Disease classification in eggplant using pre-trained vgg16 and msvm,” Scientific Reports, vol. 10, no. 1, pp. 1–11, 2020.
  18. A. Faruq, A. Marto and S. S. Abdullah, “Flood forecasting of Malaysia Kelantan river using support vector regression technique,” Computer Systems Science and Engineering, vol. 39, no. 3, pp. 297–306, 2021.
  19. M. Kolhar and A. Alameen, “Multi criteria decision making system for parking system,” Computer Systems Science and Engineering, vol. 36, no. 1, pp. 101–116, 2021.
  20. D. Elavarasan, D. R. Vincent, V. Sharma, A. Y. Zomaya and K. Srinivasan, “Forecasting yield by integrating agrarian factors and machine learning models: A survey,” Computers and Electronics in Agriculture, vol. 155, no. 2, pp. 257–282, 2018.
  21. D. Elavarasan and P. M. D. R. Vincent, “A reinforced random forest model for enhanced crop yield prediction by integrating agrarian parameters,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 1–14, 20
  22. D. Elavarasan and P. M. D. R. Vincent, “Fuzzy deep learning-based crop yield prediction model for sustainable agronomical frameworks,” Neural Computing and Applications, vol. 4, pp. 1–20, 2021.
  23. R. N. Abirami, P. M. D. R. Vincent, K. Srinivasan, U. Tariq and C. Y. Chang, “Deep cnn and deep gan in computational visual perception-driven image analysis,” Complexity, vol. 2021, pp. 1–30, 2021.
  24. M. A. Haq, M. F. Azam and C. Vincent, “Efficiency of artificial neural networks for glacier ice-thickness estimation: A case study in western Himalaya, India,” Journal of Glaciology, vol. 67, no. 264, pp. 671–684, 2021.
  25. M. A. Haq, M. Alshehri, G. Rahaman, A. Ghosh, P. Baral et al., “Snow and glacial feature identification using hyperion dataset and machine learning algorithms,” Arabian Journal of Geosciences, vol. 14, no. 15, pp. 1–21, 2021.
  26. M. A. Haq, G. Rahaman, P. Baral and A. Ghosh, “Deep learning based supervised image classification using UAV images for forest areas classification,” Journal of the Indian Society of Remote Sensing, vol. 49, no. 3, pp. 601–606, 2021.
  27. M. A. Haq and P. Baral, “Study of permafrost distribution in Sikkim Himalayas using Sentinel-2 satellite images and logistic regression modelling,” Geomorphology, vol. 333, pp. 123–136, 2019.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.