Computers, Materials & Continua
DOI:10.32604/cmc.2022.026783
Article

Autonomous Unmanned Aerial Vehicles Based Decision Support System for Weed Management

Ashit Kumar Dutta1,*, Yasser Albagory2, Abdul Rahaman Wahab Sait3 and Ismail Mohamed Keshta1

1Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Ad Diriyah, Riyadh, 13713, Kingdom of Saudi Arabia
2Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif, 21944, Kingdom of Saudi Arabia
3Department of Archives and Communication, King Faisal University, Al Ahsa, Hofuf, 31982, Kingdom of Saudi Arabia
*Corresponding Author: Ashit Kumar Dutta. Email: drashitkumar@yahoo.com
Received: 04 January 2022; Accepted: 23 February 2022

Abstract: Autonomous systems have recently become a popular research topic among industrialists and academicians owing to their applicability in domains such as healthcare, agriculture, and industrial automation. Among these applications, their use in the agricultural sector is particularly significant. Autonomous unmanned aerial vehicles (UAVs) can be used for site-specific weed management (SSWM) to improve crop productivity. In spite of substantial advancements in UAV-based data collection systems, automated weed detection remains a tedious task owing to the high resemblance of weeds to crops. Recently developed deep learning (DL) models have exhibited effective performance in several data classification problems. In this aspect, this paper focuses on the design of an autonomous UAV with a decision support system for weed management (AUAV-DSSWM) technique. The proposed AUAV-DSSWM technique intends to identify weeds using UAV images acquired from the target area. It first performs image acquisition and image pre-processing stages. The You Only Look Once v3 (YOLOv3) object detector, tuned with the Adam optimizer, is then applied for the detection of weeds. For the effective classification of weeds and crops, the poor and rich optimization (PRO) algorithm with a softmax (SM) layer is applied. The use of the Adam optimizer and the PRO algorithm for parameter tuning results in enhanced weed detection performance. A wide range of simulations is carried out on UAV images, and the experimental results exhibit the promising performance of the AUAV-DSSWM technique over other recent techniques, with an accuracy of 99.23%.

Keywords: Autonomous systems; object detection; precision agriculture; unmanned aerial vehicles; deep learning; parameter tuning

1  Introduction

In recent times, the application of remote sensing with UAVs has shown great potential in precision agriculture, since UAVs can be equipped with several imaging sensors to gather images of high temporal, spatial, and spectral resolution [1]. Their high flexibility and low-cost flight scheduling make them prevalent in research. In UAV-based remote sensing, object-based image analysis (OBIA) is one of the traditional methods for object classification [2]. OBIA first identifies spatially and spectrally homogeneous objects based on a segmentation result and then integrates geometric, spectral, and textural data from those objects to improve classification results [3]. Earlier research on OBIA in precision agriculture examined, for instance, weed detection and crop classification from UAV images. Precision agriculture is described as the application of technology with the aim of enhancing environmental quality and crop performance [4]. Its primary objective is to choose the management practice that allocates the right doses of inputs, such as herbicides, fertilizers, fuel, and seed, at the right time and in the right place.

Weed detection and characterization represent major problems in precision agriculture because, in present farming practice, herbicides are applied uniformly across fields even though weeds exhibit an uneven spatial distribution [5]. The traditional approach to controlling weeds in crops is manual weeding, but it is labour- and time-consuming, making it ineffective for large-scale crops [6]. To solve this problem, a UAV network is utilized. In addition, UAVs can be equipped with multi-spectral cameras, which provide further detail compared to RGB digital images because they capture spectral bands not perceived by the human eye, such as near infrared (NIR), and provide data on factors like vegetation reflectance indices and visible light [7]. This capability allows significant correlations to be detected, which assists in making distinct estimations.

In spite of considerable developments in UAV acquisition systems, the automated detection of weeds remains a challenge. Recently, deep learning (DL) approaches have demonstrated significant advances in several computer vision (CV) tasks, and current developments have shown the significance of these techniques for weed detection [8]. Still, they are not typically employed in agriculture, as the large amount of data needed in the learning process has emphasized the problem of manually annotating such datasets [9]. The same issue arises for agricultural data, where labelling plants in field images is time-consuming. Until now, very little consideration has been given to the unsupervised annotation of data for training DL methods, especially in agriculture [10].

In spite of the progress and efforts made so far, further work is still needed to enhance the robustness and accuracy of weed maps under difficult agricultural conditions. Considering real-time weed detection in row crop fields, crop rows are highly effective in assisting inter-row weed recognition through image analysis [11]. This detection method is effective, but it fails to identify intra-row weeds. In contrast, OBIA has the capacity to identify weeds regardless of their distribution, although it relies largely on extracted features and may categorize inter-row weeds inaccurately.

This paper presents an autonomous UAV with a decision support system for weed management (AUAV-DSSWM) technique. The proposed AUAV-DSSWM technique initially undergoes image acquisition and image pre-processing stages. Then, the You Only Look Once v3 (YOLOv3) object detector with the Adam optimizer is utilized for automated weed detection. Moreover, the poor and rich optimization (PRO) algorithm with a softmax layer is used for the effective classification of weeds and crops. The use of the Adam optimizer and the PRO algorithm for parameter tuning results in enhanced weed detection performance. A detailed simulation analysis is carried out on test UAV images, and the results are inspected under varying aspects.

The rest of the paper is organized as follows. Section 2 reviews the related works, Section 3 describes the proposed model, Section 4 offers the experimental validation, and Section 5 draws the conclusion.

2  Literature Review

This section provides a detailed survey of existing weed detection techniques using UAV images. In Islam et al. [12], the performance of various ML methods, such as RF, SVM, and KNN, is analyzed for detecting weeds in UAV images gathered from chilli crop fields. Osorio et al. [13] introduced three DL image-processing methodologies for estimating weeds in lettuce crops and compared them with visual estimation by specialists. One approach is based on SVM using HOG as the feature descriptor, another on YOLOv3 and its effective object detection framework, and the last on Mask R-CNN, which provides instance segmentation of individual plants.

In Islam et al. [14], RGB images captured by drones were utilized for detecting weeds in chilli fields. The process comprises orthomosaicking of images, feature extraction, labelling of images for training ML approaches, and the use of unsupervised learning with RF classification. In Huang et al. [15], UAV images were captured in rice fields. A semantic labelling model was adapted for generating the weed distribution map. An ImageNet-pretrained CNN with a residual architecture was adopted in a fully convolutional form and transferred to this dataset by fine-tuning. Atrous convolution was employed to extend the receptive field of the convolution filters, the performance of multi-scale processing was estimated, and an FC-CRF method was employed after the CNN to further refine the spatial information.

Bah et al. [16] integrated DL with line detection to strengthen the classification method. The presented approach was applied to high-resolution UAV images of vegetables taken around 20 m above the soil, and a wide-ranging assessment of the algorithm was carried out on real data. Gao et al. [17] designed an approach for detecting inter- and intra-row weeds in early-season maize fields from aerial visual images. In particular, the Hough transform (HT) algorithm was used on the orthomosaicked image for detecting inter-row weeds, and a semi-automated object-based image analysis (OBIA) process was presented using RF integrated with feature selection (FS) methods for classifying maize, soil, and weeds.

Bah et al. [18] presented a fully automated learning approach with a CNN using an unsupervised training dataset for detecting weeds in UAV images. The presented approach includes three primary stages. First, crop rows are detected and utilized for identifying inter-row weeds. Next, the inter-row weeds are employed to constitute the training dataset. Lastly, a CNN is trained on this dataset to build an algorithm capable of detecting the crops and the weeds in the images. Gašparović et al. [19] experimented with four classification methods for creating weed maps, combining manual and automatic models as well as pixel-based and object-based classification models, applied separately on two subsets. The input UAV data were gathered by a low-cost RGB camera owing to its competitiveness compared to multi-spectral cameras. The classification algorithm is based on the RF-ML method for weed and bare-soil extraction, followed by unsupervised classification with the K-means method for additional evaluation of weed and bare-soil presence in non-soil and non-weed regions.

3  The Proposed Model

In this study, a new AUAV-DSSWM technique has been developed for the detection and classification of weeds in UAV images. The AUAV-DSSWM technique encompasses several subprocesses, namely UAV image collection, image pre-processing, YOLOv3-based object detection, Adam optimizer based hyperparameter tuning, SM layer based classification, and PRO-based parameter optimization. Fig. 1 illustrates the overall process of the AUAV-DSSWM technique. The detailed working of each module is elaborated in the succeeding sections.


Figure 1: Overall process of AUAV-DSSWM technique

3.1 Data Collection and Pre-processing

For data collection, UAVs mounted with sensors and cameras are utilized for capturing agricultural field crops. In this study, RGB cameras are placed on the UAVs, and the images are acquired using a camera mounted on a Phantom 3 Advanced drone with a 1/2.3" CMOS sensor. Generally, the basic processes involved in UAV image preprocessing are photo alignment, dense cloud building, 3D mesh building, texture building, digital elevation model building, and orthomosaic photo generation. The blending mode is set to mosaic to generate orthomosaic photos. Besides, the excess green (ExG) vegetation index can be determined as follows.

$$\mathrm{ExG} = 2g - r - b \tag{1}$$

$$r = \frac{R}{R+G+B}, \qquad g = \frac{G}{R+G+B}, \qquad b = \frac{B}{R+G+B} \tag{2}$$

where $R$, $G$, and $B$ indicate the red, green, and blue channel pixel values, respectively.
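A minimal sketch of Eqs. (1)-(2) follows, assuming an 8-bit RGB image loaded as a NumPy array of shape (H, W, 3); the function name and threshold are illustrative assumptions.

```python
import numpy as np

def excess_green(image: np.ndarray) -> np.ndarray:
    """Return the ExG vegetation index map for an RGB image."""
    rgb = image.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + 1e-9                   # avoid division by zero on black pixels
    r, g, b = R / total, G / total, B / total  # chromatic coordinates, Eq. (2)
    return 2.0 * g - r - b                     # ExG = 2g - r - b, Eq. (1)

# Usage: pixels with ExG above a chosen threshold are treated as vegetation.
# mask = excess_green(img) > 0.1
```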

3.2 YOLOv3 with Adam Optimizer Based Object Detection

During the object detection process, the UAV images are passed into the YOLOv3 model, and the outcome is the set of objects identified in them. Unlike other methods, such as R-CNN based algorithms or sliding-window based approaches, the YOLO family of algorithms looks at the whole image while recognizing and detecting objects, extracting deep information about appearance and classes. The algorithm treats object recognition as a single regression problem, which gives fast responses while reducing the model complexity of the detector. Despite its considerable speed, the algorithm lags in accuracy, particularly for smaller objects. The newest version, YOLOv3, has demonstrated its performance over other advanced detectors. The YOLOv3 framework has 107 layers overall, allocated as {route = 4; convolution = 75; upsample = 2; detection = 3; residual = 23}. The framework employs a feature extraction backbone called Darknet-53, which is considerably larger than previous versions but has been demonstrated to be very effective compared to other advanced models. It employs 53 convolutional layers and takes input images of size 416 × 416. Fig. 2 illustrates the framework of the YOLOv3 object detector. The Darknet-53 backbone is pre-trained on ImageNet [20].


Figure 2: Architecture of YOLOv3

For the detection process, the network is altered by eliminating its final layers and stacking additional layers on top, which results in the final network framework. The initial seventy-five layers of the network correspond to the fifty-two convolution layers of the Darknet-53 backbone pre-trained on ImageNet. The remaining thirty-two layers are included to adapt YOLOv3 for object recognition on distinct datasets with additional training. In addition, YOLOv3 employs residual layers (skip connections), which integrate feature maps from two layers by component-wise addition, resulting in finer-grained information.

YOLOv3 substitutes the softmax activation utilized in older versions with independent logistic classifiers. Features are extracted following an idea similar to the feature pyramid network. Likewise, binary cross-entropy loss is now employed for class prediction, which is helpful when confronted with images having overlapping labels. K-means clustering is applied for anchor box generation, but nine bounding-box priors are now employed instead of five, shared evenly over the three detection scales. In the current model, route layers are utilized, which output the feature map of an earlier layer.

3.2.1 Prediction

YOLOv3 processes an image by dividing it into an N × N grid; when the center of an object falls in a grid cell, that cell is responsible for detecting the object. The network forecasts bounding boxes at three distinct scales. The first detection scale is employed to detect large objects, the next for medium-sized objects, and the last for small objects. Each cell predicts B bounding boxes, and each prediction comprises five values: x, y, w, h, and a confidence score representing the measure of the prediction containing an object. The x and y variables are the box center relative to the grid cell, whereas w and h represent the width and height of the forecasted box relative to the whole image. The confidence score is the Intersection over Union (IoU) between the predicted box and the ground truth. The output predictions form an N × N × B × (5 + C) tensor, in which five is the number of predicted values per bounding box (x, y, w, h, and the confidence value) and C denotes the overall number of object classes.
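The IoU confidence target can be made concrete with a short sketch; a minimal computation, assuming center-format (x, y, w, h) boxes in common image coordinates (an illustrative assumption, not the detector's internal code):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x_center, y_center, w, h) boxes."""
    # Convert center format to corner coordinates.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Overlap rectangle, clamped at zero when the boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```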

3.2.2 Loss Function

YOLOv3 uses a loss function, given in Eqs. (3) and (4), which instructs the network to appropriately forecast the bounding boxes, precisely categorize the identified objects, and penalize false positives:

$$\lambda_{coord} \sum_{i=0}^{N^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] + \lambda_{coord} \sum_{i=0}^{N^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ \left( \sqrt{w_i} - \sqrt{\hat{w}_i} \right)^2 + \left( \sqrt{h_i} - \sqrt{\hat{h}_i} \right)^2 \right] \tag{3}$$

$$+ \sum_{i=0}^{N^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left( C_i - \hat{C}_i \right)^2 + \lambda_{noobj} \sum_{i=0}^{N^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left( C_i - \hat{C}_i \right)^2 - \sum_{i=0}^{N^2} \sum_{j=0}^{B} \left[ \delta_i \log\left( \hat{p}_i(c) \right) + (1 - \delta_i) \log\left( 1 - \hat{p}_i(c) \right) \right] \tag{4}$$

The hatted symbols represent the respective predicted values. The loss function has three error components: localization, confidence, and classification, as noted in Eqs. (3) and (4). The distinct loss components are combined with the sum-squared method, as it is easy to optimize. The localization loss is responsible for reducing the error between the ground-truth object and the "responsible" bounding box when an object is identified in a grid cell.
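A minimal NumPy sketch of these loss terms for a single predictor may help; the λ_coord = 5 and λ_noobj = 0.5 weights and the dictionary layout are assumptions for illustration, not the exact training code used here:

```python
import numpy as np

def yolo_cell_loss(pred, truth, lam_coord=5.0, lam_noobj=0.5):
    """Loss for one predictor; pred/truth map x, y, w, h, conf, p to values."""
    if truth["conf"] > 0:  # the 1^obj indicator: this predictor owns an object
        # Localization term of Eq. (3).
        loc = lam_coord * ((pred["x"] - truth["x"]) ** 2
                           + (pred["y"] - truth["y"]) ** 2
                           + (np.sqrt(pred["w"]) - np.sqrt(truth["w"])) ** 2
                           + (np.sqrt(pred["h"]) - np.sqrt(truth["h"])) ** 2)
        # Object-confidence term of Eq. (4).
        conf = (pred["conf"] - truth["conf"]) ** 2
        # Binary cross-entropy class term of Eq. (4).
        p = np.clip(np.asarray(pred["p"]), 1e-7, 1 - 1e-7)  # class probabilities
        d = np.asarray(truth["p"])                            # 0/1 class targets
        cls = -np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))
        return loc + conf + cls
    # Cell without an object: only the down-weighted no-object penalty applies.
    return lam_noobj * (pred["conf"] - truth["conf"]) ** 2
```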

3.2.3 Hyperparameter Tuning

The Adam optimizer is used to optimally adjust the hyperparameters of the YOLOv3 model. Adam is a first-order optimization method used to replace the conventional stochastic gradient descent procedure. It combines second-moment computation with a first-order moment approximation, adding momentum to an Adadelta-style update. The learning rate of every variable can be adaptively modified using the first- and second-order moment approximations of the gradients. In addition, bias correction is applied, which makes the updates more stable. The iterative update equations are defined in Eq. (5):

$$g = \left( h_\theta(x_i) - y_i \right) x_i, \qquad m_t = \beta_1 m_{t-1} + (1 - \beta_1) g, \qquad \nu_t = \beta_2 \nu_{t-1} + (1 - \beta_2) g^2,$$

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{\nu}_t = \frac{\nu_t}{1 - \beta_2^t}, \qquad \theta_j = \theta_{j-1} - \frac{\alpha\, \hat{m}_t}{\sqrt{\hat{\nu}_t} + \epsilon}, \tag{5}$$

where $g$ indicates the computed gradient, $m_t$ represents the first moment (the expectation) of the gradient $g$, $\nu_t$ implies the second moment of the gradient $g$, $\beta_1$ indicates the first-order moment attenuation coefficient, $\beta_2$ symbolizes the second-moment attenuation coefficient, $\theta$ implies the variable to be resolved, and $\hat{m}_t$ and $\hat{\nu}_t$ indicate the bias-corrected estimates of $m_t$ and $\nu_t$, respectively.
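A minimal NumPy sketch of one update of Eq. (5) follows; the default α, β₁, β₂, and ε values are the usual choices and are assumed here for illustration:

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns (theta, m, v) after step t (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)           # bias correction for the mean
    v_hat = v / (1 - beta2 ** t)           # bias correction for the variance
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```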

3.3 Softmax Classification

Once the objects are detected in the UAV images, the classification process is carried out by the use of PRO with the SM classifier, thereby effectually distinguishing the weeds from the crops. The SM layer forecasts the label probability of the input data $x_i$ from the features learned by the third hidden-layer representation $h_i^{(3)}$. The node count of the SM layer is selected accordingly; in this technique, the SM layer consists of five nodes, corresponding to grade groups one to five. Although classifiers such as SVM could also be utilized, softmax logistic regression enables enhancing the entire deep network by fine-tuning the network together with the softmax layer:

$$J_{SSAE\text{-}SMC}(W, b, x, \hat{z}) = \min_{W, b} J(x, \hat{z}) + \lambda_{smc} \left\lVert W_{smc} \right\rVert_2^2 \tag{6}$$

where $W$ and $b$ represent the weights and biases of the entire deep network, consisting of the SM and stacked sparse autoencoder (SSAE) layers, $J(x,\hat{z})$ indicates the logistic regression cost between the classification attained from the input feature $x$ and the unsupervised result $\hat{z}$ of the SSAE, $W_{smc}$ represents the weights, and $\lambda_{smc}$ signifies the weight decay variable of the SM regression layer. During the fine-tuning process, the biases and weights of the SSAE and SM are jointly optimized, and the SM layer is employed for classification [21]. Let $y_j$ signify the label of training instance $x_j$. The likelihood of $x_j$ belonging to the $k$th class is given as follows:

$$P\left( y_i = k \mid x_i; W_{smc}, b_{smc} \right) = \frac{e^{W_{smc}^{(k)T} x_i + b_{smc}^{(k)}}}{\sum_{j=1}^{N} e^{W_{smc}^{(j)T} x_i + b_{smc}^{(j)}}} \tag{7}$$

where $b_{smc}^{(k)}$ and $W_{smc}^{(k)}$ are the biases and weights of the $k$th class and $N$ represents the overall number of classes. Based on the maximum probability, the grade group of the instance $x_i$ can be computed by Eq. (8):

$$\mathrm{Grade}(x_i) = \arg\max_{k = 1, \dots, N} P\left( y_i = k \mid x_i; W_{smc}, b_{smc} \right) \tag{8}$$
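A compact sketch of Eqs. (7)-(8), assuming the softmax weights W (N × d) and biases b (N,) have already been learned; the function name is an illustrative assumption:

```python
import numpy as np

def softmax_grade(x, W, b):
    """Return (class probabilities, predicted class index) for feature x."""
    logits = W @ x + b
    logits -= logits.max()                        # numerical stability shift
    p = np.exp(logits) / np.exp(logits).sum()     # Eq. (7): softmax probabilities
    return p, int(np.argmax(p))                   # Eq. (8): arg-max decision
```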

To optimize the weight values of the SM layer, the PRO algorithm is applied so that the weed detection outcomes can be improved to the maximum extent.

The PRO technique was presented in [22]. PRO is based on the wealth behaviour of people in a society. Usually, people are clustered into two economic classes: the first group consists of wealthier people (wealth above average), and the second group consists of poorer people (wealth below average). All persons in these sets seek to improve their economic position in society. People of the lower economic class try to enhance their financial position and decrease the class gap by learning from wealthier people, while people of the rich economic class attempt to extend the class gap by observing individuals of the lower economic class. In the optimization setting, each individual solution in the poor population moves toward the global optimum in the search space by learning from the rich solutions in the rich population. Assume that $N$ represents the population size. Initially, $N$ solutions are generated with arbitrary real values between zero and one. Then, a digitization procedure is executed for every position of each individual solution to change the real values to binary values, as in Eq. (9):

$$\chi_{i,j} = \begin{cases} 1, & \chi_{i,j} > \mathrm{rand} \\ 0, & \text{otherwise} \end{cases} \tag{9}$$

Here, rand refers to an arbitrary number between zero and one. The candidate solutions in the population are ordered based on the objective. The top part of the population is represented as the rich economic set of people, and the bottom part as the poor economic set. Eq. (10) illustrates the overall population of the binary PRO (BPRO) technique.

$$POP_{Main} = POP_{rich} + POP_{poor} \tag{10}$$

The fitness function (FF) plays an essential role in optimization problems. It computes a positive value indicating how good a candidate solution is. Here, the classifier error rate is taken as the FF to be minimized, as expressed in Eq. (11). Rich solutions have minimal fitness scores (error rates), and poor solutions have maximal fitness scores (error rates).

$$\mathrm{fitness}(\chi_i) = \mathrm{ClassifierErrorRate}(\chi_i) = \frac{\text{number of misclassified documents}}{\text{total number of documents}} \times 100 \tag{11}$$

The rich people move to improve their economic class gap as observed from individuals of the poorer economic class [23]. The poorer economic class people move to decrease their economic class gap by learning from individuals of the rich economic class, so as to enhance their financial status. This general behaviour of rich and poor people is utilized for generating the new solutions:

$$\chi_{rich,i,j}^{new} = \chi_{rich,i,j}^{old} + \alpha \left[ \chi_{rich,i,j}^{old} - \chi_{poor,best,j}^{old} \right] \tag{12}$$

$$\chi_{poor,i,j}^{new} = \chi_{poor,i,j}^{old} + \alpha \left[ \left( \frac{\chi_{rich,best,j}^{old} + \chi_{rich,mean,j}^{old} + \chi_{rich,worst,j}^{old}}{3} \right) - \chi_{poor,i,j}^{old} \right] \tag{13}$$
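The following NumPy sketch illustrates one PRO generation following Eqs. (9)-(13); the even rich/poor split, α value, clipping to [0, 1], and function names are assumptions for illustration rather than the exact tuning code:

```python
import numpy as np

def pro_step(pop, fitness, alpha=0.5, rng=None):
    """One PRO generation. pop: (N, d) real-valued solutions in [0, 1];
    fitness: length-N error rates (lower is better)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness)                 # sort: rich (low error) first
    half = len(pop) // 2
    rich, poor = pop[order[:half]], pop[order[half:]]
    # Eq. (12): rich solutions widen the gap to the best poor solution.
    new_rich = rich + alpha * (rich - poor[0])
    # Eq. (13): poor solutions move toward a blend of best/mean/worst rich.
    blend = (rich[0] + rich.mean(axis=0) + rich[-1]) / 3.0
    new_poor = poor + alpha * (blend - poor)
    new_pop = np.clip(np.vstack([new_rich, new_poor]), 0.0, 1.0)
    # Eq. (9): digitize positions when a binary encoding is required.
    binary = (new_pop > rng.random(new_pop.shape)).astype(int)
    return new_pop, binary
```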

4  Performance Validation

The experimental result analysis of the AUAV-DSSWM technique is carried out in this section. The classification results of the AUAV-DSSWM technique are examined using a benchmark dataset [24]. It comprises 287 images containing crops and 2713 images containing weeds. A few sample images are exhibited in Fig. 3.

Fig. 4 demonstrates a sample visualization result of the AUAV-DSSWM technique. Fig. 4a illustrates the original image containing both crops and weeds. Fig. 4b indicates the presence of weeds, identified by the red bounding boxes. These figures reveal that the AUAV-DSSWM technique has effectually identified the weeds among the crops.

Fig. 5 showcases a sample set of original images along with the ground truth of the crops. Fig. 5a demonstrates an original image with a few crops. Fig. 5b depicts the crops bounded by boxes, representing the ground truth used for the training process.

The confusion matrices generated by the AUAV-DSSWM technique under dissimilar epochs are portrayed in Fig. 6. The results show that the AUAV-DSSWM technique has effectually classified the images into crop and weed. For instance, under 10 epochs, the AUAV-DSSWM technique identified 275 images as crop and 2700 images as weed. Likewise, under 50 epochs, the AUAV-DSSWM technique categorized 279 images as crop and 2700 images as weed.


Figure 3: Dataset Images


Figure 4: (a) Original image, (b) Ground truth of weeds


Figure 5: Sample results (a) Original image, (b) Ground truth of crops


Figure 6: Confusion matrix of AUAV-DSSWM technique

Tab. 1 and Fig. 7 portray the overall weed detection outcome of the AUAV-DSSWM technique under distinct epochs. The results show that the AUAV-DSSWM technique has accomplished effective outcomes under all epochs. For instance, with 10 epochs, the AUAV-DSSWM technique offered precn, recl, accy, Fscore, and kappa of 95.49%, 95.82%, 99.17%, 95.65%, and 99.16% respectively. Moreover, with 50 epochs, the AUAV-DSSWM technique attained precn, recl, accy, Fscore, and kappa of 95.55%, 97.21%, 99.30%, 96.37%, and 99.29% respectively.
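As a cross-check, the 10-epoch values in Tab. 1 follow directly from the confusion-matrix counts quoted above; the sketch below assumes crop is treated as the positive class:

```python
# Reproducing the 10-epoch metrics from the quoted confusion-matrix counts.
tp, fn = 275, 287 - 275        # crops correctly / incorrectly classified
tn, fp = 2700, 2713 - 2700     # weeds correctly / incorrectly classified
precision = tp / (tp + fp)                                 # ~0.9549
recall = tp / (tp + fn)                                    # ~0.9582
accuracy = (tp + tn) / (tp + tn + fp + fn)                 # ~0.9917
f_score = 2 * precision * recall / (precision + recall)    # ~0.9565
print(f"{precision:.4f} {recall:.4f} {accuracy:.4f} {f_score:.4f}")
```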



Figure 7: Result analysis of AUAV-DSSWM technique with varying approaches

The ROC analysis of the AUAV-DSSWM technique on the test weed dataset is shown in Fig. 8. The figure reveals that the AUAV-DSSWM technique has resulted in an increased ROC of 99.9732. This implies that the AUAV-DSSWM technique has the ability to attain improved weed classification performance.


Figure 8: ROC analysis of AUAV-DSSWM technique

To showcase the improvement of the AUAV-DSSWM technique, a detailed comparison study is made in Tab. 2.


Fig. 9 shows the precn, recl, and Fscore analysis of the AUAV-DSSWM system against existing methods. The results show that the FE-KNN, SVM, and RF approaches revealed poor performance with lower values of precn, recl, and Fscore. Simultaneously, the FE-RF, FE-KNN, ResNet-101, and VGG-16Net techniques gained somewhat reasonable values of precn, recl, and Fscore. However, the AUAV-DSSWM system attained the maximal weed detection outcome with precn, recl, and Fscore of 95.39%, 96.59%, and 95.98% respectively.


Figure 9: Comparative analysis of AUAV-DSSWM technique with different measures

Fig. 10 demonstrates the accy analysis of the AUAV-DSSWM technique against existing methods. The results show that the FE-KNN, SVM, and RF models exhibited poor performance with lower values of accy. At the same time, the FE-RF, FE-KNN, ResNet-101, and VGG-16Net models obtained somewhat reasonable values of accy. However, the AUAV-DSSWM technique gained the maximum weed detection outcome with an accy of 99.23%.


Figure 10: Accuracy analysis of AUAV-DSSWM technique with existing approaches

Tab. 3 and Fig. 11 show the computation time (CT) analysis of the AUAV-DSSWM technique against recent approaches [25]. The results depict that the FE-RF, FE-KNN, and FSVM models obtained higher CTs of 204, 185, and 172 s respectively.

Likewise, the SVM, RF, ResNet-101, and VGG-16Net models obtained somewhat reduced CTs of 157, 141, 125, and 97 s respectively. However, the AUAV-DSSWM technique outperformed the existing methods with the lowest CT of 64 s. The above results and discussion portray that the AUAV-DSSWM technique has the ability to attain maximum weed detection performance.



Figure 11: CT analysis of AUAV-DSSWM technique with recent methods

5  Conclusion

In this study, a new AUAV-DSSWM technique has been developed for the detection and classification of weeds in UAV images. The AUAV-DSSWM technique encompasses several subprocesses, namely UAV image collection, image pre-processing, YOLOv3-based object detection, Adam optimizer based hyperparameter tuning, SM layer based classification, and PRO-based parameter optimization. The utilization of the Adam optimizer and PRO algorithm for the parameter tuning process results in enhanced weed detection performance. A detailed simulation analysis is carried out on the test UAV images, and the results are inspected under varying aspects. The comprehensive comparative results demonstrate the significant outcomes of the AUAV-DSSWM technique over other recent techniques. In future, the AUAV-DSSWM technique can be extended to the design of automated image annotation techniques to reduce the manual labelling task.

Acknowledgement: The authors would like to acknowledge the support provided by AlMaarefa University while conducting this research work.

Funding Statement: This research was supported by the Researchers Supporting Program (TUMA-Project-2021-27) Almaarefa University, Riyadh, Saudi Arabia. Taif University Researchers Supporting Project number (TURSP-2020/161), Taif University, Taif, Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

 1.  J. T. Sánchez, F. J. M. Carrascosa, F. M. J. Brenes, A. I. de Castro and F. L. Granados, “Early detection of broad-leaved and grass weeds in wide row crops using artificial neural networks and UAV imagery,” Agronomy, vol. 11, no. 4, pp. 749, 2021. [Google Scholar]

 2.  M. P. Ortiz, J. M. Peña, P. A. Gutiérrez, J. T. Sánchez, C. H. Martínez et al., “Selecting patterns and features for between-and within-crop-row weed mapping using UAV-imagery,” Expert Systems with Applications, vol. 47, pp. 85–94, 2016. [Google Scholar]

 3.  M. Du and N. Noguchi, “Monitoring of wheat growth status and mapping of wheat yield’s within-field spatial variations using color images acquired from UAV-camera system,” Remote Sensing, vol. 9, no. 3, pp. 289, 2017. [Google Scholar]

 4.  J. Rasmussen, J. Nielsen, F. G. Ruiz, S. Christensen and J. C. Streibig, “Potential uses of small unmanned aircraft systems (UAS) in weed research,” Weed Research, vol. 53, no. 4, pp. 242–248, 2013. [Google Scholar]

 5.  D. J. Mulla, “Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps,” Biosystems Engineering, vol. 114, no. 4, pp. 358–371, 2013. [Google Scholar]

 6.  V. Alchanatis, L. Ridel, A. Hetzroni and L. Yaroslavsky, “Weed detection in multi-spectral images of cotton fields,” Computers and Electronics in Agriculture, vol. 47, no. 3, pp. 243–260, 2005. [Google Scholar]

 7.  A. D. S. Ferreira, D. M. Freitas, G. G. D. Silva, H. Pistori and M. T. Folhes, “Weed detection in soybean crops using ConvNets,” Computers and Electronics in Agriculture, vol. 143, no. 11, pp. 314–324, 2017. [Google Scholar]

 8.  M. J. Expósito, F. L. Granados, J. L. G. Andújar and L. G. Torres, “Characterizing population growth rate of convolvulus arvensis in wheat-sunflower no-tillage systems,” Crop Science, vol. 45, no. 5, pp. 2106–2112, 2005. [Google Scholar]

 9.  C. Gée, J. Bossu, G. Jones and F. Truchetet, “Crop/weed discrimination in perspective agronomic images,” Computers and Electronics in Agriculture, vol. 60, no. 1, pp. 49–59, 2008. [Google Scholar]

10. G. Jones, C. Gée and F. Truchetet, “Assessment of an inter-row weed infestation rate on simulated agronomic images,” Computers and Electronics in Agriculture, vol. 67, no. 1–2, pp. 43–50, 2009. [Google Scholar]

11. F. L. Granados, “Weed detection for site-specific weed management: Mapping and real-time approaches: Weed detection for site-specific weed management,” Weed Research, vol. 51, no. 1, pp. 1–11, 2011. [Google Scholar]

12. N. Islam, M. M. Rashid, S. Wibowo, C. Y. Xu, A. Morshed et al., “Early weed detection using image processing and machine learning techniques in an australian chilli farm,” Agriculture, vol. 11, no. 5, pp. 387, 2021. [Google Scholar]

13. K. Osorio, A. Puerto, C. Pedraza, D. Jamaica and L. Rodríguez, “A deep learning approach for weed detection in lettuce crops using multispectral images,” AgriEngineering, vol. 2, no. 3, pp. 471–488, 2020. [Google Scholar]

14. N. Islam, M. M. Rashid, S. Wibowo, S. Wasimi, A. Morshed et al., “Machine learning based approach for weed detection in chilli field using RGB images,” in Int. Conf. on Natural Computation, Fuzzy Systems and Knowledge Discovery, Cham, Springer, pp. 1097–1105, 2020. [Google Scholar]

15. H. Huang, Y. Lan, J. Deng, A. Yang, X. Deng et al., “A semantic labeling approach for accurate weed mapping of high resolution UAV imagery,” Sensors, vol. 18, no. 7, pp. 2113, 2018. [Google Scholar]

16. M. D. Bah, E. Dericquebourg, A. Hafiane and R. Canals, “Deep learning based classification system for identifying weeds using high-resolution UAV imagery,” in Science and Information Conf., Cham, Springer, pp. 176–187, 2018. [Google Scholar]

17. J. Gao, W. Liao, D. Nuyttens, P. Lootens, J. Vangeyte et al., “Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery,” International Journal of Applied Earth Observation and Geoinformation, vol. 67, pp. 43–53, 2018. [Google Scholar]

18. M. Bah, A. Hafiane and R. Canals, “Deep learning with unsupervised data labeling for weed detection in line crops in UAV images,” Remote Sensing, vol. 10, no. 11, pp. 1690, 2018. [Google Scholar]

19. M. Gašparović, M. Zrinjski, Đ. Barković and D. Radočaj, “An automatic method for weed mapping in oat fields based on UAV imagery,” Computers and Electronics in Agriculture, vol. 173, no. 4, pp. 105385, 2020. [Google Scholar]

20. L. Zhou, G. Deng, W. Li, J. Mi and B. Lei, “A lightweight SE-YOLOv3 network for multi-scale object detection in remote sensing imagery,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 35, no. 13, pp. 2150037, 2021. [Google Scholar]

21. T. B. Do, H. H. Nguyen, T. T. N. Nguyen, H. Vu, T. T. H. Tran et al., “Plant identification using score-based fusion of multi-organ images,” in 2017 9th Int. Conf. on Knowledge and Systems Engineering (KSE), Hue, pp. 191–196, 2017. [Google Scholar]

22. S. H. S. Moosavi and V. K. Bardsiri, “Poor and rich optimization algorithm: A new human-based and multi populations algorithm,” Engineering Applications of Artificial Intelligence, vol. 86, no. 12, pp. 165–181, 2019. [Google Scholar]

23. K. Thirumoorthy and K. Muneeswaran, “Feature selection using hybrid poor and rich optimization algorithm for text classification,” Pattern Recognition Letters, vol. 147, no. 10, pp. 63–70, 2021. [Google Scholar]

24. K. Sudars, J. Jasko, I. Namatevs, L. Ozola and N. Badaukis, “Dataset of annotated food crops and weed images for robotic computer vision control,” Data in Brief, vol. 31, no. 1, pp. 105833, 2020. [Google Scholar]

25. R. Kamath, M. Balachandra and S. Prabhu, “Paddy crop and weed discrimination: A multiple classifier system approach,” International Journal of Agronomy, vol. 2020, no. 3 and 4, pp. 1–14, 2020. [Google Scholar]

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.