Open Access ARTICLE

An Adaptive Edge Detection Algorithm for Weed Image Analysis

Yousef Alhwaiti1,*, Muhammad Hameed Siddiqi1, Irshad Ahmad2

1 College of Computer and Information Sciences, Jouf University, Sakaka, Aljouf, 73211, Saudi Arabia
2 Department of Computer Science, Islamia College (Chartered University), Peshawar, 25000, Pakistan

* Corresponding Author: Yousef Alhwaiti.

Computer Systems Science and Engineering 2023, 47(3), 3011-3031. https://doi.org/10.32604/csse.2023.042110

Abstract

Weeds are among the most damaging agricultural pests and have a major influence on crops. They raise production costs by wasting crops and significantly affect the worldwide agricultural economy. The significance of this concern has motivated the research community to explore the use of technology for early-stage weed detection in support of farmers in agricultural fields. Several weed detection methods have been proposed for such fields; however, these algorithms still face challenges because they were implemented in controlled environments. Therefore, in this paper, a weed image analysis approach is proposed for a weed classification system. In this system, a Homomorphic filter is exploited in preprocessing to diminish environmental factors, while for feature extraction an adaptive method is proposed that exploits edge detection. The proposed technique estimates the directions of the edges while accounting for non-maximum suppression. This method has several benefits, including its ease of use and its ability to extend to other types of features. Typically, low-level details in the form of features are extracted to identify weeds, and additional techniques for detecting cultured weeds are applied when necessary. In the processing of weed images, certain edges may appear as a step function, in which case our technique may outperform other operators such as gradient operators. The relevant details are extracted to generate a feature vector that is then given to a classifier for weed identification. Finally, the features are used in logistic regression for weed classification; the resulting model accurately identified different kinds of weed images in naturalistic domains. The proposed approach attained a weighted average recognition rate of 98.5% on the weed images dataset. Hence, the proposed approach may help weed classification systems accurately identify narrow and broad weeds captured in real environments.


1  Introduction

Precision farming has the potential for higher output and lower production costs while making the greatest use of available resources and reducing environmental impact [1]. Imaging tools provide useful information to identify in-field heterogeneities in precision agriculture [2].

Computer-assisted automatic herbicide spray control systems present not only a technical but also a significant economic problem. Weeds harm crops because they compete with them for resources such as water, light, nutrients, and space, resulting in lower crop yields and less effective use of machinery. Various spraying techniques are associated with herbicide treatments, such as selective spraying, spot spraying, and intermittent spraying. Farmers therefore need alternatives for weed control, since they want to use fewer chemicals, pay less for production, and waste less time (i.e., the time consumed by hand hoeing).

Precision agriculture still faces substantial challenges with the classification of weed images, and the enormously complex natural characteristics of weeds are the main cause of this difficulty [3]. Weeds are pervasive and appear in a variety of forms; their arbitrary growth and wide range of leaf sizes and shapes produce a variety of textural traits. Farmers need an accurate and robust weed classification system that automatically controls various kinds of herbicides in different agricultural fields.

Generally, a typical weed classification system has four steps: preprocessing, feature extraction, feature selection, and classification. In the preprocessing step, unnecessary details that usually decrease the efficiency of the system are diminished. In the feature extraction step, the dimensionality of the feature space is reduced, meaning that the raw data of every image is converted into more manageable groups of information. Although the information is organized in the feature extraction step, it may still contain some redundancy. Therefore, in the feature selection step, the most prominent and informative features are selected to form a feature vector. Finally, in the classification step, the incoming weed image is labeled based on the trained information.

Many systems have been proposed for weed image analysis in various agricultural fields for herbicide applications. The authors of [4] presented a deep learning-based weed detection approach for bell pepper fields. They employed various deep learning methods such as Xception, GoogLeNet, AlexNet, and InceptionV3 for the identification of weed images in bell pepper fields and achieved the best performance against weed images. Similarly, a custom lightweight deep learning approach was designed by [5] for weed identification in soybean fields. They utilized a vision-based method with deep learning models such as ResNet50, MobileNetV2, and convolutional neural networks (CNNs), which significantly identified the weed images, and they claimed significant performance. Likewise, the authors of [6] proposed an integrated system for the classification of weeds and crop species. This system was based on texture features and support vector machines coupled with deep learning models such as VGG16. Moreover, a robust feature selection method was employed to select the most prominent features for model prediction. The system was tested and validated on a large dataset and showed the best classification accuracy. On the other hand, various deep learning models were combined by [7] for the recognition of weed images. They assessed their system through multiple experiments on different small datasets. Moreover, they used transfer learning by preserving pre-trained weights to select features and adapt them to crop and weed datasets. Furthermore, in [8], various kinds of weed images were recognized based on different types of features. The authors employed shape, texture, and color-based feature extraction methods to extract the most important features, followed by a support vector machine coupled with a deep learning model that recognized different types of weed images in an agricultural field. They showed comparable recognition rates on small datasets. The latest architecture for the detection of various kinds of weed images was designed by [9], based on deep learning models such as EfficientNetB7, MobileNetV2, ResNet152V2, EfficientNetB1, and DenseNet121. Moreover, they utilized techniques such as zoom, height shift, width shift, rotation, and horizontal flip for data augmentation. They employed a small dataset to demonstrate the performance of their system and claimed the best recognition rate. However, most of these systems are tested and validated on small datasets in static environmental domains, and their classification accuracies degrade in naturalistic agricultural domains.

Therefore, in this work, an accurate and robust weed image analysis and classification framework is developed, which has the following main contributions:

•   To reduce the impact of environmental factors, the Homomorphic filter is utilized for preprocessing. This technique involves nonlinear mapping of the image to a different domain, where linear filter techniques are applied, and then the image is mapped back to its original domain.

•   For feature extraction, an adaptive method is designed that calculates edge direction while taking non-maximum suppression into account. This method has the advantage of being easy to use and can be extended to extract other types of features. To identify weeds, low-level features are typically extracted, and cultured weed detection procedures are used as needed. However, the presence of features called sludge can complicate edge detection for various operators. The feature vectors are generated through hysteresis thresholding, with manual selection of thresholds for optimal performance. This method improves the SNR and enhances separation and localization. Our approach may outperform other operators when detecting certain types of edges. The resulting feature vectors are fed into a logistic regression classifier for weed image analysis. The significance of the proposed feature extraction technique is evaluated through a comprehensive experimental setup on a dataset containing three types of weed images.

•   After feature extraction, the weeds are identified using logistic regression, which determines the crucial pixels for deciding the class of a sample. The per-class probability is calculated as a conditional probability for each class, and the predicted label is the class with the highest probability.

The remainder of this article is structured as follows: Section 2 presents related studies along with their respective shortcomings. Section 3 briefly describes the designed framework for weed image analysis. Sections 4 and 5 present the experimental setup and the corresponding results for the proposed framework. Finally, Section 6 concludes the weed image analysis framework and outlines some future directions.

2  Related Work

Weed image analysis is a complex and vital issue in precision agriculture. Various systems have been designed for the identification of different types of weed images in diverse agricultural domains; they are reviewed below along with their respective shortcomings.

A combined system was designed by [10] based on a deep convolutional neural network (VGG16) coupled with a support vector machine (SVM) for the identification of weeds and crops. It achieved the best recognition rate on a public dataset. However, the images used in this system have low spatial and temporal information, which is not suitable for such a framework; moreover, in terms of performance, SVM has several limitations [11]. A state-of-the-art regional convolutional neural network (R-CNN)-based approach was proposed by [12] for the automatic prediction of weed images in agricultural fields. In this approach, the authors also segmented various kinds of weeds from the input images and claimed significant performance on a weed images dataset. Although the performance of R-CNN is comparatively better, the selection of regions of interest makes this system computationally expensive on large weed datasets [13].

On the other hand, a fused feature extraction algorithm was designed by [14] that relied on random forest, support vector machine, and k-nearest neighbors for the analysis of weed and crop images. The authors compared the performance of each method on weed and crop images and claimed that random forest and support vector machine outperform the others. However, SVM performance has several limitations, and the method was validated only on a small weed dataset. Furthermore, an architecture was developed by [15] based on a faster region convolutional neural network (Faster R-CNN) coupled with ResNet-101 for the identification of different types of weeds. In this architecture, different combinations of anchor boxes were used in the respective networks to improve identification accuracy. Again, although Faster R-CNN performs comparatively well, region-of-interest selection makes this approach computationally expensive on large weed datasets [13].

An alternative convolutional neural network (CNN)-based system was proposed by [16] for the detection of weeds. This system has two phases: in the first, images were collected and labeled and features were extracted from the base image; in the second, a 20-layer CNN was constructed and coupled with pooling and dense layers. The model was trained and tested on a public dataset and achieved better performance. However, the system was validated on very small datasets, and its accuracy decreases on large datasets. Likewise, the authors of [17] designed an integrated deep neural network based on ResNet-50, Inception-ResNet-v2, VGG16, MobileNetV2, and Inception-V3 for the recognition of various types of weeds. They validated their system on a combination of multiple small datasets and achieved better performance; however, the approach is not efficient because it relies on several deep learning models. Recent vision-based techniques are described in [18] for the detection of weeds within weed and crop fields. The authors exploited unsupervised stepwise linear discriminant analysis for feature extraction in a weed classification system, and the categories were identified using an SVM. They claimed the best classification accuracy. However, this approach cannot perform accurate discrimination in dynamic circumstances due to time constraints [19]. Moreover, on large datasets, the classification accuracy of linear-kernel SVM is affected by unstable training sample points [20].

On the other side, an integrated approach was developed by [21] based on random forest, kNN, decision tree, and the YOLOv5 neural network for the detection of different types of weeds. The proposed approach was assessed on a public dataset and achieved an 84% weighted average recognition rate under static circumstances. However, in this approach, the distance to the k nearest neighbors is calculated anew for every incoming sample, which makes it quite expensive [22]. Similarly, a naturalistic approach was presented by [23,24] for the classification of weeds in corn fields. The authors extracted regions of interest (ROIs) through connected component analysis and performed classification via CNN. The approach was tested and validated on a large dataset and claimed the best classification accuracy. However, training CNNs on plant species across various growth phases and environments is a massive undertaking that may require the joint effort of several working groups [25]. Likewise, a state-of-the-art technique was developed by [26] based on deep learning for the classification of weeds. The authors utilized a CNN coupled with long short-term memory (LSTM): the CNN extracts discriminative features from the input image owing to its unique structure, while the LSTM optimizes the classification. The technique was tested and validated on a public dataset and showed significant performance. However, LSTMs cannot encode temporal dependencies that extend beyond a limited number of steps [27].

The latest methods were proposed by [28,29] for the identification of weeds, based on various deep learning methods such as Faster R-CNN, YOLOv3, and YOLOv5; they achieved 92% and 85%, respectively, on the weed dataset. Similarly, emerging systems were proposed by [30,31] for weed classification, in which the authors combined GoogLeNet with AlexNet and DenseNet169 with MobileNetV2, respectively. In each pairing, one algorithm was employed to achieve a higher recognition rate while the other was utilized to reduce complexity. Furthermore, a unified image processing and IoT-based method was developed by [32] for the detection of weeds. The authors exploited a CNN to deliver a complete architecture for agricultural domains; the model classifies grayscale and color-segmented images, for which they claimed higher classification accuracy. A modified algorithm based on a line filter was designed by [33] for the analysis of weed images. This algorithm successfully distinguished morphological variances, such as the directions of shape features, between two corresponding weed images. The authors demonstrated its performance on binary classification using a weed image dataset and claimed the best classification accuracy. However, these methods are computationally expensive, and their classification accuracies degrade on images taken in various weather conditions.

Accordingly, this work presents an accurate, robust, and dynamic approach to the classification of various kinds of weed images. The proposed approach has been tested and validated on a dataset collected in naturalistic environmental settings at various times of day (morning, noon, afternoon, and evening) and in various weather conditions (cloudy, sunny, and rainy). The proposed approach showed significant performance compared with existing state-of-the-art works on this weed images dataset.

3  Proposed Methodology

The overall working diagram for the proposed approach is given in Fig. 1.


Figure 1: The overall flowchart of the proposed feature extraction approach

3.1 Removing Environmental Factors Using Homomorphic Filter

An image can be considered a 2D function Im(i,j), where the amplitude at spatial coordinates (i,j) is a positive scalar quantity. The physical interpretation of this value is determined by the source of the image. For instance, when an image is produced by a physical process, its values are proportional to the energy radiated by the physical source; that is, the image is an array of measured light intensities that depends on the amount of light reflected by the objects in the scene. The function Im(i,j) has two components: the illumination, which represents the light incident on the viewed scene, and the reflectance, which represents the light reflected by the objects in the scene. Denoting the illumination and reflectance by f(i,j) and g(i,j), respectively, an image can be expressed as in Eq. (1).

Im(i,j) = f(i,j) × g(i,j)    (1)

The illumination-reflectance model described above is a well-known model of image formation. The illumination component of an image typically exhibits slow spatial variations, while the reflectance component tends to vary abruptly, especially at the junctions of dissimilar objects. When the illumination distribution is too strong, objects in an image can be difficult to distinguish. To address this issue, Homomorphic filtering can be utilized: a frequency-domain filtering technique that enhances the reflectance while reducing the contribution of the illumination. As a result, objects in the enhanced image can be more easily discerned.

Operating directly on the frequency components of illumination and reflectance via Eq. (1) is not feasible, since the Fourier transform of a product of two functions is not separable; that is, it is difficult to apply a filter to illumination and reflectance independently. A straightforward solution is to take the natural logarithm of both sides of Eq. (1), giving Eq. (2).

y(i,j) = log[Im(i,j)] = log[f(i,j)] + log[g(i,j)]    (2)

Applying the Fourier transform yields Eq. (3).

T{y(i,j)} = T{log[Im(i,j)]} = T{log[f(i,j)]} + T{log[g(i,j)]}    (3)

Or, it may be summarized as in Eq. (4), where Im_i(x,y) and Im_r(x,y) denote the Fourier transforms of the log-illumination and log-reflectance components, respectively:

Z(x,y) = Im_i(x,y) + Im_r(x,y)    (4)

By applying a filter function H(x,y) in the frequency domain, a high-pass version S(x,y) of Z(x,y) is obtained, as in Eq. (5).

S(x,y) = H(x,y) × Z(x,y) = H(x,y) × Im_i(x,y) + H(x,y) × Im_r(x,y)    (5)

Since Z(x,y) is the sum of Im_i(x,y) and Im_r(x,y), the filter can operate independently on the illumination and reflectance components. To return to the spatial domain, the inverse Fourier transform is applied, as given in Eq. (6).

s(i,j) = T^(-1){S(x,y)} = T^(-1){H(x,y) × Im_i(x,y)} + T^(-1){H(x,y) × Im_r(x,y)}    (6)

Finally, the filtered image Im′(i,j) can be obtained by performing an exponential operation, as shown in Eq. (7).

Im′(i,j) = e^(s(i,j))    (7)

The high-pass filter used in this procedure is typically a Butterworth or a Gaussian filter. The Butterworth form is defined in Eq. (8).

H(x,y) = (g_H − g_L) · 1 / (1 + [D_0 / D(x,y)]^(2n)) + g_L    (8)

where n defines the order of the Butterworth filter. The values of the parameters g_H and g_L are selected such that g_L < 1 and g_H > 1, so that the filter reduces the contribution of low frequencies and enhances the contribution of high frequencies. The cutoff distance from the center is denoted by D_0, and D(x,y) is calculated as in Eq. (9).

D(x,y) = [(x − R/2)^2 + (y − C/2)^2]^(1/2)    (9)

where R and C represent the number of rows and columns of the image, respectively. The Gaussian form is described in Eq. (10).

H(x,y) = (g_H − g_L) · [1 − e^(−n · D^2(x,y) / D_0^2)] + g_L    (10)

where the constant n is introduced to control the slope of the filter function and determine its sharpness.
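To make the pipeline of Eqs. (1)-(10) concrete, the following NumPy sketch applies the log transform, a centered FFT, the Butterworth-style high-emphasis filter, and the inverse transform. The paper's experiments were run in MATLAB, so this Python version is only illustrative, and the parameter values (g_h, g_l, d0, n) are assumptions rather than the tuned settings used in the paper.

```python
import numpy as np

def homomorphic_filter(im, g_h=1.5, g_l=0.5, d0=30.0, n=2):
    """Sketch of homomorphic filtering (Eqs. (1)-(10)); all parameter
    values are illustrative, not the paper's tuned settings."""
    im = np.asarray(im, dtype=np.float64) + 1.0      # shift to avoid log(0)
    rows, cols = im.shape

    # Eq. (2): the log makes the illumination-reflectance product additive.
    log_im = np.log(im)

    # Eqs. (3)-(4): Fourier transform of the log image, centered.
    Z = np.fft.fftshift(np.fft.fft2(log_im))

    # Eq. (9): distance of every frequency sample from the center.
    x = np.arange(rows)[:, None]
    y = np.arange(cols)[None, :]
    D = np.sqrt((x - rows / 2.0) ** 2 + (y - cols / 2.0) ** 2)
    D[D == 0] = 1e-6                                  # guard the DC term

    # Eq. (8): Butterworth high-emphasis filter with g_l < 1 < g_h.
    H = (g_h - g_l) / (1.0 + (d0 / D) ** (2 * n)) + g_l

    # Eqs. (5)-(7): filter, invert the transform, exponentiate.
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.exp(s) - 1.0
```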

3.2 Edge Detection-Based Feature Extraction Technique

Analysis of the Taylor series reveals that taking the difference between neighboring pixels provides an estimate of the derivative at a given point. Considering points of an image separated by a distance Δx, the Taylor expansion of f(x + Δx) is given in Eq. (11).

f(x + Δx) = f(x) + Δx · f′(x) + (Δx^2 / 2!) · f″(x) + O(Δx^3)    (11)

Rearranging for f′(x) gives Eq. (12).

f′(x) = [f(x + Δx) − f(x)] / Δx − O(Δx)    (12)

These observations signify that differences between adjacent pixels provide an estimate of the first-order derivative, with an accompanying error term O(Δx) that depends on the magnitude of Δx and the complexity of the edge; a large Δx may result in a higher error. This assumption is appropriate for fast feature point selection and for the compression of high-frequency content during training. Specifically, this involves computing the first-order difference between two adjacent pixels along the horizontal axis, denoted Egii, as expressed in Eq. (13).

Egii_{x,y} = Egi_{x+1,y} + Egi_{x,y} = (ρ_{x+1,y} − ρ_{x,y}) + (ρ_{x,y} − ρ_{x−1,y}) = ρ_{x+1,y} − ρ_{x−1,y}    (13)

which is equivalent to incorporating spacing between the differenced pixels to observe the edges Egii, as presented in Eq. (14):

Egii_{x,y} = |ρ_{x+1,y} − ρ_{x−1,y}|    ∀ x ∈ [2, N−1], y ∈ [1, N]    (14)

Moreover, to assess the Taylor series from the other side, f(x − Δx) is expanded in Eq. (15).

f(x − Δx) = f(x) − Δx · f′(x) + (Δx^2 / 2!) · f″(x) − O(Δx^3)    (15)

Combining Eqs. (11) and (15) gives Eq. (16).

f′(x) = [f(x + Δx) − f(x − Δx)] / (2Δx) − O(Δx^2)    (16)

Eq. (16) gives an approximation based on image pixels separated by one point, with an error of O(Δx^2). When Δx < 1, this error is significantly lower than the O(Δx) error of the adjacent-pixel difference in Eq. (12), so the central difference may be exploited for noise reduction, as in Eq. (17).

Eg_{x,y} = max{ |K⁺ ∗ ρ_{x,y}|, |K⁻ ∗ ρ_{x,y}| }    ∀ x, y ∈ [1, N−1]    (17)
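As one concrete reading of Eqs. (13)-(17), the sketch below computes the one-sample-apart differences on both axes and keeps the stronger directional response at each pixel. The function name and the use of NumPy slicing are illustrative choices, not the paper's implementation.

```python
import numpy as np

def central_difference_edges(rho):
    """Edge estimate from pixels one sample apart (Eqs. (14) and (16)),
    combined by keeping the larger directional response (Eq. (17))."""
    rho = np.asarray(rho, dtype=np.float64)
    ex = np.zeros_like(rho)
    ey = np.zeros_like(rho)
    # Eq. (14): |rho(x+1, y) - rho(x-1, y)| on the interior pixels;
    # the O(dx^2) error is smaller than the O(dx) adjacent difference.
    ex[1:-1, :] = np.abs(rho[2:, :] - rho[:-2, :])
    ey[:, 1:-1] = np.abs(rho[:, 2:] - rho[:, :-2])
    # Eq. (17): keep the maximum of the two directional responses.
    return np.maximum(ex, ey)
```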

During the implementation of edge detection, the magnitude of the edge vector and the direction of the edges are measured. The template that yields the peak value during convolution is kept as the edge value at the corresponding pixel; this highest value Eg(x,y) is obtained from the two template convolutions at point ρ(x,y). An alternative to taking the maximum is simply to sum the results of the two templates to generate edge vectors along the x-axis and y-axis, respectively. These two templates are facilitating mechanisms that can detect different kinds of edges. Edge detection must differentiate between variations due to noise and step-like differences in image intensity; hence, in practice, it includes averaging in the edge detection process. The horizontal (Kx) and vertical (Ky) templates can be convolved along their corresponding rows and columns to deliver two results: the response on each axis and the size of the edge. The magnitude K and angle φ of the edge vector are described in Eqs. (18) and (19), respectively.

K(x,y) = √( Kx(x,y)^2 + Ky(x,y)^2 )    (18)

φ(x,y) = tan^(−1)( Kx(x,y) / Ky(x,y) )    (19)

The magnitude K and the direction of the edge vector can thus be determined from Kx and Ky. The proposed technique exploits the Sobel filter, which outperforms other contemporary operators such as the plain gradient filter and also considers optimal averaging measures and alterations. In this approach, two windows are utilized that provide two coefficient sets in triangle form, as shown in Fig. 2.


Figure 2: (a) shows the Pascal triangle filter for addition; while (b) presents the Pascal triangle filter for subtraction using a group of coefficients

In Fig. 2a, the rows display the coefficients of the expanded smoothing filter, which represents an optimal discrete smoothing operator; the smoothing coefficients in the Sobel operator are 3 × 3 in size. Fig. 2b presents the Pascal triangle coefficients employed for subtraction. These coefficients can be obtained by subtracting two adjacent expansions of the next smaller mask size. Hence, a function is needed that delivers the Pascal triangle coefficient for the filter parameters, namely the size ρ and position β. This function, denoted Pascal(β,ρ), is demonstrated in Eq. (20).

Pascal(β,ρ) = ρ! / ((ρ − β)! × β!)  if 0 ≤ β ≤ ρ,  and 0 otherwise    (20)
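A short sketch of Eq. (20): the addition rows of the Pascal triangle smooth, the difference of two adjacent expansions gives the subtraction coefficients, and their outer product reproduces 3 × 3 Sobel-style templates. The helper name pascal_row is ours, and this construction is one common reading of the Pascal-triangle derivation rather than the paper's exact code.

```python
from math import comb
import numpy as np

def pascal_row(rho, subtract=False):
    """Eq. (20): Pascal(beta, rho) = rho! / ((rho - beta)! * beta!)
    for 0 <= beta <= rho. With subtract=True, two adjacent expansions
    of size rho-1 are differenced to give the subtraction coefficients."""
    if subtract:
        shorter = np.array([comb(rho - 1, b) for b in range(rho)], float)
        return np.append(shorter, 0.0) - np.append(0.0, shorter)
    return np.array([comb(rho, b) for b in range(rho + 1)], dtype=float)

smooth = pascal_row(2)                  # [1, 2, 1], Fig. 2a row
diff = pascal_row(2, subtract=True)     # [1, 0, -1], Fig. 2b row
Kx = np.outer(smooth, diff)             # 3x3 horizontal Sobel template
Ky = np.outer(diff, smooth)             # 3x3 vertical Sobel template
```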

For the measurement of edges, the Sobel operator presents four possible arrangements of the templates, as given in Fig. 3.


Figure 3: Measurement of the direction of the edges; (a) shows (Kx, Ky), (b) presents (–Kx, Ky), (c) describes (Ky, Kx), and (d) is (–Ky, –Kx)

Fig. 3 illustrates that the reversed template of Kx does not capture the discontinuity at the corners; that is, the edge magnitude produced by the Sobel filter is not square-shaped but resembles that produced by applying other filters. If the templates of the Sobel operator are changed, the measured edge directions are rearranged accordingly. However, if particular edges need to be found, this rearrangement may help in constructing an algorithm to find the target. Once all edges along with their respective directions are identified, the details are stored in feature vector form and used in logistic regression to identify different kinds of weed images.
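Putting Section 3.2 together, here is a minimal sketch of the full feature extraction chain: Sobel responses, magnitude and direction (Eqs. (18)-(19)), non-maximum suppression, and hysteresis thresholding, summarized into a direction-histogram feature vector. The thresholds, the 8-bin histogram, and the full-quadrant arctan2 form are our illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import convolve, label

def edge_feature_vector(im, t_low=0.1, t_high=0.3, n_bins=8):
    """Sobel edges + non-maximum suppression + hysteresis thresholding,
    summarized as a histogram of edge directions (illustrative choices)."""
    im = np.asarray(im, dtype=np.float64)
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    gx = convolve(im, kx)                 # horizontal template Kx
    gy = convolve(im, kx.T)               # vertical template Ky

    mag = np.hypot(gx, gy)                # Eq. (18)
    ang = np.arctan2(gy, gx)              # Eq. (19), full-quadrant form

    # Non-maximum suppression: a pixel survives only if it is at least as
    # strong as its two neighbours along the quantized edge direction.
    q = (np.round(ang / (np.pi / 4)) % 4).astype(int)
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    pm = np.pad(mag, 1)
    nms = np.zeros_like(mag)
    R, C = mag.shape
    for d, (dy, dx) in offsets.items():
        ahead = pm[1 + dy:R + 1 + dy, 1 + dx:C + 1 + dx]
        behind = pm[1 - dy:R + 1 - dy, 1 - dx:C + 1 - dx]
        keep = (q == d) & (mag >= ahead) & (mag >= behind)
        nms[keep] = mag[keep]
    if nms.max() == 0:                    # flat image: no edges at all
        return np.zeros(n_bins)

    # Hysteresis: strong edges seed connected regions of weak edges.
    strong = nms >= t_high * nms.max()
    weak = nms >= t_low * nms.max()
    labels, _ = label(weak)
    kept = np.unique(labels[strong])
    edges = np.isin(labels, kept[kept > 0])

    # Feature vector: normalized histogram of directions on kept edges.
    hist, _ = np.histogram(ang[edges], bins=n_bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)
```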

3.3 Identification of Weeds Images Using Logistic Regression

Logistic regression is a widely used linear model for the classification of various images. It uses a logistic function to model the likelihood of the possible outcomes of a single trial. The model may be binomial or multinomial, and it can be regularized with ℓ1, ℓ2, or Elastic-Net penalties. For binary-class ℓ2-regularized logistic regression, the optimization problem minimizes the cost function of Eq. (21).

min_{m,n} (1/2) m^T m + N Σ_{x=1}^{n} log(exp(−j_x (I_x^T m + n)) + 1)    (21)

Likewise, ℓ1-regularized logistic regression minimizes the function of Eq. (22).

min_{m,n} ‖m‖_1 + N Σ_{x=1}^{n} log(exp(−j_x (I_x^T m + n)) + 1)    (22)

Elastic-Net regularization integrates ℓ1 and ℓ2 and minimizes the function of Eq. (23).

min_{m,n} ((1 − Φ)/2) m^T m + Φ ‖m‖_1 + N Σ_{x=1}^{n} log(exp(−j_x (I_x^T m + n)) + 1)    (23)

where Φ controls the relative strength of the ℓ1 regularization vs. the ℓ2 regularization. Note that, in this representation, the target variable j_x is assumed to take values in the set {−1, 1} at trial x. Furthermore, when Φ = 1 the Elastic-Net is equivalent to ℓ1 regularization, and when Φ = 0 it is equivalent to ℓ2. For a more thorough explanation of logistic regression, please refer to [34].
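As one possible realization of Eqs. (21)-(23) in scikit-learn [34], the sketch below builds an Elastic-Net logistic regression classifier; the solver, l1_ratio, and C values are illustrative assumptions, with l1_ratio playing the role of Φ.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Elastic-Net logistic regression as in Eq. (23). The saga solver
# supports the combined penalty; l1_ratio=1.0 reduces to Eq. (22)
# and l1_ratio=0.0 to Eq. (21), matching the role of Phi.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
# X: (n_samples, n_features) edge-feature vectors from Section 3.2;
# y: weed class labels. Both are placeholders here.
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```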

4  Methodology Evaluation

The proposed approach has been assessed under the following settings.

4.1 Weed Images Dataset

The dataset used in this study consists of 3000 images (.jpg files) divided into three classes: broad, narrow, and mixed weeds. The images were taken on real agricultural farms in various naturalistic scenarios: morning, noon, afternoon, and evening, on sunny, cloudy, and rainy days. Most of the pictures were taken on different farms in the Swat valley, Pakistan. Moreover, the images were taken at different angles, and the camera position was varied during collection to add robustness to the dataset. The images were captured with a Canon 800D RGB camera and manually annotated. For a fair comparison, every image is normalized to 340 × 280 pixels. Sample images of the three categories under different conditions are presented in Fig. 4.


Figure 4: Sample images of the broad, narrow, and mixed weeds in various environmental conditions

4.2 Experimental Arrangements

The proposed model has been assessed through the following comprehensive set of experiments. All experiments were performed in MATLAB on a machine with a 3.7 GHz processor and 8 GB RAM in offline lab settings.

•   The first experiment describes the significance of the proposed approach on the state-of-the-art dataset under a k-fold cross-validation scheme, with k = 10 in our case (a minimal sketch of this protocol follows the list).

•   The second experiment demonstrates the strength of the developed approach in weed classification systems. In this experiment, a comprehensive set of sub-experiments is performed using various existing feature extraction methods in place of the proposed technique.

•   The last experiment compares the proposed technique with the latest existing works under the exact conditions presented in their respective papers. Moreover, in this experiment, the significance of the developed model and the existing works is evaluated with various evaluation indicators.
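As referenced in the first bullet, here is a minimal sketch of the 10-fold protocol. The original experiments were run in MATLAB; this Python version is only illustrative, and X, y, and clf are assumed to come from the sketches in Section 3.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stratified 10-fold cross-validation, so that each fold preserves the
# broad/narrow/mixed class proportions. X (feature matrix), y (labels),
# and clf (the Section 3.3 pipeline) are assumed to be defined already.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```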

5  Experimental Results

The comprehensive set of experiments is presented in the following order.

5.1 Assessment of the First Experiment

The first experiment shows the performance of the developed technique on the state-of-the-art weed images dataset. The overall classification accuracy is presented in Fig. 5 and Table 1.


Figure 5: Accuracy of the proposed approach against weed images dataset


The results presented in Fig. 5 and Table 1 demonstrate that the proposed approach achieved significantly high classification accuracy on the weed images dataset. This high performance is attributed to the calculation of edge directions in the presence of non-maximum suppression. Furthermore, when processing weed images, some edges might appear as a step function, in which case the proposed approach may outperform other operators.

5.2 Assessment of the Second Experiment

The second experiment demonstrates the significance of the developed approach in the weed classification system. In this experiment, existing state-of-the-art feature extraction methods, such as partial least squares, speeded-up robust features, semidefinite embedding, the wrapper method, independent component analysis, gray texture features, latent semantic analysis, fusion features, local binary patterns, and the curvelet transform, are utilized in place of the proposed edge detection method. The overall results are described in Fig. 6 and Table 2.


Figure 6: Accuracy of the system using existing well-known methods instead of the developed approach on the weed images dataset


The results presented in Fig. 6 and Table 2 indicate that the system did not achieve the highest classification accuracy with the existing well-known methods, whereas the proposed feature extraction technique demonstrated significant accuracy. This can be attributed to the fact that the developed model computes the edge directions using non-maximum suppression, providing benefits in terms of simplicity, modularity, and the potential for extension to other kinds of features. Typically, it is beneficial to extract additional low-level details in the form of features when detecting edges.

5.3 Comparison

The classification accuracy of the developed technique is compared with state-of-the-art systems in the last experiment. The existing systems were run under the exact settings presented in their respective papers; some were reimplemented, while for others existing implementations were borrowed. The overall comparison results, along with the corresponding elapsed times, are presented in Fig. 7 and Table 3.


Figure 7: Comparison of the developed approach with the latest existing systems using the weed images dataset


As illustrated in Fig. 7 and Table 3, the developed approach attained the best classification accuracy on the weed images dataset, and it also showed the best efficiency rate during the identification of various weeds. This can be attributed to the proposed method's handling of weed images in which certain edges appear as a step function, where our technique may outperform other operators such as gradient operators. The relevant details are extracted to generate a feature vector that is then given to a classifier for weed identification.

5.4 Evaluation Results Against Various Indicators

Furthermore, the proposed approach and the existing works are assessed with various evaluation indicators: accuracy (Eq. (24)), precision (Eq. (25)), recall (Eq. (26)), and F1-score (Eq. (29)). The following formulas are utilized for evaluation.

Accuracy (A) = (T_positive + T_negative) / (T_positive + T_negative + F_positive + F_negative)    (24)

Precision (P) = T_positive / (T_positive + F_positive)    (25)

Recall (R) = T_positive / (T_positive + F_negative)    (26)

The average precision (AP) and average recall (AR) are respectively calculated in Eqs. (27) and (28).

Average Precision (AP) = [P(Broad) + P(Narrow) + P(Mixed)] / 3    (27)

Average Recall (AR) = [R(Broad) + R(Narrow) + R(Mixed)] / 3    (28)

Here, AP denotes the mean of the per-class precision and AR the mean of the per-class recall over all classes in the dataset.

F1-score = [2 × (AP × AR) / (AP + AR)] × 100%    (29)
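The sketch below computes these indicators from a 3 × 3 confusion matrix; the row/column layout (rows as ground truth, columns as predictions) and the example matrix values are our assumptions for illustration.

```python
import numpy as np

def evaluation_indicators(cm):
    """Accuracy, macro-averaged precision/recall, and F1-score from a
    confusion matrix, following Eqs. (24)-(29)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    accuracy = tp.sum() / cm.sum()             # Eq. (24)
    precision = tp / cm.sum(axis=0)            # Eq. (25), per class
    recall = tp / cm.sum(axis=1)               # Eq. (26), per class
    ap = precision.mean()                      # Eq. (27)
    ar = recall.mean()                         # Eq. (28)
    f1 = 2 * ap * ar / (ap + ar) * 100         # Eq. (29), in percent
    return accuracy, ap, ar, f1

# Example with a hypothetical broad/narrow/mixed confusion matrix:
cm = [[98, 1, 1], [2, 97, 1], [1, 1, 98]]
print(evaluation_indicators(cm))
```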

The complete assessment results of the proposed approach and the existing works for the above evaluation indicators are shown in Table 4.


As illustrated in Table 4, the proposed approach and the existing works have been assessed with various evaluation indicators, and the proposed approach shows significant performance compared to the existing work.

5.5 Discussion

As described before, the proposed technique has been tested and validated using a naturalistic weed dataset. The dataset contains three classes (broad, narrow, and mixed) captured under various conditions: morning, afternoon, and evening, on sunny, cloudy, and rainy days. The proposed approach showed a significant recognition rate compared to state-of-the-art research (as shown in Fig. 5 and Table 1). This is because the developed feature extraction technique computes the edge directions while accounting for non-maximum suppression. This method has several benefits, including its ease of use and its ability to extend to other types of features. Typically, low-level details in the form of features are extracted to identify weeds, and additional techniques for detecting cultured weeds are utilized when necessary. In the processing of weed images, certain edges may appear as a step function, and our technique may then outperform other operators such as gradient operators.

Furthermore, the weed classification system was tested with various existing well-known feature extraction methods, with the proposed feature extraction method removed for these experiments (as shown in Fig. 6 and Table 2). The results show that the weed classification system is unable to achieve the best classification accuracy without the proposed feature extraction method. Similarly, the classification accuracy of the developed method is compared against existing works (as shown in Fig. 7 and Table 3), which illustrates that the proposed approach is accurate and robust on the naturalistic weed images dataset.

Likewise, the proposed approach and the existing works are assessed with various evaluation indicators such as precision, recall, and F1-score (as shown in Table 4). As can be seen, the proposed approach provides better precision and recall than the existing systems.

The accuracy (Eq. (24)) presents the ratio of examples across all categories that were accurately classified; however, it does not convey how the approach performs on each weed category. The average precision (Eq. (27)) indicates the ratio of correct categorizations over all predictions made for the corresponding weed category. The average recall (Eq. (28)), on the other hand, presents the ratio of correct categorizations over the total ground truth for the corresponding weed category. The appropriate level of significance for these metrics varies depending on the research domain.

6  Conclusions

Weeds harm crops because they compete with them for resources such as water, light, nutrients, and space, resulting in lower crop yields and less effective use of machinery. Various spraying techniques are associated with herbicide treatments, such as selective spraying, spot spraying, and intermittent spraying. Therefore, in this study, an adaptive system is developed for the analysis of various kinds of weeds.

In this system, a Homomorphic filter is employed to address environmental factors such as lighting effects and noise. The Homomorphic filter normalizes the brightness across the image while simultaneously enhancing the contrast. Improperly illuminated images present a challenge because illumination and reflectance are difficult to separate; however, their approximate positions in the frequency domain can be located. Since illumination and reflectance combine multiplicatively, the logarithm of the image intensity is taken to make these components additive, which allows the multiplicative components of the image to be separated linearly in the frequency domain. Illumination variations can be treated as a form of multiplicative noise and reduced by filtering in the log domain. Additionally, an adaptive feature extraction method is designed for weed classification systems. The proposed approach calculates the edge directions in the presence of non-maximum suppression. The advantages of this method lie in its simplicity, relying on straightforward procedures, and in its versatility for extracting other types of features. Typically, when identifying weeds, it is useful to extract additional low-level details in the form of features, and to obtain the appropriate information, an additional process for detecting cultured weeds is either used or discarded. Moreover, the weed image dataset contains features known as sludge, which can be observed with different kinds of edge extraction operators, even in the presence of noise. Hysteresis thresholding is utilized to generate the feature vectors in the developed technique, with manual selection of thresholds for optimal performance. The elementary factors in this method respond to noise, making it difficult to select a threshold that reveals a significant part of the sludge borderline. However, this approach improves the SNR and achieves a high degree of separation and localization. Furthermore, when processing weed images, some edges might appear as a step function, in which case the proposed approach may outperform other operators. Relevant details are extracted to generate the feature vectors, which are then given to the classifier for weed identification. The performance of the proposed edge-based feature extraction method was evaluated on the weed dataset and attained the best classification accuracy compared to state-of-the-art systems.

In future work, we aim to further improve the proposed methodology and apply it in real-world agricultural fields, maintaining the same level of accuracy and enabling better crop management for farmers.

Acknowledgement: The authors would like to extend their thanks to Jouf University for supporting this work.

Funding Statement: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number: 223202, Y. A.

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Y. Alhwaiti, M. H. Siddiqi; data collection: I. Ahmad; analysis and interpretation of results: M. H. Siddiqi, I. Ahmad; draft manuscript preparation: M. H. Siddiqi. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data used for this study and simulation will be provided on demand.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. N. P. Daggupati, “Assessment of the varitarget nozzle for variable rate application of liquid crop protection products,” Ph.D. Dissertation, Department of Biological & Agricultural Engineering, Kansas State University, Manhattan, KS 66506, United States, 2007.

2. J. Bossu, C. Gée, G. Jones and F. Truchetet, “Wavelet transform to discriminate between crop and weed in perspective agronomic images,” Computers and Electronics in Agriculture, vol. 65, no. 1, pp. 133–143, 2009.

3. Y. Chen, P. Lin, Y. He and Z. Xu, “Classification of broadleaf weed images using Gabor wavelets and Lie group structure of region covariance on Riemannian manifolds,” Biosystems Engineering, vol. 109, no. 3, pp. 220–227, 2011.

4. A. Subeesh, S. Bhole, K. Singh, N. S. Chandel, Y. A. Rajwade et al., “Deep convolutional neural network models for weed detection in polyhouse grown bell peppers,” Artificial Intelligence in Agriculture, vol. 6, pp. 47–54, 2022.

5. N. Razfar, J. True, R. Bassiouny, V. Venkatesh and R. Kashef, “Weed detection in soybean crops using custom lightweight deep learning models,” Journal of Agriculture and Food Research, vol. 8, pp. 100308, 2022.

6. G. C. Sunil, Y. Zhang, C. Koparan, M. R. Ahmed, K. Howatt et al., “Weed and crop species classification using computer vision and deep learning technologies in greenhouse conditions,” Journal of Agriculture and Food Research, vol. 9, pp. 100325, 2022.

7. A. M. Hasan, F. Sohel, D. Diepeveen, H. Laga and M. G. Jones, “Weed recognition using deep learning techniques on class-imbalanced imagery,” Crop and Pasture Science, vol. 74, no. 6, pp. 1–12, 2022.

8. T. Tao and X. Wei, “A hybrid CNN-SVM classifier for weed recognition in winter rape field,” Plant Methods, vol. 18, no. 1, pp. 29, 2022.

9. F. Arikan, Ş. Bora and A. Uğur, “Weeds detection using deep learning methods and dataset balancing,” International Journal of Multidisciplinary Studies and Innovative Technologies, vol. 6, no. 1, pp. 19–22, 2022.

10. J. Haichen, C. Qingrui and G. L. Zheng, “Weeds and crops classification using deep convolutional neural network,” in 3rd Int. Conf. on Control and Computer Vision, Macau, China, pp. 40–44, 2022.

11. A. Bouguettaya, H. Zarzour, A. Kechida and A. M. Taberkit, “Deep learning techniques to classify agricultural crops through UAV imagery: A review,” Neural Computing and Applications, vol. 34, no. 12, pp. 9511–9536, 2022.

12. M. Vaidhehi and C. Malathy, “A unique model for weed and paddy detection using regional convolutional neural networks,” Acta Agriculturae Scandinavica, Section B—Soil & Plant Science, vol. 72, no. 1, pp. 463–475, 2022.

13. S. K. Valicharla, “Weed recognition in agriculture: A mask R-CNN approach,” M.S. Thesis, Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, United States, 2021.

14. N. Islam, M. M. Rashid, S. Wibowo, C. Y. Xu, A. Morshed et al., “Early weed detection using image processing and machine learning techniques in an Australian chilli farm,” Agriculture, vol. 11, no. 5, pp. 387, 2021.

15. M. H. Saleem, J. Potgieter and K. M. Arif, “Weed detection by faster RCNN model: An enhanced anchor box approach,” Agronomy, vol. 12, no. 7, pp. 1580, 2022.

16. V. Abhilash, “Weed detection using convolutional neural network,” BOHR International Journal of Computer Science, vol. 1, no. 1, pp. 46–49, 2022.

17. A. M. Almalky and K. P. Ahmed, “Deep learning for detecting and classifying the growth stages of Consolida regalis weeds on fields,” Agronomy, vol. 13, no. 3, pp. 934, 2023.

18. M. H. Siddiqi, S. W. Lee and A. M. Khan, “Weed image classification using wavelet transform, stepwise linear discriminant analysis, and support vector machines for an automatic spray control system,” Journal of Information Science & Engineering, vol. 30, no. 4, pp. 1253–1270, 2014.

19. A. X. P. Burgos, A. Ribeiro, M. Guijarro and G. Pajares, “Real-time image processing for crop/weed discrimination in maize fields,” Computers and Electronics in Agriculture, vol. 75, no. 2, pp. 337–346, 2011.

20. A. A. Nurhanna and M. F. Othman, “Multi-class support vector machine application in the field of agriculture and poultry: A review,” Malaysian Journal of Mathematical Sciences, vol. 11, pp. 35–52, 2017.

21. B. Urmashev, Z. Buribayev, Z. Amirgaliyeva, A. Ataniyazova, M. Zhassuzak et al., “Development of a weed detection system using machine learning and neural network algorithms,” Eastern-European Journal of Enterprise Technologies, vol. 6, no. 2, pp. 114, 2021.

22. S. Dridi, “Supervised learning—A systematic literature review,” 2021. [Online]. Available: https://osf.io/tysr4

23. F. G. Márquez, G. Flores, D. A. M. Ravell, A. R. Pedraza and L. M. V. Coronado, “Weed classification from natural corn field-multi-plant images based on shallow and deep learning,” Sensors, vol. 22, no. 8, pp. 3021, 2022.

24. A. M. Mishra, S. Harnal, K. Mohiuddin, V. Gautam, O. A. Nasr et al., “A deep learning-based novel approach for weed growth estimation,” Intelligent Automation & Soft Computing, vol. 31, no. 2, pp. 1157–1172, 2022.

25. R. Gerhards, D. A. Sanchez, P. Hamouz, G. G. Peteinatos, S. Christensen et al., “Advances in site-specific weed management in agriculture—A review,” Weed Research, vol. 62, no. 2, pp. 123–133, 2022.

26. S. Arif, R. Kumar, S. Abbasi, K. Mohammadani and K. Dev, Weeds Detection and Classification Using Convolutional Long-Short-Term Memory. Durham, NC, USA: Research Square, 2021.

27. S. T. Namin, M. Esmaeilzadeh, M. Najafi, T. B. Brown and J. O. Borevitz, “Deep phenotyping: Deep learning for temporal phenotype/genotype classification,” Plant Methods, vol. 14, no. 1, pp. 1–14, 2018.

28. P. Wang, Y. Tang, F. Luo, L. Wang, C. Li et al., “Weed25: A deep learning dataset for weed identification,” Frontiers in Plant Science, vol. 13, pp. 1–14, 2022.

29. R. Punithavathi, A. D. C. Rani, K. R. Sughashini, C. Kurangi, M. Nirmala et al., “Computer vision and deep learning-enabled weed detection model for precision agriculture,” Computer Systems Science and Engineering, vol. 44, no. 3, pp. 2759–2774, 2023.

30. T. Luo, J. Zhao, Y. Gu, S. Zhang, X. Qiao et al., “Classification of weed seeds based on visual images and deep learning,” Information Processing in Agriculture, vol. 10, no. 1, pp. 40–51, 2023.

31. A. Dheeraj and S. Chand, “Using deep learning models for crop and weed classification at early stage,” in 2nd Int. Conf. on Sentiment Analysis and Deep Learning, Lalitpur, Nepal, pp. 931–942, 2022.

32. S. Tiwari, A. K. Sharma, A. Jain, D. Gupta, M. Gono et al., “IoT-enabled model for weed seedling classification: An application for smart agriculture,” AgriEngineering, vol. 5, no. 1, pp. 257–272, 2023.

33. S. M. Mustaza, M. F. Ibrahim, M. H. M. Zaman, N. Zulkarnain, N. Zainal et al., “Directional shape feature extraction using modified line filter technique for weed classification,” International Journal of Electrical and Electronics Research, vol. 10, no. 3, pp. 564–571, 2022.

34. Logistic Regression, scikit-learn documentation, 2023. [Online]. Available: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

