Open Access
ARTICLE
Wheat Leaf Rust Detection and Infected-Area Estimation Using Multi-Scale Fusion and Lab-Based Lesion Localization
Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
* Corresponding Author: Sajid Ullah Khan. Email:
(This article belongs to the Special Issue: Development and Application of Deep Learning and Image Processing)
Computers, Materials & Continua 2026, 88(1), 82 https://doi.org/10.32604/cmc.2026.079440
Received 21 January 2026; Accepted 09 April 2026; Issue published 08 May 2026
Abstract
Healthcare, education, technological advancement, and farming are among the key challenges facing developing countries, and agriculture unquestionably plays an important role in economic growth. Ensuring adequate food production is essential for citizens’ survival, and efforts in this area are expected to increase food productivity. A key approach to enhancing field productivity is meticulous care of its components, beginning with crop production. Wheat leaf rust poses a severe threat, particularly to young seedlings; it is a significant fungal disease that can cause a 25% reduction in wheat productivity. To overcome these issues, this work proposes a novel image fusion approach called the Multi-Scale Discrete Wavelet Transform (MS-DWT). The method uses distinct fusion strategies to extract meaningful details from the source images. A Lab Color Space (LCS) transformation, followed by a color thresholding method, is then employed for the detection and lesion localization of rust in the source images. Furthermore, the proposed model measures the rust-affected area of the wheat crop, providing farmers with vital information for post-medication (anti-rust spray) operations. The experimental findings demonstrate superior performance, achieving a classification accuracy of 98.85%, with a maximum testing accuracy of 99.17% on our generated dataset.
Agriculture is the most significant sector because of its economic impact on society, especially in developing countries [1]. Agriculture serves as a cornerstone of economic growth for every nation. However, the visual examination of crops by farmers and agriculture experts is a time-consuming, inefficient, and error-prone process, posing potential risks of future losses. In the contemporary agricultural landscape, plants frequently fall victim to viral diseases, significantly impacting the economic progression of agronomic nations. Leaf rust is favored by warm weather in countries such as Saudi Arabia, India, and Pakistan. Wheat leaf rust is a serious disease that mostly damages young wheat leaves and usually appears during the cool, wet months of February and March. Leaf rust infestation leads to a substantial 25% reduction in wheat production [2]. Traditional methods of plant disease detection relying on visual observation by the naked eye are no longer viable due to their dependence on expert knowledge and time-intensive nature. Leaf rust stands out as a major fungal disease of wheat, manifesting in small, reddish-orange pustules on affected wheat leaves. These pustules can either accumulate in one location or spread across the leaf surface, as illustrated in Fig. 1.

Figure 1: Rust-affected wheat leaf.
Advanced technologies, including Computer Vision (CV), the Internet of Things (IoT), and Image Processing (IP), significantly contribute to enhancing agricultural production, reducing waste, and increasing profits. The timely implementation of remedial actions and accurate disease diagnosis can profoundly impact overall production, ensuring the cultivation of high-quality wheat and maximizing profits for farmers. The agricultural sector’s transformation is increasingly reliant on the critical roles played by IP, CV, Deep Learning (DL), and IoT [3]. Unlike conventional feature-based techniques such as random forests, decision trees, and support vector machines, these modern approaches adopt a distinct perspective. The adoption of these advanced technologies represents a fundamental shift in agricultural practices.
Another driving force behind this research is the need to detect leaf rust in remote areas. Manual monitoring of entire fields is labor-intensive and time-consuming, especially given the high density of wheat crops in the field. Embracing cutting-edge technologies offers a more efficient and effective solution for disease detection and field monitoring.
Detecting wheat leaf rust using state-of-the-art IP, ML, and DL techniques faces significant challenges, with one major obstacle being the presence of background noise and blurriness in acquired images. The limited availability of comprehensive datasets for wheat leaf rust disease constitutes another crucial challenge. Existing datasets, as highlighted in [4,5], suffer from several limitations, including illumination variations, occlusion, distortion, and varying viewing angles. Therefore, collecting images from wheat crops under diverse field conditions is important for reliable automatic disease detection.
Another challenge lies in the segmentation or separation of leaves amidst the complex or busy background of the wheat crop [6–8]. Furthermore, the proper selection of IP and ML or DL algorithms for segmentation and classification adds another layer of complexity to the leaf rust detection process [9].
In this research work, we used drone technology to capture images of various parts of the wheat harvest. However, images captured using drones often have blurriness and background noise. To address these challenges, a novel image fusion method called Multi-Scale Discrete Wavelet Transform (MS-DWT) is proposed, relying on distinct fusion strategies to extract meaningful features from source images. The LCS method is then applied, coupled with a color thresholding approach, to identify and classify rust in wheat leaves. Furthermore, the proposed method measures the area affected by rust in leaves, providing farmers with vital information during the post-medication operations.
The research aims to boost the country’s sustainability by improving wheat resilience to leaf rust, a disease that cuts productivity. It introduces novel methods for early detection and management of the disease, aiding in efficient treatment application. This approach can improve crop health and increase yield. It also supports the economic sustainability of local agriculture by providing farmers with the knowledge and tools needed to protect their livelihoods and the environment.
This study’s findings are summarized below:
a) The dataset was generated from various fields in Pakistan, considering the drone’s illumination and orientation factors.
b) To design a Multi-Scale Discrete Wavelet Transform (MS-DWT) relying on distinct fusion strategies to extract meaningful details from source images. Our model incorporates a multi-scale mechanism to reduce information loss and blurriness, thereby aiding the post-processing process in classifying healthy and unhealthy wheat crops.
c) To design an LCS method, coupled with a color thresholding approach, for the detection and lesion localization of wheat leaf rust in the harvest.
d) An area-calculation function is developed to compute the crop rust-affected area, aiding farmers in their post-treatment procedures.
e) Both qualitative and quantitative evaluation metrics are used to validate the accuracy and applicability of the proposed model.
The remaining sections of the paper are organized as follows:
Section 2 discusses the principles of wheat rust diseases, as well as pertinent research on image processing and deep learning models for wheat diseases. Section 3 goes over the proposed method in detail. Section 4 presents the subjective/objective quality evaluation metrics, whereas Section 5 details the analysis and outcomes. Section 6 concludes the research work.
Agriculture plays a pivotal role in the economy of every nation, and wheat is crucial for global food security. It stands as the most extensively cultivated crop worldwide, covering an area of 217 million hectares annually [10]. Wheat diseases primarily result from fungi, non-chlorophyll-containing organisms that lack photosynthesis. These fungi spread through various means such as wind, water, insects, animals, and human activities. Some notable fungal diseases include:
i) Leaf rust disease, also known as brown rust, is a fungal infection that affects plants, particularly crops like wheat. It is characterized by the development of circular or elliptical lesions on the leaves. The disease primarily infects the upper surface of the leaves and can extend to the base of the leaf petiole. Leaf rust is favored by specific environmental conditions, including moderate temperatures around 20°C and the presence of moisture.
ii) Stem rust is a disease caused by various species of the rust fungus that primarily affects the stems and other aerial parts of the plant. It spreads quickly at temperatures between 25°C and 30°C with significant wetness at night.
iii) Stripe rust, or yellow rust, is a fungal infection caused by the pathogen Puccinia striiformis. It affects various cereal crops, particularly wheat, and is characterized by the appearance of yellow or orange stripes on the leaves, stems, and grains of infected plants.
iv) Leaf blotch is a fungal disease that affects plants, particularly cereals like wheat and barley. It is caused by various pathogens, including species of the Septoria genus.
Fig. 2 shows a graphic representation of the aforementioned diseases.

Figure 2: Symptoms of wheat leaf diseases.
Efficient and timely corrective actions, coupled with precise disease diagnosis, are pivotal in maximizing the quality of wheat production and ensuring optimal profits for farmers. Globally, researchers are focused on generating ideas that help farmers to make sound decisions and take the appropriate measures. Researchers have paid close attention to technical breakthroughs over the last two decades, particularly in Computer Vision (CV) and Deep Learning (DL). They have worked diligently to formulate efficient algorithms for diagnosing wheat diseases, contributing to the evolution of several aspects of smart farming [11]. Several state-of-the-art methods, such as IP, CV, and DL, have been employed on leaf images to detect and classify different crop diseases [9–12].
An Embedded Image Processing System for Grading Diagnosis, as outlined in [13], employs edge detection, intensity processing, and traditional filtering methods for noise reduction. This method demonstrates a high accuracy rate of up to 96%. Nevertheless, it faces challenges when dealing with images captured in motion or exhibiting blurriness. The idea proposed in [14] encompasses several steps: capturing images with a camera, saving a database of 1000 images, applying a traditional Median filter for de-noising and de-blurring. For image segmentation and classification, K-means clustering and SVM (Support Vector Machine) are utilized. In [15], the concept involves detecting leaf spots in sugar crops using the Jetson GPU Infrastructure. While it yields good results under daylight conditions, its performance falters in real-time scenarios when images are captured with a drone camera. In [16], the author suggests identifying five leaf diseases based on shape, texture, and color features. This method utilizes SVM, which may be inappropriate for extensive datasets and scenarios with considerable noise and image blurriness. In a similar manner, the study outlined in [17] concentrates on detecting diseases in pomegranate leaves using K-means clustering and multiclass SVM to detect and classify diseases. The K-means clustering method is used to cluster the diseased patches. Finally, classification is performed using the multiclass SVM approach, which yields an accuracy of 98.07%.
In [18], a fusion of Convolutional Neural Network (CNN) and K-Nearest Neighbor (KNN) is proposed for detecting tomato leaf diseases. It exhibits strong performance on high-quality enhanced images but struggles with disease detection in complex background images, and its training slows down on large datasets. The study presented in [19] introduces a machine learning framework that extracts features using algorithms such as the Gray Level Co-Occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Scale-Invariant Feature Transform (SIFT). Disease classification is performed with KNN, SVM, and Random Forest (RF) models, achieving an accuracy of 91.24%. However, the method has an extensive running time, and its performance requires further improvement. The research work presented in [20] focuses on a color space model: 172 particular points are first collected from the source images, and an SVM is trained on them. While this approach achieves a better accuracy of 94%, it has difficulty coping with complex images and suffers from spectral distortion. In [21], a framework based on region categorization was presented, achieving a maximum accuracy of 98.63%. The approach begins with a morphological method for de-noising the image, followed by the k-FLBPCM method to extract interest points. However, this method fails when processing complex backgrounds and blurred images. On the other hand, Ref. [22] introduces Directional Local Quinary Patterns (DLQPs) to obtain spatial features and then applies the SVM method for disease classification. This method has higher accuracy and can be used for rusted pixels in complex background images.
In [23], an idea was proposed for identifying and recognizing diseases in cereal crops. The most prominent Artificial Intelligence (AI) techniques were evaluated for different types of crop diseases, encompassing both simple and complex background images. Additionally, Ref. [24] introduced a framework that implemented prominent DL models such as ResNet50, InceptionV3, and VGG16/19. These methods were tested on a single dataset of 1500 simple and complex background images depicting three forms of leaf diseases. A simple mobile camera was used for gathering data, and VGG19 displayed greater accuracy than the other networks. Another study presented in [25] introduces a residual neural network designed for the analysis of 1200 stripe rust images, trained on 224 × 224 px patches from the database. This method achieved a classification accuracy of 78% but faced challenges in denoising and deblurring motion images. In [26], a fusion method for crop disease detection was proposed by combining the Plant Village dataset (54,308 images) and the Digipathos dataset (43,106 images), using GLCM and Gabor filters for feature extraction. This model achieved higher accuracy on simple images but fails on complex background images.
Table 1 illustrates the prominent research works compared with our proposed model.
To mitigate the problems of busy backgrounds and blurriness, this study presents an MS-DWT that employs distinct fusion strategies to obtain meaningful features from source images. The second phase involves the LCS method coupled with a color thresholding approach to identify and segment leaf rust in the crops. The proposed method also calculates the rust-affected region of the wheat crop, which supports the post-medication (anti-rust spray) process.
In this section, we thoroughly discuss our proposed model. For ease of understanding, it is divided into four steps:
a) Real-time Data Collection,
b) Data Pre-processing,
c) MS-DWT method with hybrid fusion rules,
d) LCS and Color Thresholding Methods.
Fig. 3 shows the graphical representation of our proposed work.

Figure 3: Graphical representation of proposed work (a) Performed pre-processing, including image registration, labeling, and splitting the dataset. (b) Training of four comparative models and our proposed model for evaluation. (c) Testing of our proposed model for prediction and affected area calculation.
This section describes the proposed approach for recognizing and classifying wheat diseases. Initially, images of wheat leaves affected by rust and those unaffected are gathered, forming the foundation of a new dataset.
Online plant datasets, such as the Plant Village dataset, featuring over 40,000 images of diverse plants, are also available for reference. Nevertheless, there is a shortage of images depicting wheat leaf rust disease. To address this gap, more than 6500 images of wheat crops, representing two distinct classes (healthy and rusted) of wheat leaves, were gathered from actual fields in Pakistan, specifically in the city of Bannu, at temperatures ranging from 28°C to 35°C. For each field scene, we captured two images from the same angle and viewpoint to reduce blur and noise.
The acquisition of these images is not feasible with a standard camera or smartphone; instead, an unmanned aerial vehicle (a drone with a camera) was used to capture images across different parts of the wheat crops. Manual monitoring of the entire field is time-consuming, and acquiring images of the leaves by hand is impractical. Fig. 3 illustrates samples of both healthy and unhealthy (leaf rust) conditions.
A total of 6500 wheat crop images were utilized for the validation and testing of the proposed algorithm: 3600 healthy images and 2900 rust-affected images. The dataset was split into a 70% training set and a 30% testing set. For the final evaluation, we use a hold-out test set (30%), and for performance estimates, we use stratified k-fold cross-validation on the remaining 70% of the data. As there are two images per scene, we perform scene-wise data splitting to prevent leakage, so that all images from the same scene always end up in the same split. Before initiating the fusion process, a crucial step was image registration, a pivotal factor influencing the performance of the proposed model. In this study, all images were registered at a uniform size of 64 × 64 pixels. During data augmentation, we used height/width shift ranges of 0.04 to 0.1 with flipping, a zoom range of 0.10, and a rotation range of 5 degrees to increase the speed of the proposed model. This is an important step in model training, since a smaller image size allows the model to train more quickly. Finally, we normalized each image by scaling pixel values from the 0–255 range to [0, 1].
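The scene-wise split described above can be sketched as follows; the file names and scene counts are hypothetical, while the 70/30 ratio follows the paper. The key point is that both registered images of a scene always land in the same partition:

```python
import random

def scene_wise_split(samples, train_frac=0.70, seed=42):
    """Split (scene_id, path) pairs so that both images of a scene
    always land in the same partition, preventing train/test leakage."""
    scenes = sorted({scene for scene, _ in samples})
    random.Random(seed).shuffle(scenes)
    n_train = int(len(scenes) * train_frac)
    train_scenes = set(scenes[:n_train])
    train = [s for s in samples if s[0] in train_scenes]
    test = [s for s in samples if s[0] not in train_scenes]
    return train, test

# Hypothetical file names: each field scene contributes two registered images.
samples = [(scene, f"img_{scene}_{k}.png") for scene in range(10) for k in (0, 1)]
train, test = scene_wise_split(samples)
```

Splitting by scene rather than by image is what prevents the two near-duplicate views of one scene from straddling the train/test boundary.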
To minimize distortion, background noise, and blurriness in complex background images, we first employed the MS-DWT method, which relies on multiple fusion rules to obtain meaningful details from the source images. An image decomposition step, followed by appropriate fusion rules, yields the high- and low-frequency sub-bands of the source images. To process these sub-bands effectively, it is crucial to choose appropriate decomposition scales and wavelet family. Too few scales may yield too few meaningful features, while too many scales may introduce blurriness. In this study, a two-level scale was used for decomposing the low-frequency components and a four-level scale for the high-frequency components, employing the “Haar” wavelet family. Our fusion model outperforms standard denoising and deblurring models because it leverages two source images to reduce noise and blurriness while preserving lesion edges and fine rust texture in the wavelet high-frequency bands; single-image denoising models cannot do this and may introduce artifacts that degrade downstream classification and localization.
Fig. 4 visually shows the block diagram of this method.

Figure 4: Block diagram of the proposed method.
The DWT of a given image G(x) is achieved by analyzing and synthesizing the image with the scaling and wavelet functions [31]. The low-frequency (scaling) sub-bands are expressed using Eq. (1):

$$\mathrm{low}(k) = \sum_{n} G(n)\, h(n - 2k) \tag{1}$$

where low(k) denotes the low-frequency sub-bands and h is the low-pass (scaling) filter. The high-frequency sub-bands are calculated using Eq. (2):

$$\mathrm{high}(k) = \sum_{n} G(n)\, g(n - 2k) \tag{2}$$

where high(k) denotes the high-frequency sub-bands and g is the high-pass (wavelet) filter. Next, the scaling and wavelet coefficients are decomposed recursively across scales using Eq. (3):

$$\mathrm{low}_{j-1}(k) = \sum_{m} h(m - 2k)\,\mathrm{low}_{j}(m), \qquad \mathrm{high}_{j-1}(k) = \sum_{m} g(m - 2k)\,\mathrm{low}_{j}(m) \tag{3}$$

where j denotes the decomposition scale, so each coarser level is obtained by filtering and downsampling the previous low-frequency approximation. Finally, the Inverse Discrete Wavelet Transform (IDWT) can be calculated using Eq. (6):

$$G(n) = \sum_{k} \big[\mathrm{low}(k)\, h(n - 2k) + \mathrm{high}(k)\, g(n - 2k)\big] \tag{6}$$

In addition, 2-D DWT analysis supports three detail directions: horizontal, vertical, and diagonal. Eq. (7) expresses these directions through the separable wavelets:

$$\psi^{H}(x, y) = \psi(x)\,\varphi(y), \qquad \psi^{V}(x, y) = \varphi(x)\,\psi(y), \qquad \psi^{D}(x, y) = \psi(x)\,\psi(y) \tag{7}$$
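To make the analysis/synthesis relations (Eqs. (1), (2), and (6)) concrete, the sketch below implements a one-level 2-D Haar DWT/IDWT in NumPy together with a simple hybrid fusion rule (average the low band, keep the larger-magnitude coefficient in each high band). This is an illustrative single-scale simplification, not the paper's full multi-scale pipeline:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH)) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2 (Eq. (6) analogue): columns first, then rows."""
    LH, HL, HH = bands
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img1, img2):
    """Hybrid rule sketch: average the low band, keep the larger-magnitude
    high-band coefficient (preserves edges and rust texture)."""
    LL1, hi1 = haar_dwt2(img1)
    LL2, hi2 = haar_dwt2(img2)
    LL = (LL1 + LL2) / 2.0
    hi = tuple(np.where(np.abs(b1) >= np.abs(b2), b1, b2)
               for b1, b2 in zip(hi1, hi2))
    return haar_idwt2(LL, hi)
```

Applying `fuse` to two identical inputs reconstructs the original exactly, which verifies that the forward/inverse pair is consistent.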
Fig. 4 depicts the MS-DWT method for image fusion. We utilized a four-scale approach for high-frequency sub-bands decomposition to capture significant information, such as edges, boundaries, and contours. We used the Consistency Verification and Principal Component Analysis (PCA) techniques as fusion rules. These steps/rules are used to collect additional information and features. PCA is used in the image fusion process to enhance subtle details in the source images. The sub-band weights are calculated using the following equations.
Assume that the input modality sub-bands are W1 and W2, stacked as observation vectors $X = [\mathrm{vec}(W_1),\ \mathrm{vec}(W_2)]^{T}$. The covariance matrix is measured using Eq. (9):

$$C = E\big[(X - \mu)(X - \mu)^{T}\big] \tag{9}$$

where E denotes the expectation operator and μ the mean vector. Subsequently, the eigenvectors (VCc) and eigenvalues (Edv) of C are computed using Eq. (12):

$$C\, VC_c = E_{dv}\, VC_c \tag{12}$$

To obtain the normalized weights, the principal eigenvector VCc (associated with the largest eigenvalue) is normalized using Eq. (13):

$$w_i = \frac{VC_c(i)}{VC_c(1) + VC_c(2)}, \qquad i = 1, 2 \tag{13}$$

Finally, the fused sub-bands are computed as a weighted combination using Eq. (14):

$$W_F = w_1 W_1 + w_2 W_2 \tag{14}$$
The Consistency Verification fusion rule is then applied to reduce inaccuracies in pixel intensity values. A 9 × 9 kernel generates a new mapping window for decision making, so that each fusion decision is revised to agree with the majority of its neighbors. After this process, the final fused image retains all meaningful features.
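A minimal NumPy sketch of the PCA weighting and consistency-verification steps is given below; the exact kernel handling and weight normalization in the paper may differ:

```python
import numpy as np

def pca_weights(W1, W2):
    """Eigenvector-based fusion weights: the principal eigenvector of the
    2x2 covariance of the flattened sub-bands gives each source's share."""
    X = np.stack([W1.ravel(), W2.ravel()])   # 2 x N observation matrix
    C = np.cov(X)                            # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(C)           # eigen-decomposition
    v = np.abs(vecs[:, np.argmax(vals)])     # principal eigenvector
    w = v / v.sum()                          # normalized weights
    return w[0], w[1]

def pca_fuse(W1, W2):
    """Fused sub-band as a weighted combination of the two sources."""
    w1, w2 = pca_weights(W1, W2)
    return w1 * W1 + w2 * W2

def consistency_verify(decision, k=9):
    """Majority-vote a binary fusion-decision map over a k x k window,
    flipping isolated decisions to agree with their neighborhood."""
    pad = k // 2
    p = np.pad(decision.astype(int), pad, mode="edge")
    out = np.zeros_like(decision, dtype=int)
    H, W = decision.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = 1 if p[i:i + k, j:j + k].sum() > (k * k) // 2 else 0
    return out
```

For two identical sub-bands the eigenvector analysis yields equal weights of 0.5 each, so the fused band reproduces the input, which is a quick sanity check on the weighting rule.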
The second phase involves an LCS approach, coupled with a color thresholding method, to identify and segment healthy and unhealthy images. Image conversion, a subset of pre-processing, entails converting images into simpler color spaces, such as Lab space or black and white, in order to speed up computation. However, conversion is not always required; it should be avoided if it could alter spatial or spectral information. The LCS can be described mathematically along three axes: L represents lightness, “a*” denotes color information along the green-red axis, and “b*” represents color information along the blue-yellow axis.
In this study, RGB values are initially transformed into an LCS model to improve computing efficiency and enhance pixel identification accuracy. The Lab color model has various advantages, including device independence and a wide color spectrum. It can also reduce the unequal color distribution found in the RGB color model, which includes many transition colors from blue to green. The image is then translated to the Lab space and segmented accordingly. The advantage of the LCS image is that it reserves one channel just for image luminance and two additional channels for color information. The LCS is considered more accurate than the RGB color space since it allows for operations that are not practical with RGB images. Furthermore, it facilitates color reduction, making future processing easier. Fig. 5 shows the LCS converted image.

Figure 5: Resulting image obtained after LCS transformation.
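The RGB-to-Lab conversion can be sketched directly in NumPy as a standalone port of the usual sRGB → XYZ (D65) → Lab pipeline; in practice a library routine such as OpenCV's `cvtColor` would normally be used instead:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIE L*a*b*."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse sRGB gamma (linearization)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    xyz /= np.array([0.95047, 1.0, 1.08883])   # D65 white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0               # lightness
    a = 500.0 * (f[..., 0] - f[..., 1])        # green-red axis
    b = 200.0 * (f[..., 1] - f[..., 2])        # blue-yellow axis
    return np.stack([L, a, b], axis=-1)
```

Since rust pustules are reddish-orange, they map to clearly positive a* and b* values, which is what makes the subsequent channel thresholding effective.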
Finally, the rust-affected pixels are classified using a color image thresholding method. This procedure uses a variety of image features, including color information, shape, boundaries, and segments. The removal of the image background is determined by whether the pixel value exceeds or falls below the threshold value. This image separation is specifically designed to isolate the healthy region of the wheat leaf, effectively controlling the disease.
We used a color thresholding approach that assigns a pixel value of 0 (black) to the unhealthy, rust-affected area of the leaf, while the healthy part retains its natural color. This method is useful for processing large amounts of data and achieves higher accuracy in less time.
The thresholding values for the yellow (rust) color, based on the minimum/maximum channel values, can be calculated using Eqs. (15)–(17); these initial values are then modified to accommodate a 10% variance [32,33]:

$$T_{red}^{min} \le red(m, n) \le T_{red}^{max} \tag{15}$$
$$T_{green}^{min} \le green(m, n) \le T_{green}^{max} \tag{16}$$
$$T_{blue}^{min} \le blue(m, n) \le T_{blue}^{max} \tag{17}$$

where gray(m, n) denotes the gray-level pixel intensity, and red(m, n), green(m, n), and blue(m, n) denote the intensity values of the red, green, and blue channels.
We kept a minimum threshold of 0.057 and a maximum of 99.612 for channel 1, 0 and 14.603 for channel 2, and 0 and 43.414 for channel 3. The background was then segmented and the image converted into a binary map, where the rusted area of the leaf is assigned a value of 1 and the background is represented by 0. The rust-affected area is then obtained by summing all pixels with a value of 1, which represent rust-affected pixels; the affected area of the leaf in pixels is computed with a simple area-calculation function. This step is performed after extracting color features from the leaf. We report 95% confidence intervals (CIs) for accuracy, and we conducted McNemar’s test (α = 0.05) on the same test samples for model comparison.
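The channel thresholding and pixel-area computation can be sketched as follows; the threshold values are those reported above, applied per Lab channel, and the sample pixels in the test are hypothetical:

```python
import numpy as np

# Channel thresholds reported in the paper (min, max) per Lab channel.
T = {"L": (0.057, 99.612), "a": (0.0, 14.603), "b": (0.0, 43.414)}

def rust_mask(lab):
    """Binary lesion map: 1 where all three channels fall inside their
    thresholds (rust-affected), 0 elsewhere (background)."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    m = ((T["L"][0] <= L) & (L <= T["L"][1]) &
         (T["a"][0] <= a) & (a <= T["a"][1]) &
         (T["b"][0] <= b) & (b <= T["b"][1]))
    return m.astype(np.uint8)

def affected_area(mask, leaf_mask=None):
    """Affected area in pixels; optionally also as a fraction of the leaf."""
    area = int(mask.sum())
    if leaf_mask is not None:
        return area, area / max(int(leaf_mask.sum()), 1)
    return area
```

Summing the binary map is exactly the simple area-calculation step described above; dividing by the leaf-pixel count gives the affected fraction that guides anti-rust spraying.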
In this section, we will thoroughly examine the efficiency of our proposed method.
The experiments were carried out on a ROG Strix GeForce RTX™ 4080 GPU with 24 GB of RAM. We used both subjective and objective methods to compare the performance of our proposed model with state-of-the-art (SOTA) methodologies. The evaluation metrics are divided into two parts: Sections 4.1 to 4.3 present the metrics used to assess the MS-DWT method, while Section 4.4 lists the metrics for overall model accuracy, precision, recall, and F1 score.
4.1 Structure Similarity Indexed Measures (SSIM)
It computes the similarity between the input and the fused image; higher SSIM values indicate better fusion results. It is calculated as

$$SSIM(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

where $\mu_x, \mu_y$ are the means, $\sigma_x^2, \sigma_y^2$ the variances, and $\sigma_{xy}$ the covariance of the input image x and fused image y, and $C_1, C_2$ are small stabilizing constants.
4.2 Peak Signal to Noise Ratio (PSNR)
It measures the reconstruction quality between the input and fused images; higher PSNR values imply better fusion outcomes. It is calculated as

$$PSNR = 10 \log_{10}\!\left(\frac{MAX^2}{MSE}\right), \qquad MSE = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\big(I(m, n) - F(m, n)\big)^2$$

where MAX is the maximum possible pixel value, I the input image, and F the fused image.
4.3 Sum of Correlation Difference (SCD)
It quantifies how much complementary information is transferred from the input images to the fused image; its value should be high. It is calculated as

$$SCD = r(F - I_2,\ I_1) + r(F - I_1,\ I_2)$$

where $I_1, I_2$ are the source images, F is the fused image, and r denotes the correlation coefficient.
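Minimal NumPy implementations of the three fusion metrics might look as follows; note that the SSIM here uses a global-statistics simplification rather than the usual windowed version:

```python
import numpy as np

def psnr(ref, fused, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and fused image."""
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def corr(x, y):
    """Pearson correlation coefficient between two arrays."""
    x, y = x - x.mean(), y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

def scd(img1, img2, fused):
    """Sum of Correlation Differences: how much each source contributed."""
    return corr(fused - img2, img1) + corr(fused - img1, img2)

def ssim_global(x, y, peak=255.0):
    """Global-statistics SSIM (single window covering the whole image)."""
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cov + C2)) /
            ((mx * mx + my * my + C1) * (x.var() + y.var() + C2)))
```

As a sanity check, a fused image built as the sum of two sources gives SCD = 2 (each residual correlates perfectly with the other source), and SSIM of an image with itself is 1.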
4.4 Evaluation Metrics for Model Evaluation
The following metrics, computed from the confusion matrix, are used for overall model assessment:

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}, \qquad Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}, \qquad F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$$

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
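These confusion-matrix metrics can be computed with a small helper; the counts in the test are hypothetical:

```python
def classification_report(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

With balanced errors (equal false positives and false negatives), all four metrics coincide, which mirrors the identical precision/recall/F1 values reported later for the proposed model.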
Our proposed MS-DWT approach outperformed the SOTA methods [28–30]. As shown in Fig. 6, it produced superior qualitative results, highlighting the effectiveness of our proposed model. While a single image acquired with camera technology can be fed directly to post-processing and deep learning algorithms to segment and classify normal and rust-affected images, this approach presents challenges when analyzing rust-affected areas across an entire crop field. To overcome this, this study used drone technology to collect two or three images of the same scene, which were then combined using the proposed MS-DWT approach to extract complementary information from all images.

Figure 6: MS-DWT fusion (a) first image, (b) second image of the same scene, (c) MS-DWT resulting image, (d) first image (complex background), (e) second image (complex background) of the same scene, (f) MS-DWT resulting image.
Our proposed MS-DWT method is also evaluated using quantitative metrics, and the results are compared with SOTA methods. Table 2 illustrates the comparative outcomes of the SOTA and proposed methods. Our proposed method produced superior results to the other SOTA methods, particularly for images with complex backgrounds. It was also observed that most existing studies perform well on single images (best values are shown in bold) but are unable to extract features from complex background images containing multiple leaves. Additionally, this study evaluates the running time and rusted-pixel detection of the proposed and existing methods. Table 3 illustrates the miss-detection rates and running times of the SOTA and proposed methods.


Next, Fig. 7 depicts the graphical outcomes of the LCS technique. Likewise, Figs. 8 and 9 show the lesion localization and classification outcomes of both the state-of-the-art (SOTA) and proposed methods for single images (camera-captured) and for crop fields with complex backgrounds captured using drone technology. Throughout the testing, our proposed method consistently outperformed the SOTA methods [29,30], particularly on crop-field images with busy backgrounds. The SOTA methods perform satisfactorily on single images captured with smartphones or digital cameras; however, their performance degrades dramatically on complex background images.

Figure 7: LCS outcomes (a) original healthy image, (b) result of (a) having no white color, (c) rust-affected original image (signal), (d) result of (c) rusted-affected pixels with white color, (e) original complex background crop image, (f) result of (e) rusted-affected pixels with white color, (g) original complex background crop image, (h) result of (g) rusted-affected pixels with white color.

Figure 8: Color thresholding results (single image). (a) original image (b) result of [29], (c) results of [30], (d) proposed method, (e) original image, (f) result of [29], (g) result of [30], (h) proposed method.

Figure 9: Color thresholding results (complex background condition). (a) original image, (b) result of [29], (c) results of [30], (d) proposed method, (e) original image, (f) result of [29], (g) result of [30], (h) proposed method.
Similarly, we calculated the overall accuracy of the trained models by dividing the number of correctly predicted instances by the total number of testing samples from the specified dataset. As previously stated, 30% of the overall dataset was allocated for testing. The computed accuracies of all trained models are shown in Fig. 10. Notably, our proposed model had the greatest accuracy at 99.17%, followed by RFC (98.80%), whereas SVM and NB produced lower accuracies of 89.6% and 97.54%, respectively.

Figure 10: Accuracy of SOTA and proposed models.
To evaluate our experimental findings, we computed several assessment metrics from the confusion matrix: precision, recall, and F1 score. These metrics are depicted in Fig. 11, where our proposed model has the highest precision, recall, and F1-score, all at 99.17%. These findings show that our proposed model correctly classifies nearly all tested samples, followed by RFC with 99.8% across all metrics. This implies that less than 1% of the samples in each class are misclassified. In contrast, SVM shows lower values of 89.80% for precision, recall, and F1-score. Notably, our proposed model correctly detects yellow (rust-affected) images due to the prominent rust pixels in the input images.

Figure 11: Comparative assessment of Precision, Recall, and F1-Score.
Table 4 compares the results of our proposed model with the SOTA methods. In [28], ML models such as SVM and PNN are used to classify disease using the affected-area-divided-by-total-area formula, achieving 99.99% accuracy. Similarly, in [29], the authors used LBP, color histogram, and HOG features with a fine-tuned RFC model, achieving 99.08% accuracy. Further, an E-MMC model was presented in [30] using texture and color features, achieving 99.07% accuracy. Overall, our proposed model achieved a higher accuracy of 99.17% compared to the SOTA methods of [29,30].

Finally, Fig. 12 shows the results of the normal and affected wheat leaf images’ area calculation.

Figure 12: Estimated lesion area of rust infection on the wheat leaf.
A limitation of this study is the absence of expert-annotated pixel-level masks for rust regions in the dataset. The proposed Lab-thresholding method provides qualitative lesion localization and helps estimate the overall extent of the infected area.
In real UAV/field images, wheat rust is difficult to analyze because the images often suffer from motion blur, changing lighting conditions, and cluttered backgrounds [33]. Recent works have addressed this with deep learning models, particularly when precise lesion segmentation is required. In this research, we adopted a more practical approach that fuses two registered images of the same scene to minimize frame-specific noise and blur while preserving lesion edges and fine rust texture in the wavelet high-frequency bands, which are essential for detection. Moreover, the multiscale features (empirically determined), together with the consistency-verification fusion rule, reduce inaccuracies in pixel intensity values. Finally, we apply Lab color analysis to highlight suspected infected areas in an easily interpretable manner, which helps estimate their extent. In this work, we present a lightweight, explainable pipeline that performs well for image-level healthy vs. rust-affected classification and provides useful visual localization. At the same time, we acknowledge important limitations, including the need to extend the private dataset, refine its pre-processing, and benchmark against cutting-edge DL models for stronger results.
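The fusion step can be sketched as follows, using a single-level Haar transform in place of the paper's multi-scale decomposition: the approximation bands are averaged, the detail (high-frequency) bands are fused by absolute-maximum selection, and a 3x3 majority vote implements a simple consistency-verification rule. This is an illustrative reduction of the method under those assumptions, not the exact implementation:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform: LL, LH, HL, HH bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def consistency(decision):
    """3x3 majority vote over the binary source-selection map."""
    p = np.pad(decision.astype(int), 1, mode='edge')
    h, w = decision.shape
    votes = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return votes >= 5

def fuse(img1, img2):
    """Fuse two registered grayscale images (even dimensions assumed)."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(c1[0] + c2[0]) / 2]              # approximation: average
    for b1, b2 in zip(c1[1:], c2[1:]):         # details: abs-max + consistency
        pick1 = consistency(np.abs(b1) >= np.abs(b2))
        fused.append(np.where(pick1, b1, b2))
    return haar_idwt2(*fused)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the transform pair reconstructs perfectly.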
This research aims to enhance agricultural sustainability by improving wheat resilience to leaf rust, a disease that significantly reduces productivity. It introduces novel methods for early detection and management of the disease, facilitating efficient treatment application. These advancements will enhance crop health and yield, supporting the economic sustainability of local agriculture by equipping farmers with essential knowledge and tools to protect their livelihoods and the environment.

In summary, this research develops an innovative approach for wheat disease detection using transformation and machine learning techniques. Multiple images of wheat crops, both healthy and rust-affected, were captured using drone technology. First, the Multi-Scale Discrete Wavelet Transform (MS-DWT), a novel image fusion method, was applied with hybrid fusion rules to extract meaningful information from the images. In the second phase, Lab Color Space and color thresholding techniques were employed to detect and segment wheat leaf rust. The model also calculates the area affected by rust, providing farmers with critical information for post-medication (anti-rust spray) operations.
In future work, we plan to expand our dataset to include more images of wheat crops affected by other diseases. We also aim to implement our model on a server-based system and deploy it on smart mobile devices, assisting farmers in the early stages of disease detection and management. We will also compare our model with recent state-of-the-art models, such as Transformer-based and GAN models. Additionally, we will develop expert-annotated pixel-level segmentation masks and assess localization and segmentation performance using standard metrics such as Dice, IoU, and pixel precision. Finally, we will quantify the agronomic and economic benefits through field trials and cost-benefit analysis.
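The segmentation metrics mentioned for future evaluation could be computed as follows once pixel-level masks are available; the masks below are toy examples:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU between a predicted binary mask and an
    expert-annotated ground-truth mask."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return float(dice), float(iou)

# Toy 2x3 masks: 2 overlapping pixels, 3 predicted, 3 ground-truth, union 4.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
d, i = dice_iou(pred, gt)   # Dice = 4/6, IoU = 2/4
```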
Acknowledgement: Thanks to Prince Sattam bin Abdulaziz University for supporting this work.
Funding Statement: The author extends appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2025 03/34113).
Availability of Data and Materials: The datasets analyzed during the current study are not publicly available. However, the corresponding author may provide datasets upon reasonable request.
Ethics Approval: Not applicable.
Conflicts of Interest: The author declares no conflicts of interest.
References
1. Aker JC. Dial “A” for agriculture: a review of information and communication technologies for agricultural extension in developing countries. Agric Econ. 2011;42(6):631–47. [Google Scholar]
2. Leaf rust in wheat, barley, and oats [cited 2022 Nov 1]. Available from: https://www.gov.mb.ca/agriculture/crops/plant-diseases/leaf-rust-wheat-barley-oats.html. [Google Scholar]
3. Leonello T. From precision agriculture to Industry 4.0. Br Food J. 2019;121(8):1730–43. doi:10.1108/bfj-11-2018-0747. [Google Scholar] [CrossRef]
4. Hughes D, Marcel S. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv:1511.08060. 2015. [Google Scholar]
5. Barbedo JG, Koenigkan LV, Halfeld-Vieira BA, Costa RV, Nechet KL, Godoy CV, et al. Annotated plant pathology databases for image-based detection and recognition of diseases. IEEE Lat Am Trans. 2018;16(6):1749–57. doi:10.1109/tla.2018.8444395. [Google Scholar] [CrossRef]
6. Barbedo JG. Factors influencing the use of deep learning for plant disease recognition. Biosyst Eng. 2018;172:84–91. doi:10.1016/j.biosystemseng.2018.05.013. [Google Scholar] [CrossRef]
7. Johannes A, Picon A, Alvarez-Gila A, Echazarra J, Rodriguez-Vaamonde S, Navajas AD, et al. Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case. Comput Electron Agric. 2017;38:200–9. doi:10.1016/j.compag.2017.04.013. [Google Scholar] [CrossRef]
8. Barbedo JGA. Plant disease identification from individual lesions and spots using deep learning. Biosyst Eng. 2019;180(1):96–107. doi:10.1016/j.biosystemseng.2019.02.002. [Google Scholar] [CrossRef]
9. Ngugi LC, Abelwahab M, Abo-Zahhad M. Recent advances in image processing techniques for automated leaf pest and disease recognition—a review. Inf Process Agric. 2021;8(1):27–51. doi:10.1016/j.inpa.2020.04.004. [Google Scholar] [CrossRef]
10. Erenstein O, Jaleta M, Mottaleb KA, Sonder K, Donovan J, Braun HJ. Global trends in wheat production, consumption and trade. In: Wheat improvement. Cham, Switzerland: Springer; 2022. p. 47–66. doi:10.1007/978-3-030-90673-3_4. [Google Scholar] [CrossRef]
11. Paul A, Ghosh S, Das AK, Goswami S, Das Choudhury S, Sen S. A review on agricultural advancement based on computer vision and machine learning. In: Mandal JK, Bhattacharya D, editors. Emerging technology in modelling and graphics. Singapore: Springer; 2020. p. 567–81. doi:10.1007/978-981-13-7403-6_50. [Google Scholar] [CrossRef]
12. Liakos KG, Busato P, Moshou D, Pearson S, Bochtis D. Machine learning in agriculture: a review. Sensors. 2018;18(8):26–74. doi:10.3390/s18082674. [Google Scholar] [PubMed] [CrossRef]
13. Xu P, Wu G, Guo Y, Yang H, Zhang R. Automatic wheat leaf rust detection and grading diagnosis via embedded image processing system. Procedia Comput Sci. 2017;107:836–41. doi:10.1016/j.procs.2017.03.177. [Google Scholar] [CrossRef]
14. Tambake TR, Patil PB. Wheat disease detection using image processing. In: Proceedings of the International Conference on Sustainable Growth through Universal Practices in Science, Technology and Management (ICSGUPSTM-2018); 2018 Jun 8–10; Goa, India. p. 121–4. [Google Scholar]
15. Punn M, Bhalla N. Classification of wheat grains using machine algorithms. Int J Sci Res. 2013;2(8):363–6. doi:10.32628/ijsrset2411451. [Google Scholar] [CrossRef]
16. Suresh V, Krishnan M, Hemavarthini M, Jayanthan K, Gopinath D. Plant disease detection using image processing. Int J Eng Res Technol. 2020;9(3):78–82. doi:10.17577/IJERTV9IS030114. [Google Scholar] [CrossRef]
17. Mangena VM, Thanh DN, Khamparia A, Pande S, Malik R, Gupta D. Recognition and classification of pomegranate leaves diseases by image processing and machine learning techniques. Comput Mater Contin. 2021;66(3):2939–55. [Google Scholar]
18. Vijay N. Detection of plant diseases in tomato leaves: with focus on providing explainability and evaluating user trust [master’s thesis]. Skövde, Sweden: University of Skövde; 2021. [Google Scholar]
19. Kaur N. Plant leaf disease detection using ensemble classification and feature extraction. Turk J Comput Math Educ. 2021;12(11):2339–52. doi:10.1109/icrito51393.2021.9596456. [Google Scholar] [CrossRef]
20. Shrivastava VK, Pradhan MK. Rice plant disease classification using color features: a machine learning paradigm. J Plant Pathol. 2021;103(1):17–26. [Google Scholar]
21. Le VN, Ahderom S, Apopei B, Alameh K. A novel method for detecting morphologically similar crops and weeds based on the combination of contour masks and filtered local binary pattern operators. GigaScience. 2020;9(3):giaa017. doi:10.1093/gigascience/giaa017. [Google Scholar] [PubMed] [CrossRef]
22. Ahmad W, Shah SM, Irtaza A. Plants disease phenotyping using quinary patterns as texture descriptor. KSII Trans Internet Inf Syst. 2020;14(8):3312–27. doi:10.3837/tiis.2020.08.009. [Google Scholar] [CrossRef]
23. Waldamichael FG, Debelee TG, Schwenker F, Ayano YM, Kebede SR. Machine learning in cereal crops disease detection: a review. Algorithms. 2022;15(3):75. doi:10.3390/a15030075. [Google Scholar] [CrossRef]
24. Aboneh T, Rorissa A, Srinivasagan R, Gemechu A. Computer vision framework for wheat disease identification and classification using Jetson GPU infrastructure. Technologies. 2021;9(3):47. doi:10.3390/technologies9030047. [Google Scholar] [CrossRef]
25. Schirrmann M, Landwehr N, Giebel A, Garz A, Dammer KH. Early detection of stripe rust in winter wheat using deep residual neural networks. Front Plant Sci. 2021;12:1–14. doi:10.3389/fpls.2021.469689. [Google Scholar] [PubMed] [CrossRef]
26. Bhagwat R, Dandawate Y. A framework for crop disease detection using feature fusion method. Int J Eng Technol Innov. 2021;11(3):216–28. doi:10.46604/ijeti.2021.7346. [Google Scholar] [CrossRef]
27. Aurangzeb K, Akmal F, Khan MA, Sharif M, Javed MY. Advanced machine learning algorithm based system for crops leaf diseases recognition. In: Proceedings of the IEEE 6th Conference on Data Science and Machine Learning Applications (CDMA); 2020 Mar 4–5; Riyadh, Saudi Arabia. p. 146–51. [Google Scholar]
28. Zhao J, Fang Y, Chu G, Yan H, Hu L, Huang L. Identification of leaf-scale wheat powdery mildew (Blumeria graminis f. sp. Tritici) combining hyperspectral imaging and an SVM classifier. Plants. 2020;9(8):936. doi:10.3390/plants9080936. [Google Scholar] [PubMed] [CrossRef]
29. Khan H, Haq IU, Munsif MM, Khan SU, Lee MY. Automated wheat diseases classification framework using advanced machine learning technique. Agriculture. 2022;12(8):1226. doi:10.3390/agriculture12081226. [Google Scholar] [CrossRef]
30. Sood S, Singh H, Jindal S, Ribeiro-Barros AI, Tevera DS, Goulao LF, et al. Rust disease classification using deep learning based algorithm. In: Food systems resilience. Norderstedt, Germany: Books on Demand; 2022. 197 p. [Google Scholar]
31. Pajares G, De La Cruz JM. A wavelet-based image fusion tutorial. Pattern Recognit. 2004;37(9):1855–72. doi:10.1016/j.patcog.2004.03.010. [Google Scholar] [CrossRef]
32. Kaushick P. Thresholding technique for color image segmentation [bachelor’s thesis]. Vellore, India: VIT University; 2018. [Google Scholar]
33. Khan SU, Alsuhaibani A, Alabduljabbar A, Almarshad F, Altherwy YN, Akram T. A review on automated plant disease detection: motivation, limitations, challenges, and recent advancements for future research. J King Saud Univ Comput Inf Sci. 2025;37(3):34. [Google Scholar]
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

