Preserving biodiversity and maintaining ecological balance are essential under current environmental conditions. Determining vegetation with traditional map classification approaches is challenging: vegetation patterns appear with complex spatial structures and similar spectral properties, so multiple spectral analyses are needed to improve the accuracy of vegetation mapping from remotely sensed images. The proposed framework ensembles three effective strategies to produce a robust architecture for vegetation mapping: a feature-based approach, a region-based approach, and a texture-based approach for classifying the vegetation area. The novel Deep Meta Fusion Model (DMFM) is built on a unique fusion framework of residually stacked convolution layers with unique covariate features (UCF), intensity features (IF), and colour features (CF). The GPU-utilization overhead typical of convolutional neural network (CNN) models is reduced by a lightweight architecture. The system examines detailed feature areas to improve classification accuracy and reduce processing time. The proposed DMFM achieved 99% accuracy with a maximum processing time of 130 s. Training, testing, and validation losses decrease to a significant degree, demonstrating the performance quality of the DMFM. Because all three feature types (UCF, IF, and CF) are considered thoroughly, the system can serve as a standard analysis platform for dynamic datasets.
Detection of invasive species is a complex task in vegetation mapping systems. Remote sensing techniques are helpful for mapping invasive vegetation. In coastal areas, dense cloud cover disturbs the derivation of phenological information, so scene-based methods are less applicable there [
On the other hand, vegetation mapping is also essential for soil moisture analysis using microwave remote sensing. The radar vegetation index (RVI) is helpful for estimating vegetation water content (VWC). Due to frequent variations in vegetation structure and surface roughness, detailed observations are required [
The research perspectives of this paper are as follows. To protect land surfaces, it is important to utilize resources wisely and to map vegetation areas; globally mapping the various vegetation spaces enables proper utilization of resources. Interpreting and mapping locations is essential to safeguard vegetation areas, water resources, surfaces, and crop cultivation areas from unauthorized use. With these constraints as the research motive, the goal of the proposed system is deep extraction of regions to be utilized for vegetation, using the eMODIS NDVI dataset from Earth Explorer. The proposed framework is modelled with three phases of feature study, using region-based, unique covariate-based, and intensity-based features, to map the regions globally. Various statistical measures are computed and compared with existing systems in terms of accuracy and Kappa score, and a confusion matrix is derived.
The rest of the paper is structured as follows. Section 2 presents a background study of existing research, followed by the system design framework in Section 3; the research builds on drawbacks identified in the background study, and the problem statement is defined in the system design section. The detailed design architecture and implementation methodology are discussed in Section 4. Results are discussed in Section 5, followed by the conclusion and future extensions.
Urban vegetation mapping using a deep neural network with explainable AI (XAI) can provide different insights into the training dataset. This method evaluates the vegetation area from the given dataset. The classification process uses aerial imagery based on spectral and textural features and achieves an accuracy of 94.4% for vegetation cover mapping. The study also reveals that spectral characteristics alone are insufficient for mapping vegetation coverage [
Due to significant changes in climate and environment, many locations on Earth are changing, and many resources, such as water and vegetation, are not visible for utilization. Remotely sensed images are available in massive collections and are helpful for mapping vegetation areas correctly and accurately so they can be used to the maximum. Resources such as water, vegetation, different soil types, and cultivation lands are left untreated because their pathways and natural features are difficult to locate. The proposed study considers how vegetation mapping from remotely sensed images can help make use of land and surface area for the benefit of everyone, safeguarding the environment and natural resources.
A traditional vegetation mapping system consists of a collection of remote sensing images, satellite image pre-processing, and noise removal techniques. The pre-processed image passes through all necessary image-enhancement steps and then a deep learning or machine learning classification model. These images are categorized with respect to vegetation points.
The proposed model considers USGS Earth Explorer data, which provides access to different global locations. It contains geospatial imagery of Earth collected from unmanned aerial vehicles (UAVs), IKONOS, and National Map data accessed via interactive maps, offering globally accessible data on geology, ecosystems, oceans, water resources, and multimedia. The proposed work considers east European forest images, with a few desert images collected for differentiation.
The input images for the vegetation mapping process are collected from USGS Earth Explorer. They are first pre-processed by scaling to a constant size, since the dataset images differ in scale. Colour masking and RGB segmentation are the first processes applied to the pre-processed images. A unique colour masking process is performed with respect to the threshold of each image channel. The image consists of red, green, and blue channels, each with a threshold ranging from 0 to 255. The vegetation colour of interest lies within a channel range that is explicitly adjusted with the image masking tool. A slider is created based on the minimum and maximum channel histogram threshold levels. A colour mask is projected from the channel minimum and maximum values and applied equally to the three channels. After channel masking, the images are converted into constant projections by rotating the colour space. A masked RGB image is created from the 2D colour space pixels after the polygon application of the slider value. These images are translated to the region of interest (ROI) location using an image transformation matrix. The masked RGB image has unique pixel intensities within the given space. Before the triple-learning feature extraction, the unique components of the masked RGB images are extracted, and the individual points are randomly distributed over the sample size; the sampling size used here is 100. The triple-learning process for multi-feature extraction uses linear regression, random forest regression, and decision tree algorithms. A novel intermediate feature convergence technique is utilized in which the final output of the linear regression is not required to grab the unique features of the input images; instead, the intermediate features of the linear regression, random forest, and decision tree algorithms are extracted and mapped.
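The per-channel min/max masking step described above can be sketched as follows. This is a minimal illustration only: the threshold values are hypothetical placeholders, not the values produced by the paper's slider tool, and the toy image stands in for a pre-processed satellite frame.

```python
import numpy as np

def colour_mask(rgb, lo, hi):
    """Build a binary mask keeping pixels whose R, G, B values all fall
    inside the per-channel [lo, hi] thresholds (0-255), then apply it."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    masked = np.where(mask[..., None], rgb, 0)  # zero out non-vegetation pixels
    return mask, masked

# Toy 2x2 image: two "vegetation-like" green pixels, two others.
img = np.array([[[ 30, 180,  40], [200,  50,  50]],
                [[ 10,  20, 200], [ 40, 160,  60]]], dtype=np.uint8)

# Hypothetical green-dominant thresholds for vegetation (illustrative only).
mask, masked = colour_mask(img, lo=(0, 120, 0), hi=(80, 255, 100))
print(mask)   # True where the pixel falls inside the vegetation colour range
```

Applying the same (lo, hi) pair to all three channels mirrors the paper's step of projecting one slider-derived mask onto the three channels equally.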
The proposed model is analyzed in two steps: the first extracts the unique features and performs the feature-based analysis; the second extracts the masked image for the region map using the colour masking technique and performs the region-based analysis. A detailed looped KNN that runs iteratively over the given features with a saddle point estimator is evaluated. The result of the looped iteration and the saddle point estimator provides the masked features corresponding to the input images. A novel Deep Sensenet technique is assessed to analyze the region- and feature-based results in parallel. The modules of the proposed approach are categorized into three phases of operation to take vegetation mapping to the next level compared with existing frameworks [
The proposed novel methodology is a fusion technique that incorporates intermediate feature extraction using a linear regression algorithm, a random forest regression algorithm, and a decision tree algorithm. Each machine learning algorithm has a unique way of representing the correlation score. The novelty of the proposed model lies in extracting the intermediate values that act as unique point features of the input test and train images.
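A hedged sketch of this intermediate-feature fusion idea: three simple learners emit per-sample intermediate values, which are stacked column-wise as fused features. The learners here are deliberate simplifications, not the paper's implementation: a least-squares line, a one-split stump standing in for a decision tree, and a bootstrap-averaged set of stumps standing in for a random forest. All data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_feature(x, y):
    """Fitted values of a least-squares line y ~ a*x + b."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

def stump(x, y):
    """Best single-threshold split: (threshold, left mean, right mean)."""
    ts = np.unique(x)[:-1]
    if len(ts) == 0:                       # degenerate sample: predict the mean
        return (np.inf, y.mean(), y.mean())
    best = (np.inf, None)
    for t in ts:
        pred = np.where(x <= t, y[x <= t].mean(), y[x > t].mean())
        err = ((y - pred) ** 2).sum()
        if err < best[0]:
            best = (err, (t, y[x <= t].mean(), y[x > t].mean()))
    return best[1]

def stump_predict(params, x):
    t, lo, hi = params
    return np.where(x <= t, lo, hi)

def tree_feature(x, y):
    return stump_predict(stump(x, y), x)

def forest_feature(x, y, n_trees=10):
    """Average of stumps fitted on bootstrap resamples (toy random forest)."""
    preds = [stump_predict(stump(x[i], y[i]), x)
             for i in (rng.integers(0, len(x), len(x)) for _ in range(n_trees))]
    return np.mean(preds, axis=0)

# Toy "pixel intensity" data: x = masked intensity, y = vegetation score.
x = np.array([10., 20., 30., 120., 140., 160.])
y = np.array([0.1, 0.2, 0.1, 0.9, 1.0, 0.8])

# Fused feature matrix: one column of intermediate outputs per learner.
fused = np.column_stack([linear_feature(x, y),
                         tree_feature(x, y),
                         forest_feature(x, y)])
print(fused.shape)   # (6, 3): three fused feature columns per sample
```

The point of the sketch is structural: the per-sample intermediate outputs, not the final predictions, become the feature columns passed downstream.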
The pre-processing module consists of the following steps. Read the input image dataset from the specified location; we created a dataset of images collected from the USGS eMODIS NDVI V6 dataset (Earth Explorer). Segment the required area through colour space mapping using the Colour Thresholder app available in the image processing toolbox; the app lets you threshold colour images by manipulating their colour components across different colour spaces and thereby create a segmentation mask for a colour image. Initialize the length of the iterative loop and the sample size to be considered for each frame of analysis (100 samples of unique pixel intensities). Display the input image, the binary masked image, and the colour space mapped image.
The unique pixel intensities from the masked image are extracted with an individual command. Linear regression, random forest, and decision tree algorithms are applied separately to analyze the pixel intensities in relation to their associated pixel intensities; for example, the vegetation area has a specific range of image intensities. The data are split within the samples into train and test intensities to transform the unique input pixel values into another form of intensity. These correlated intensity values indicate the exact vegetation area in the test image.
The feature extraction process grabs unique information, in the form of numerical data, from the given raw input, secures the original data, and allows the detection algorithm to compute specific statistics and relationships between data patterns. It provides better results when applied to the raw data. The goal of feature extraction with machine learning algorithms such as Random Forest (RF) and Decision Tree (DT) is discussed here [ The original image data from the dataset are transformed into pixel processing, and the originality of the image pixels is also secured, in contrast with existing feature extraction techniques such as principal component analysis (PCA) and linear discriminant analysis (LDA) [ The triple-learning process developed here considers various aspects of the intermediate features of the RF, LR, and DT models to provide the top cumulative feature points.
The suggested approach uses a robust feature fusion technique that combines three machine learning techniques. The intermediate features are immediate values extracted after the colour masking process. Once the RGB colour space masking is done, the triple-learning process maps the feature points together. In statistical analysis, linear regression treats the inputs as linear and derives the dependent variable from the given pattern; these values determine the relationship between the dependent variable and one or more independent variables. The generalized form of linear regression is defined as.
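The displayed equation is not reproduced here; the linear regression relationship described above, in its standard generalized form (a reconstruction, not the paper's own notation), is:

```latex
y \;=\; \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \varepsilon
```

where $y$ is the dependent variable, $x_1, \dots, x_p$ are the independent variables, $\beta_0, \dots, \beta_p$ are the regression coefficients, and $\varepsilon$ is the error term.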
The random forest algorithm formulates the accumulation of numerous decisions; the forest depends on each tree's independent decision. The higher the number of decision trees, the denser the associated forest. The random forest output is the summation of the trees' independent performances, as stated in
In the same way, random forest regression is denoted as the summation of the available independent decision-making trees over the total count of the trees.
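Since the displayed equation is not reproduced here, the summation-over-tree-count description above corresponds to the standard random-forest averaging form (a reconstruction, not the paper's own notation):

```latex
\hat{f}_{\mathrm{RF}}(x) \;=\; \frac{1}{B}\sum_{b=1}^{B} T_b(x)
```

where $T_b(x)$ is the prediction of the $b$-th independent tree and $B$ is the total number of trees.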
Similarly, the decision tree algorithm consists of numerous intermediate decision makers that divide the large dataset into smaller sets of decision values. These decisions are regression results of the given input pattern over smaller spans. The decision-making process is the same as that of
The common features considered here are colour features, intensity features, and unique attributes determined in the form of pixels.
A deep convolutional neural network acts as the base model for the novelty in Deep Sensenet. The convolution architecture comprises an image input layer of 100 × 100 × 1 and a 2D convolution layer of 1 × 10. The transfer function used in Deep Sensenet is ReLU. 2D max pooling is then evaluated, consolidating each kernel patch to its maximum value; max pooling downsamples the convolved input stream and minimizes the total scale of the convolved input samples.
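The convolution, ReLU, and max-pooling operations described above can be sketched numerically. The input and kernel sizes below are illustrative, not the paper's 100 × 100 × 1 configuration, and the single-channel loop implementation is chosen for clarity rather than speed.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in CNN conv layers)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """ReLU transfer function: pass positives, zero out negatives."""
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    """Non-overlapping 2x2 max pooling: keep the maximum of each patch."""
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]          # crop odd edges
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

img = np.arange(100, dtype=float).reshape(10, 10)  # stand-in grayscale patch
k = np.ones((3, 3)) / 9.0                          # illustrative 3x3 kernel
feat = maxpool2x2(relu(conv2d(img, k)))            # conv -> ReLU -> pool
print(feat.shape)   # (4, 4): 10x10 -> 8x8 after conv -> 4x4 after pooling
```

The shape arithmetic shows the downsampling the text describes: a valid convolution shrinks each side by (kernel size − 1), and 2×2 pooling halves each side.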
The convolutional neural network is trained with a stochastic gradient descent (SGD) optimizer. The classification process is performed in two modes of operation: feature-based analysis and region-based analysis. The SGD optimizer is used for the complete training of masked images in the region-based analysis and of the mapped features in the second mode.
The feature input values fetched from the triple-learning process are further optimized with stochastic gradient descent, an attractive method for optimizing input values against an objective function. The optimizer must adaptively tune the input values and their differences. The feature values do not exactly match the training values; to produce a matching score, they are differentiated by tuning them within a tolerance. Gradient descent optimizers also help draw the required random subsets of the data, which supports dimensionality reduction. The stochastic approximation method is based on the Robbins-Monro algorithm.
In stochastic gradient descent, the true gradient of the objective function is approximated at each iteration by the gradient computed from a single randomly selected sample (or a small batch).
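A minimal sketch of this single-sample update rule on a toy least-squares problem; the synthetic data, learning rate, and iteration count are my own illustrative choices, not the paper's training configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 3*x + small noise; we recover the slope w by SGD.
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.01 * rng.normal(size=200)

w, lr = 0.0, 0.1
for step in range(2000):
    i = rng.integers(len(x))              # one random sample per update
    grad = 2 * (w * x[i] - y[i]) * x[i]   # gradient of (w*x_i - y_i)^2 in w
    w -= lr * grad                        # stochastic gradient step

print(round(w, 1))   # ~ 3.0
```

Each step uses one sample's gradient as a noisy, unbiased stand-in for the full-batch gradient, which is exactly the Robbins-Monro approximation mentioned above.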
The k-nearest neighbour algorithm is used for non-parametric classification and regression. The given test sample is compared with the closest training samples, and the object related to the training sample is identified by differentiation. The k-nearest neighbour algorithm returns the k-th nearest value applicable to the training data over the complete set of iterations.
From the iterative analysis with Deep Sensenet in its two modes, feature-based and region-based analysis, the complete correlation process runs up to the maximum of the looped k-nearest neighbour (KNN). The saddle point is the threshold value that occurs most frequently across the complete iteration process; it helps reach the nearest match between the image features and the training features that map the vegetation area. The saddle point estimator formulates this threshold value over the n iterations of the looped KNN.
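One way to read the looped-KNN/saddle-point description: run a KNN match once per iteration, record the resulting threshold, and take the most frequently occurring value as the saddle point. A toy sketch under that reading; the function names, the k-th-distance notion of "threshold", and the data are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from collections import Counter

def knn_threshold(train, query, k=3):
    """Distance to the k-th nearest training intensity: one 'threshold' match."""
    d = np.sort(np.abs(train - query))
    return round(float(d[k - 1]), 2)

def saddle_point(train, queries, k=3):
    """Run the KNN match once per query (the loop) and return the most
    frequently occurring threshold, i.e. the saddle point of the iteration."""
    thresholds = [knn_threshold(train, q, k) for q in queries]
    return Counter(thresholds).most_common(1)[0][0]

train = np.array([10., 12., 14., 50., 52.])   # toy training intensities
queries = [11., 13., 11., 51.]                # toy looped test intensities
print(saddle_point(train, queries))
```

Taking the mode of the per-iteration thresholds makes the final match robust to a single outlying iteration, which matches the stated role of the saddle point estimator as the frequently occurring value.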
The unique covariate points are extracted from the LR, DT, and RF collaborative triple feature learning (TFL) process. The intermediate features are fetched to extract the unique covariate pixel features (UCF) (
These points are the normalized feature vectors created to test the proposed model. The data split is implemented using test data fetched into the Deep Sensenet for classification. Further, the looped KNN is an iterative loop of the nearest point calculator, required to correlate with the relative match score. The saddle point estimator is the final decision maker: it considers the decisions made by the different parametric results of the proposed model and estimates the final decision.
The geodesic active contour (GAC) is a deformable model that evolves a smooth Euclidean curve by moving the curve's points in the normal direction, each point moving at a rate proportional to the curvature within the image region. The geometric evolution of the gradient and the recognition of objects are used to delineate contours. Geometric flow encompasses internal and external geometric measures in the region of interest. In detecting objects in an image, this geometric substitute for snakes is used; such contour models depend heavily on level-set functions that determine the image's salient regions for segmentation. The gray-level co-occurrence matrix (GLCM) is one of the robust techniques for extracting the features present in the test images. The GLCM estimates the probability that pairs of pixel values occur in the image at a given separation: the matrix characterizes the likelihood of two pixels with values i and j occurring jointly at distance d and a given angular direction.
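The GLCM definition above (the joint probability of values i and j at offset d) can be computed directly. A minimal NumPy sketch on a toy 4-level image; the offset, level count, and image are illustrative choices.

```python
import numpy as np

def glcm(img, d=(0, 1), levels=4):
    """Gray-level co-occurrence matrix: P[i, j] estimates the probability of
    finding pixel value i with value j at offset d = (dy, dx)."""
    P = np.zeros((levels, levels))
    dy, dx = d
    H, W = img.shape
    for y in range(H - dy):
        for x in range(W - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count the pixel pair
    return P / P.sum()                                # normalise to probabilities

# Tiny 4-level test image; offset (0, 1) pairs each pixel with its right neighbour.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, d=(0, 1), levels=4)
print(round(float(P[0, 0]), 3))   # share of horizontal (0, 0) pairs
```

Texture descriptors such as contrast or homogeneity are then simple weighted sums over this P matrix, one per chosen distance and angle.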
RGB masking is done, and the equivalent feature component after the triple-learning process is shown in
S no | References | Dataset | Algorithms | Accuracy |
---|---|---|---|---|
1 | [ | AIRS | AI (XAI), GLCM, DNN | 94.40% |
2 | [ | Landsat-5 | Random forest classifier | 92.40% |
3 | [ | NDVI-AKEVT land cover | CNN | 83% |
4 | [ | Landsat-7 | UCVM-CNN | 97% |
5 | Proposed | USGS-explorer-eMODIS | DCNN, DMFM | 99% |
Reference | Application | Dataset | Kappa coefficient |
---|---|---|---|
Lee, et al., (2021) | Vegetation maps | NDVI index | 0.8 |
Shaik et al., (2021) | WildFire maps | PRISMA | 0.79 |
Weil et al., (2018) | Woody vegetation | UAV | 0.82 |
Haque et al., (2021) | Damaged vegetation | Sentinel-2 | 0.1 |
Proposed | Vegetation maps | NDVI eMODIS | 0.9 |
Algorithm | Accuracy | Precision | Recall | F1-score | Specificity | Sensitivity |
---|---|---|---|---|---|---|
Proposed UCF method | 0.98 | 0.9 | 0.91 | 0.9 | 0.9 | 0.9 |
Proposed CF method | 0.96 | 0.92 | 0.96 | 0.95 | 0.92 | 0.91 |
Proposed IF method | 0.97 | 0.91 | 0.92 | 0.94 | 0.91 | 0.92 |
Proposed DMFM method | 0.99 | 0.91 | 0.92 | 0.92 | 0.9 | 0.9 |
Iterations | Testing loss | Training loss | Validation loss | Computational time (s) |
---|---|---|---|---|
10 | 1.3863 | 1.2140 | 1.0254 | 0.52 |
50 | 1.3773 | 1.0124 | 1.0204 | 47.86 |
100 | 1.3683 | 1.0223 | 1.0154 | 79.42 |
150 | 1.3593 | 1.0250 | 1.0084 | 110.98 |
200 | 1.3503 | 1.0654 | 1.0014 | 158.32 |
250 | 1.3413 | 1.2230 | 0.9944 | 179.76 |
The novel implementation of vegetation mapping is evaluated here. Vegetation mapping is important for maintaining the ecosystem and assessing the healthiness of vegetation areas. Mapping vegetation patterns using remote sensing images is discussed. The presented model considers the Earth Explorer USGS eMODIS NDVI V6 dataset, which contains high-resolution images of Earth's vegetation areas. The novel Deep Meta Fusion Model (DMFM) is implemented to create a robust architecture that extracts features of the input images through feature- and region-based approaches. Based on the evaluated features, the Deep Sensenet architecture achieves a high accuracy of 99% on the static data from the USGS dataset. The system developed here is a lightweight architecture for mapping global data effectively. The proposed model attains 99% accuracy, 91% precision, 92% recall, 92% F1-score, and 90% specificity. The looped KNN presented here is dynamic: it adaptively adjusts the threshold weights and the loop length depending on the complexity of the input images. Validating patterns structurally similar to the training images and considering the UCF, IF, and CF parameters further help make robust decisions and add strength to the decision rule. Among the various measured parameters, the Kappa coefficient reaches around 0.9, near the maximum score of 1, which is a positive highlight of the proposed work. In future work, the proposed model will be extended by utilizing the categorized vegetation data with YOLO frameworks.
The authors would like to thank the Department of Computer Science and Engineering, Saveetha School of Engineering, SIMATS, for their support throughout the completion of this project.
The authors received no specific funding for this study.
The authors declare they have no conflicts of interest to report regarding the present study.