Computers, Materials & Continua
DOI:10.32604/cmc.2022.022904
Article

Detection of Behavioral Patterns Employing a Hybrid Approach of Computational Techniques

Rohit Raja1, Chetan Swarup2, Abhishek Kumar3,*, Kamred Udham Singh4, Teekam Singh5, Dinesh Gupta6, Neeraj Varshney7 and Swati Jain8

1Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, 495009, India
2Department of Basic Science, College of Science & Theoretical Studies, Saudi Electronic University, 13316, Saudi Arabia
3Department of Computer Science & IT, JAIN (Deemed to be University), Bangalore, 560069, India
4Computer Science and Information Science, Cheng Kung University, 621301, Taiwan
5School of Computer Science, University of Petroleum and Energy Studies, Dehradun, 248007, India
6Department of CSE, I K Gujral Punjab Technical University, Jalandhar, 144603, India
7Department of Computer Engineering and Applications, GLA University, Mathura, 281406, India
8Department of Computer Science, Government J Yoganandam Chhattisgarh College, Raipur, 492001, India
*Corresponding Author: Abhishek Kumar. Email: abhishek.maacindia@gmail.com
Received: 22 August 2021; Accepted: 17 January 2022

Abstract: As far as the present state of detecting the behavioral pattern of humans (subjects) using morphological image processing is concerned, a considerable portion of the research has been conducted using frontal-vision data of human faces. The present work uses side-vision human-face data to develop a theoretical framework via a hybrid analytical model approach. Here, hybridization combines an artificial neural network (ANN) with a genetic algorithm (GA). We studied the geometrical features extracted from side-vision human-face data, and an additional study was conducted to determine the ideal number of geometrical features to select during clustering. Minimum-distance measurements are computed in the close vicinity of these clusters, which are then mapped for proper classification and for the decision process on the behavioral pattern. Support vector machines and artificial neural networks are utilized to identify the acquired data. A method known as adaptive-unidirectional associative memory (AUTAM) was used to map one side of a human face to the other side of the same subject. The behavioral pattern has been detected as a two-class classification problem, and the decision process has been carried out using a genetic algorithm with best-fit measurements. The algorithm developed in the present work has been tested on a dataset of 100 subjects as well as on standard databases such as FERET, Multi-PIE, the Yale Face database, RTR, and CASIA. The complexity measures have also been calculated under worst-case and best-case situations.

Keywords: Adaptive-unidirectional-associative-memory technique; artificial neural network; genetic algorithm; hybrid approach

1  Introduction

Detecting the behavioral pattern of any subject (human) is a most challenging task, especially in the defense field. The current study examines this challenge using side-view human-face data. According to the literature, only a few researchers have used side views of human faces to identify behavioral traits. Most research has been conducted using frontal-vision data of human faces, either for face recognition or for biometric characteristic assessment. Until now, very little research has been carried out to detect behavioral patterns. Several significant improvements have been made in identifying human faces from the side (parallel to the picture plane), using a five-degree switching mechanism in a regressive or decreasing-step method. Bouzas et al. [1] used a similar method for dimensional-space reduction, with the switching amount based on the mutual information required between the transformed data and their associated class labels. Later, [2] improved on this performance by using descriptors to characterize human-face pictures and a clustering algorithm to select and classify variables for human-face recognition.

Furthermore, Chatrath et al. [3] used facial emotion for interaction between people and robots by employing a front vision of the human face, and Zhang et al. [4] achieved some frontal-vision-related work with the target at a distance. Many researchers have suggested better regression-analysis classification techniques based on the frontal perspective of human-face data. Zhao et al. [5] showed that learning representations to predict the placement and shape of face images may boost emotion detection from human images. Similarly, Wang et al. [6,7] suggested a technique of interactive frontal processing and segmentation for human-face recognition. The literature analysis indicated that relatively few scholars have worked on discovering behavioral patterns from human-face data; most of the study relied on statistical methodologies and classic mathematical techniques. Earlier, some artificial neural network components and other statistical approaches achieved significant, satisfactory results [8–10].

A subsequent study was conducted to recognize the subject when the human face is aligned parallel to the picture plane, using a hybrid methodology [11,12]. The current research study has also been conducted employing hybrid cloud computing.

In the same year, algorithms were proposed for secured photography using a dual camera. This method helps to address issues such as authentication, forgery detection, and ownership management, and was developed for Android phones with dual cameras for security purposes [13].

A fuzzy-logic-based facial expression recognition system was introduced that identifies seven basic facial expressions: happy, anger, sad, neutral, fear, surprise, and disgust. This type of system is used in the intelligent selection of areas in a facial expression recognition system [14]. An algorithm was proposed for video-based face recognition that can compare a still image with a video and match videos with videos; a three-stage approach optimizes the rank list across video frames for effective matching of videos and still images [15]. A method was also introduced for exploring facial asymmetry using optical flow. In terms of shape and texture, the human face is not bilaterally symmetric, and the attractiveness of human facial images can be increased by artificial reconstruction and facial beautification; using optical flow, an image can be reconstructed to the required symmetry [16]. In addition, an effective, efficient, and robust method for face recognition based on image sets (FRIS), known as Locally Grassmannian Discriminant Analysis (LGDA), has been proposed. A novel accelerated proximal-gradient-based learning algorithm is used to find the optimal set of local linear bases, and LGDA is combined with the linearity-constrained nearest neighborhood (LCNN) clustering technique to express the manifold as a collection of local linear models (LLMs).

An algorithm was proposed to change the direction of the subject's face from parallel (zero degrees) to the picture plane to diagonal (45 degrees) to the image plane. In this research, artificial neural networks (ANN) and genetic algorithms (GA) were used. The research is divided into two parts: in the first part, features are obtained from the frontal face and a database is built; in the second part, a test face image with all feasible alignments is developed and a hybridized forward computing approach is performed for proper identification of the subject's face. A properly matched classification-decision procedure must be performed utilizing the datasets generated in the current research activity. Other datasets, such as FERET, were examined for an acceptable optimization method. An algorithm was designed to identify cognitive qualities and the subject's physiological attributes to support the biometric safety system, for which specific case analyses must also be conducted. Databases have been widely developed, and a suitable comparison methodology has been analyzed [16]. These studies reveal various features with varying performance [17]. The work was structured for biometric study. Using a deep CNN with genetic segmentation, a method has been proposed for autonomous detection and recognition of animals; standard recognition methods such as SU, DS, MDF, LEGS, DRFI, MR, and GC are compared to the suggested work, and a database containing 100 different subjects, two classes, and ten photos is produced for training and examining the suggested task [18]. The CBIR algorithm examined visual image characteristics, such as colour, texture, and shape; non-visual aspects also play a key role in image recovery. The image is extracted using a neural network, which enables the computation to be improved using the Corel dataset [19,20].
A new age-function modelling technique based on the fusion of local features has also been presented: image normalization is performed first, followed by a feature-extraction process, and an Extreme Learning Machine (ELM) classifier is used to evaluate output pictures for the respective input images [21]. The proposed algorithm has a higher recall value and accuracy, and a lower error rate, than previous algorithms. A new 5-layer SegNet-based encoder enhances accuracy on various dataset benchmarks; the detection rate was up to 97 percent, and the runtime is reduced to one second per image [22].

Modeling of Datasets

In the present work, the modeling of the datasets is described briefly. The complete work has been carried out in two phases: the modeling phase and the understanding phase. In the first phase, a knowledge-based model called the RTR database model has been formed as a corpus over human-face images. The strategies applied for the formation of the corpus are the image warping technique (IWT) and the artificial neural network (ANN). The model has been formed after capturing the human-face image through a digital camera or by scanning the human-face image (refer to Appendix B). Human images have also been collected from different standard databases (refer to Appendix A). How the human-face images have been captured in the present work is depicted in Fig. 1 below.


Figure 1: Functional block diagram for capturing the human face image

From Fig. 1, a known human-face image is captured through hardware, meaning a camera or a scanner. During capture, a feedback control mechanism is applied manually, with adjustments made for two factors: resolution and distance. A fixed resolution has been kept while capturing a known human-face image, and the distance has been fixed at 1 meter between face and camera. The second factor has been handled by proper scaling and rectification of the image. This process is jointly called the image warping technique (IWT). After proper adjustment, the image is stored in a file in jpg (Joint Photographic Experts Group) format.
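The scaling step of the IWT can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name and the nearest-neighbour interpolation choice are assumptions, standing in for the manual resolution/distance feedback control described above.

```python
def rescale_nearest(image, target_h, target_w):
    """Rescale a 2-D grayscale image (list of rows) to a fixed target
    resolution via nearest-neighbour sampling, emulating the fixed-resolution
    adjustment of the capture stage."""
    src_h, src_w = len(image), len(image[0])
    out = []
    for r in range(target_h):
        src_r = min(src_h - 1, r * src_h // target_h)
        row = []
        for c in range(target_w):
            src_c = min(src_w - 1, c * src_w // target_w)
            row.append(image[src_r][src_c])
        out.append(row)
    return out
```

In practice the rectified output would then be written to the jpg file mentioned above; here only the resampling is shown.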

The objectives and highlights of the research work are represented by the following steps:

•   Enhanced and compressed image (human-face image) has to be obtained.

•   Segmentation of the face image has to be done.

•   Relevant features have to be extracted from the face image.

•   Modeling of face features using artificial neural network (ANN) technique, wavelet transformation, fuzzy c-means, and k-means clustering techniques, forward-backward dynamic programming.

•   Development of an algorithm for the formation of the above model.

•   Understanding of the above-framed model for automatic human face recognition (AHFR) using genetic algorithm method and classification using fuzzy set rules or theory.

•   Development of an algorithm for the understanding of human face model for AHFR.
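The steps above can be sketched as a two-phase pipeline skeleton. Every function here is an illustrative placeholder (none of these names comes from the paper), with the ANN/wavelet/clustering and GA stages reduced to trivial stand-ins:

```python
# Placeholder stages, one per bullet above; real implementations would use
# ANN, wavelet transforms, fuzzy c-means / k-means, and a genetic algorithm.
def enhance_and_compress(img):
    return img                      # enhancement / compression stand-in

def segment_face(img):
    return img                      # segmentation stand-in

def extract_features(img):
    return [sum(img)]               # feature-extraction stand-in

def build_face_model(raw_images):
    """Modeling phase: enhance, segment, extract, and store face features."""
    return [extract_features(segment_face(enhance_and_compress(i)))
            for i in raw_images]

def recognize(model, test_image):
    """Understanding phase: GA-style decision reduced here to a
    nearest-feature lookup over the framed model."""
    cand = extract_features(segment_face(enhance_and_compress(test_image)))
    return min(range(len(model)), key=lambda k: abs(model[k][0] - cand[0]))
```

The point of the skeleton is the data flow (model built once in phase one, queried in phase two), not the trivial stand-in computations.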

The planned study comprises the following sections: Section 2 provides the solution methodology with mathematical formulations, Section 3 presents the results and discussions, Section 4 concludes with remarks and an expanded area of study, and Section 5 contains the references in the last section of the paper.

2  Solution Methodology with a Mathematical Formulation


The mathematical formulations and their practical execution of the current study are described in succeeding subcategories.

As far as the present situation in the field of morphological image processing is concerned, a great deal of human-face identification research has been carried out using a 90-degree orientation of the subject in the imaging plane. The majority of this work used statistical methods and traditional mathematical approaches, and a number of soft-computing components and other statistical methods have yielded remarkably good results. Little effort has been made to recognize the subject when the human face is parallel to the image plane using hybrid approaches of soft-computing technology, where a soft-computing hybrid means a combination of soft-computing tools. The current research has also been done using advanced computing techniques; high-end technology here involves combining soft computing and symbolic computing. Soft computing includes artificial neural networks, fuzzy set theory, and genetic algorithms. Symbolic computing operates on a special type of data known as symbolic objects, and enables researchers to perform mathematical operations without numerical-value calculation. Analytical symbolic calculations include differentiation, partial differentiation, definite and indefinite integration, and the evaluation of limits, as well as transformations over symbolic objects with symbolic variables, symbolic numbers, symbolic expressions, and matrices. Very few previous contributions have used the neuro-genetic approach to recognize the human face of a side-view subject (parallel to the image plane). The mechanism was applied as a fully 2-D method of transformation with a five-degree switch in reducing-step or regressive strategies. The algorithm was developed for the recognition process, in which the orientation of the subject's human face was moved from zero degrees to the diagonal position of the image plane (45 degrees).
The techniques used are the artificial neural network (ANN), genetic algorithms (GA), and some useful computing concepts to identify a human face from the side view (parallel to the image plane). The research was done in two phases: the modelling phase and the understanding phase. In the first phase, only the frontal image of the human face is studied to extract relevant features for the development of a corpus called the human-face model. In the second phase, a test face image with all possible orientations is captured, and the advanced computing approach is applied with high-end computing in order to properly recognize the subject. A correct matching-classification-decision process has been carried out using the data sets created during the present research. Other datasets such as FERET and CASIA were also tested for acceptable performance measures. Furthermore, the computation of polynomial complexities with proper transmission capability has been studied with adequate justification, rather than spatial and time complexities, for improving the performance of secured systems and the promotion of global cyber safety. An algorithm has been developed to support overall identification of the subject's behavioral and physiological features in order to justify the biometric security system. A case-based study has also taken various datasets into account to justify the biometric safety system, and a proper comparison model with different characteristics and performance variations is shown. The comparative study and final observations were experimentally tested with at least 10 images each of 100 subjects of different ages, with updates. The complexity of the developed algorithms was calculated, and their performance measurements were compared with other databases such as FERET, Multi-PIE, Yale Face, and RTR. The corpus and algorithms developed in this work have been found to be satisfactory.
The complete flow diagram of the work is shown in Fig. 2.


Figure 2: Complete flow diagram of work

2.1 Behavioral Pattern Detection

Let side-vision human-face data be gathered under the situations stated below:

•   The subject is either standing or sitting idle

•   The outfit of the subject is actual

•   The subject is talking with someone face to face

Let “ZLnFL1”, “ZLnFL2”, “ZLnFL3”, “ZLnFL4”, and “ZLnFL5” be the left-side-vision human-face data at five different time intervals with minimum time lag for the subject ‘ZLn.’ Similarly, let “ZRnFR1”, “ZRnFR2”, “ZRnFR3”, “ZRnFR4”, and “ZRnFR5” be the right-side-vision human-face data at five different time intervals with minimum time lag for the subject ‘ZRn,’ where ‘n’ is the subject number, with 1 ≤ n < ∞.

In general, the left-side-vision pattern of the human face is “ZLnFLm,” and the right-side-vision pattern is “ZRnFRm.”

So, the frontal-vision human-face data “ZLRnVLRm” will yield,

ZLRnVLRm = ZLnFLm + ZRnFRm    (1)
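Eq. (1) amounts to an element-wise combination of the two side-vision feature vectors. A minimal sketch under the assumption that each side's data is held as a flat feature list (the list representation and function name are illustrative):

```python
def combine_side_visions(left, right):
    """Eq. (1): frontal-vision data modeled as the element-wise sum of the
    left-side (ZLnFLm) and right-side (ZRnFRm) feature vectors."""
    if len(left) != len(right):
        raise ValueError("side-vision vectors must have equal length")
    return [l + r for l, r in zip(left, right)]
```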

(A) Clustering of geometrical features from left-side-vision of human-face data

Clusters have even and odd elements. To distinguish the clustering of the left-side data, ZLnFLm, into even and odd components, consider ‘FZT’ training datasets, where ‘F’ represents human-face data and ‘T’ represents the total training datasets. The even and odd components are ‘FZE’ and ‘FZO’, respectively, for a left-side vision ‘L1’. Hence it yields,

FZT = FZE + FZO    (2)

So, the total training image set ‘T’ is the sum of the even training sample images ‘E’ and the odd training sample images ‘O’. Applying mathematical linearity to the combined effect of Eq. (2) gives,

ZL1SL1 = FZT = FZE + FZO    (3)
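The even/odd partition of Eqs. (2) and (3) can be sketched as follows; index parity is an assumed interpretation of the "even" and "odd" components:

```python
def split_even_odd(fzt):
    """Eqs. (2)-(3): partition the training set FZT into even-indexed (FZE)
    and odd-indexed (FZO) components; together they recover FZT."""
    fze, fzo = fzt[0::2], fzt[1::2]
    assert len(fze) + len(fzo) == len(fzt)   # FZT = FZE + FZO
    return fze, fzo
```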

Thus, the equation for highly interconnected and poorly interconnected human-face collected data, ZLnFLm, becomes,

ZLnFLm = ρT ⊗ FZT    (4)

where ρT is the linearity factor for the total training datasets and ⊗ is the convolution operator.

Now, for the even cluster We let μe be the mean over the Ne even human-face samples, and for the odd cluster Wo let μo be the mean over the No odd human-face samples; the total sample mean is represented by μT,

μT = (1/Ne) Σ Xe + (1/No) Σ Xo    (5)

So, the projection μ̄T of the mean μT onto the projected mean points yields,

μ̄T = (1/NT) Σ ρT μT = ρT μT    (6)

The divergence of the projected means over the odd and even training human-face sample images is represented by,

|μ̄e − μ̄o| = |ρT (μe − μo)|    (7)
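Under the formulation above, the separation of the projected cluster means in Eq. (7) reduces to scaling the raw mean difference by the linearity factor ρT. A hedged numeric sketch (function names are illustrative):

```python
def cluster_mean(samples):
    """Mean of one cluster's samples (mu_e or mu_o)."""
    return sum(samples) / len(samples)

def projected_mean_separation(even_samples, odd_samples, rho_t):
    """Eq. (7): |mu_e_bar - mu_o_bar| = |rho_T * (mu_e - mu_o)|,
    i.e. projecting with linearity factor rho_T scales the mean gap."""
    mu_e = cluster_mean(even_samples)
    mu_o = cluster_mean(odd_samples)
    return abs(rho_t * (mu_e - mu_o))
```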

Let LFin = {LF1, LF2,…, LFn} and DFout = {DF1, DF2,…, DFm} be the input left-side-vision data and the output code words, respectively, each of maximum size. The test data set BFtest_data_set = {BF1, BF2,…, BFu} has been matched against the trained data set AFtrained_data_set = {AF1, AF2,…, AFq} under the linearity-index condition DFout = DFin.

Mathematically, the relation is,

Cmatching = argmin1≤q≤n {S(DFe, DFo)}    (8)

The previous metric provides the closest representation of the human face's left-side image with its tightly coupled crucial elements. Thus, the system's experience-and-understanding database analyzed each extracted feature in the data-processing stream, and in Cmatching the best codeword was picked as the one with the minimum average distance. If the unknown vector cannot be matched to a known vector, this condition is considered an OOL issue. Attributing values to all database codewords has reduced the OOL issue. The highest vector values thus yield,

COOL = argmax1≤q≤n {S(DFe, DFo)}    (9)

CDIFF in Eq. (10) is the absolute difference for the cropped pattern, and it yields,

CDIFF = |COOL − Cmatching|    (10)

Dividing Eq. (8) by Eq. (10) yields CCMR,

CCMR = Cmatching / |CDIFF|    (11)
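Given a vector of codeword distances S(DFe, DFo), Eqs. (8) through (11) reduce to simple min/max arithmetic. A sketch under that assumption (the flat distance list is an illustrative simplification):

```python
def matching_metrics(distances):
    """Eqs. (8)-(11): C_matching = minimum codeword distance,
    C_OOL = maximum distance, C_DIFF = |C_OOL - C_matching|,
    and C_CMR = C_matching / |C_DIFF|."""
    c_matching = min(distances)
    c_ool = max(distances)
    c_diff = abs(c_ool - c_matching)
    c_cmr = c_matching / c_diff if c_diff else float("inf")
    return c_matching, c_ool, c_diff, c_cmr
```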

(B) Clustering of geometrical features from right-side-vision of human-face data

Similarly, to distinguish the clusters of the right-side-vision human-face data, ZRnFRm, into even and odd components, consider ‘FZT’. The mathematical formulations for this part follow Eqs. (2) through (11).

2.2 Gradient or Slope of Human-Face Data with Strongly Connected Components

Let the slope of the human-face pattern be xSCF, where SCF means strongly connected features: “shape”, “size”, “effort”, and “momentum”. The superscript ‘x’ is the index of the strongly connected feature. Let the slopes of the left-side-vision and right-side-vision human-face data be xLSCF and xRSCF, respectively,

xSCF = xLSCF / xRSCF    (12a)

or xSCF = xRSCF / xLSCF    (12b)

1Shape = 1left_side_shape / 1right_side_shape    (13)

2Size = 2left_side_size / 2right_side_size    (14)

3Effort = 3left_side_effort / 3right_side_effort    (15)

4Momentum = 4left_side_momentum / 4right_side_momentum    (16)
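Eqs. (13) through (16) are four instances of the ratio in Eq. (12a). A sketch with the per-side feature values held in dictionaries (the dictionary layout is an assumption for illustration):

```python
def scf_gradients(left, right):
    """Eqs. (12a), (13)-(16): gradient of each strongly connected feature
    (shape, size, effort, momentum) as the left/right value ratio."""
    return {f: left[f] / right[f]
            for f in ("shape", "size", "effort", "momentum")}
```

For a perfectly symmetric face every ratio is 1.0; deviations from unity flag asymmetry between the two side visions.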

3  Results and Discussions

Practical investigations and discussions regarding identifying behavioral patterns were conducted following the image pre-processing activities. The performance data were initially processed using a schematic diagram and standardized image processing methods. Signal processing was achieved utilizing the discrete cosine transform technique, as it has been shown to function flawlessly for real-coded vector analysis, and the segmentation procedure was then carried out by statistical analysis. Boundary detection is achieved using morphological methods from digital image processing, such as erosion and dilation. The image warping approach was employed by first picking the region of interest (ROI) and hence the objects of interest (OOI). Cropping the object and subsequent image rectification are performed so that the system's efficiency does not suffer. Cropping results in the storage of the cropped picture in a separate file and the extraction of crucial geometrical information. The clusters of these extracted features are shown in Fig. 3.
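The erosion/dilation boundary step mentioned above can be sketched on a binary image. This is an illustrative reconstruction, not the paper's code: it assumes a 3x3 structuring element and takes the boundary as dilation minus erosion (the morphological gradient), with border pixels using only their in-bounds neighbours.

```python
def _fits(img, r, c, op):
    """Apply `op` (any -> dilation, all -> erosion) over the 3x3
    neighbourhood of (r, c), restricted to in-bounds pixels."""
    h, w = len(img), len(img[0])
    vals = [img[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if 0 <= r + dr < h and 0 <= c + dc < w]
    return op(vals)

def boundary(img):
    """Boundary of a binary image as dilation minus erosion."""
    h, w = len(img), len(img[0])
    dil = [[1 if _fits(img, r, c, any) else 0 for c in range(w)] for r in range(h)]
    ero = [[1 if _fits(img, r, c, all) else 0 for c in range(w)] for r in range(h)]
    return [[dil[r][c] - ero[r][c] for c in range(w)] for r in range(h)]
```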


Figure 3: Clusters of features extracted from test data of human-face

As seen in Fig. 3, only a few factors exhibit uniformity. As a result, additional analysis was conducted by switching the data set with five degrees of freedom between regressive and advanced modes. The research was graphed and is shown in Fig. 4.


Figure 4: Comparison of the different testing frames with different switching patterns

From Fig. 5, the pattern's typical behavior is very uniform, which indicates that the curve's behavior is not highly diverse in origin. The test data set has been subjected further to descriptive statistics. From Fig. 5, it has been observed that the clusters of the trained and test human-face data sets are almost close to linearity. Further, the cumulative distribution of the above test data sets has been computed, as shown in Fig. 5.


Figure 5: Normal distribution of different data sets of human-face

As seen in Fig. 6, the boundary values of both the test and training data sets are exceptionally near to fitness. Therefore, the acceptance test was done utilizing the genetic algorithm methodology, adjusted for the most vital measurements; if compatibility fails, another subject sample is taken. Further analysis is then performed to determine the better measures. This was accomplished by gradually or regressively switching human faces with a five- to ten-degree displacement. Subsequently, it was discovered that most variables follow the regular pattern of the corpus's training samples.


Figure 6: Boundary of face code detection using unidirectional temporary associative memory (UTAM)

As a consequence, best-fit measurements were chosen, and further segmentation and detection analyses utilizing the genetic algorithm soft-computing approach were done. Using face-code formation, one-to-one mappings were performed throughout this technique. Fig. 7 shows the border for detecting the face code using unidirectional temporary associative memory. Associative memories are neural network designs that store and retrieve correlated patterns in essential elements when prompted by associated matches; in other words, associative memory is the storage of related patterns in an acceptable form. Fig. 7 shows the graphical behavioral-pattern matching of the test data sets. “Over act” is defined as “abnormal behavior,” while “normal act” is labeled as “normal behavior.” Whenever the behavioral characteristic curve is free of disruptions, the behavior is considered normal; when the curve has a large number of interruptions, it is considered under-act behavior; and when it has a smaller number of disruptions, it is considered over-act behavior.
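The one-to-one face-code mapping via associative memory can be illustrated with a Kosko-style outer-product store [17]. The exact AUTAM update rule is not given in the text, so this sketch shows only unidirectional recall of a right-side code from a left-side code; the bipolar (+1/−1) encoding is an assumption:

```python
def train(pairs, n, m):
    """Store bipolar (+1/-1) pattern pairs (x, y) in an n x m weight
    matrix via outer-product (Hebbian) accumulation."""
    w = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += x[i] * y[j]
    return w

def recall(w, x):
    """One-way (unidirectional) recall: y_j = sign(sum_i x_i * w_ij)."""
    m = len(w[0])
    s = [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(m)]
    return [1 if v >= 0 else -1 for v in s]
```

Recall is one pass from the left-side code to the right-side code; a bidirectional memory would iterate the two directions until the pair stabilizes.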


Figure 7: Applying a distinct subject's face image to the overacting, regular act, and underacting moods

The whole behavioral detection system's performance is proportional to the size of the corpus or database. If the unknown pattern is not matched against the known pattern throughout the detection procedure, an OOC (Out of corpus) error occurs. A substantial corpus has been generated in this study to prevent such issues, therefore resolving the OOC issue. Numerous common databases such as FERET, Multi-Pie, and Yale face databases were also examined in the current study for automated facial recognition. Tab. 1 below illustrates the contrast.


The obtained RTR corpus database was evaluated in combination with the FERET, Multi-PIE, and Yale Face databases, and the results were determined to be almost adequate. Fig. 8 illustrates the contrast graphically.

As seen in Fig. 8, the corpus created in this research endeavor produces findings that are pretty similar to those found in the FERET database. Additionally, it has been shown that after selecting two features, the behavioral detection system's performance increases, and the whole evaluation procedure stays positive with seven features picked with the highest difficulty levels. Additionally, Tab. 2 illustrates the overall performance for behavioral detection when the maximum number of characteristics is included.


Figure 8: Comparative study and analysis as per complexity measures


Fig. 9 represents the full behavioral detection system results graphically, with an average detection rate of 93 percent for the Normal behavioral pattern.


Figure 9: Graphical representations of performance measures

Fig. 10 illustrates the developed algorithms’ behavioral performance metrics and their comparability to other algorithms that use human face patterns. In addition to the findings and comments in the current study, Fig. 10 depicts the general behavioral pattern for the training and testing datasets.


Figure 10: Overall behavioral pattern of trained and test data sets

As seen in Fig. 11, when the appropriate set of attributes is found using the genetic algorithm methodology, the behavior of the training and test data sets shows a similar pattern. For the same dataset, the actual result was 93 percent recognition accuracy for usual behavior. Fig. 11 shows the result.

The method used to obtain the given findings is described here, along with its complexity.


Figure 11: Outcome of normal behavioral pattern of the test data set

Developed Algorithm HCBABD (Hybrid Computing based Automatic Behavior Detection):


Complexity measures for the developed system: in the worst-case assumption, let ‘p’ denote the total number of training data. The complexity is proportional to the number of loop executions divided by the total number of events. In the worst-case scenario, the loop will execute ‘p + 4’ times, so the worst-case complexity measure is ‘(p + 4)/p’. Similarly, in the best case, the smallest number of features necessary for the mapping procedure is one, which reduces the execution time; thus, the best-case complexity measure is ‘(p + 1)/p’. Current automatic emotion recognizers typically assign category labels to emotional states, such as “angry” or “sad,” relying on signal processing and pattern recognition techniques. Efforts in human emotion recognition have mostly relied on mapping cues such as speech acoustics (for example, energy and pitch) and/or facial expressions to some target emotion category or representation. The comparative analysis of algorithms is shown in Tab. 3, and the time complexity is represented in Tab. 4.
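The two measures can be written directly; the normalization by p follows the text's "loop executions divided by the total number of events" (the function name is illustrative):

```python
def complexity_measures(p):
    """Worst case: the loop runs p + 4 times -> (p + 4)/p.
    Best case: one feature suffices and the loop runs p + 1 times
    -> (p + 1)/p. Both approach 1 as the training set grows."""
    if p <= 0:
        raise ValueError("p must be a positive training-sample count")
    return (p + 4) / p, (p + 1) / p
```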



4  Conclusion and Further Work

In the present study, two behavioral patterns, namely normal and aberrant, have been categorized. The categorization is based on four geometrical characteristics taken from left- and right-side-vision human-face data. To make decisions about detecting behavioral patterns, these characteristics were clustered, and a correct mapping method was applied. The gradient of each of the features extracted from the left- and right-side visions has been computed, and, when plotted, a uniformity-index attribute feature is generated. The dispersion of the gradients has been calculated, providing either positive or negative values: for normal behavior the decision is positive, and for aberrant behavior the decision is negative. The efficiency of the proposed approach has been determined. In the worst-case scenario, the complexity of the suggested method is “(p + 4)/p”; in the best-case scenario, it is “(p + 1)/p,” where ‘p’ is the total frequency of occurrence. The current work might be extended to incorporate identification and comprehension of human-brain signals and human-face-speech patterns to establish a tri-modal biometric security system. The task might be broadened to include diagnosing various health concerns related to breathing, speaking, brain function, and heart function. Furthermore, this technology might be utilized to further the development of a global multi-modal biometric network security system.

Acknowledgement: The authors extend their appreciation to the Saudi Electronic University for funding this research work.

Funding Statement: The authors thank the Saudi Electronic University for financial support to complete the research.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.