Open Access
EDITORIAL
Introduction to the Special Issue on Artificial Intelligence Emerging Trends and Sustainable Applications in Image Processing and Computer Vision
1 College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia
2 Automated Systems & Computing Lab (ASCL), Prince Sultan University, Riyadh, 12435, Saudi Arabia
* Corresponding Author: Ahmad Taher Azar. Email:
(This article belongs to the Special Issue: Artificial Intelligence Emerging Trends and Sustainable Applications in Image Processing and Computer Vision)
Computer Modeling in Engineering & Sciences 2025, 144(1), 29-36. https://doi.org/10.32604/cmes.2025.069309
Received 19 June 2025; Accepted 26 June 2025; Issue published 31 July 2025
The rapid development of artificial intelligence (AI), machine learning (ML), and deep learning (DL) in recent years has transformed many sectors, fundamentally shifting how complex problems are solved and decisions are made across fields. These advanced technologies have enabled significant breakthroughs in sectors including entertainment, finance, transportation, and healthcare. AI systems that analyze vast volumes of data can identify patterns and generate predictions with remarkable accuracy, driving efficiency and innovation, improving decision-making processes, and facilitating the development of more intelligent solutions. As organizations increasingly adopt these technologies, the potential for AI to change processes and improve results continues to expand.
In particular, image processing and computer vision have witnessed the emergence of new opportunities for automating tasks that previously relied on human expertise. The application of AI, ML, and DL in these domains has led to substantial advancements. In medical diagnostics, for instance, AI-based image analysis has improved diagnostic accuracy and enabled earlier disease detection [1–3]. In the domain of autonomous vehicles, AI technologies have improved the interpretation of visual data, allowing vehicles to navigate complex environments safely. Beyond gains in accuracy and processing speed, the integration of AI in these fields has enabled applications capable of operating in real time, contributing significantly to accessibility, efficiency, and safety across multiple disciplines.
As the boundaries of AI continue to expand, the necessity of understanding its implications and applications becomes increasingly critical. Ethical concerns regarding the use of AI technologies are gaining prominence as issues of fairness, transparency, and accountability become more pressing. The emphasis on the need for diverse datasets and the potential for bias in algorithms will likely shape the future landscape of AI as different industries engage in ongoing dialogues concerning responsible AI development.
In addition, the shifting educational landscape highlights the demand for a workforce equipped with AI- and ML-relevant skills. Curricula are being updated to include topics such as data science, algorithm design, and ethical AI, preparing future generations for the opportunities and challenges posed by these technologies. Collaboration among academia, industry, and government is essential for fostering innovation and maintaining a balanced approach to AI.
In the corporate sector, the influence of AI on operational effectiveness cannot be overstated. Businesses are increasingly utilizing AI tools to streamline processes, reduce costs, and enhance customer interactions. Predictive analytics, powered by ML algorithms, enables companies to anticipate customer needs and preferences, allowing for more personalized services. In addition, the automation of routine tasks frees human resources to concentrate on strategic initiatives, encouraging growth and innovation.
As AI technologies continue to evolve, public discourse around their societal impact remains central. The potential displacement of jobs due to automation raises critical concerns about the future of work. Policymakers and industry leaders must address these challenges to ensure the equitable distribution of AI’s benefits. The ethical deployment of AI will shape its future and determine the extent of its impact on daily life.
Across many sectors, the rapid progress of AI, ML, and DL has led to profound transformations. The advancements in accuracy, efficiency, and decision-making capabilities are undeniable. All members of society share the responsibility of managing the consequences of these technologies as they advance. Fully harnessing these transformative innovations requires continued exploration of AI’s potential, guided by a balanced approach that pairs innovation with ethical consideration.
This special issue highlights the increasing significance of AI in these domains, showing the groundbreaking advancements and innovations that are shaping the future of image processing and computer vision.
AI has fundamentally transformed approaches to visual data, enabling machines to perform visual recognition and interpretation tasks with human-like competence. This special issue investigates the latest trends, breakthroughs, and challenges associated with integrating AI technologies into image processing and computer vision. By examining the complexities of these fields, including medical imaging, autonomous vehicles, surveillance systems, facial recognition, and more, the collected papers provide a comprehensive picture of how AI is broadening the scope of what is achievable in various applications.
The scope of this special issue reflects the wide range of AI applications in image processing and computer vision. It explores the methodologies, algorithms, and models propelling progress in the field, alongside the ethical and societal implications associated with such powerful technologies. From advanced DL techniques to the integration of AI with Internet of Things (IoT) devices, and addressing issues related to interpretability and fairness, this collection of articles brings together leading experts and researchers from around the world to share insights into cutting-edge AI solutions and the future trajectories of these technologies.
This special issue stands as a testament to the ongoing evolution and expansion of AI in these critical areas, and it is intended to serve as a source of inspiration for further research and collaboration aimed at unlocking the full potential of artificial intelligence for visual data analysis and comprehension. This compilation is expected to be a valuable resource for researchers, practitioners, policymakers, and anyone interested in the expansive scope of AI in image processing and computer vision.
Among the contributions, Khairnar et al. [4] systematically evaluate various pre-trained convolutional neural networks (CNNs) for face liveness detection, addressing vulnerabilities in biometric authentication systems. The study reveals that DenseNet201 achieves the highest accuracy using transfer learning and fine-tuning, reaching 98.5% on the NUAA dataset and 97.71% on the Replay Attack dataset. In addition, the researchers emphasize the importance of model efficiency, identifying MobileNetV2 as the most suitable option for real-time applications due to its low latency of 15 ms, memory usage of 45 MB, and energy consumption of 30 mJ.
The research also highlights the significance of cross-dataset generalization, demonstrating that DenseNet201 and MobileNetV2 maintain robust performance across diverse datasets, including the SiW-MV2. Statistical analyses confirm the reliability of the findings, with significant improvements in performance metrics such as precision, recall, and F1-score. The study provides a comprehensive framework for selecting appropriate models based on deployment requirements, advocating for DenseNet201 in high-security environments and MobileNetV2 for lightweight, real-time authentication solutions.
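For orientation, the following minimal PyTorch sketch shows the kind of transfer-learning setup such a study evaluates: an ImageNet-pretrained backbone with its classifier head replaced for binary live/spoof prediction. The frozen-layer choice, learning rate, and dummy batch are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classifier head
# with a binary live/spoof output, as in a typical transfer-learning setup.
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # live vs. spoof

# Freeze early layers so only the final dense block and the head fine-tune
# (one common recipe; the paper's exact unfreezing scheme may differ).
for name, param in model.named_parameters():
    if not name.startswith("features.denseblock4") and "classifier" not in name:
        param.requires_grad = False

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB crops.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```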
In another study, Alshahrani et al. [5] proposed a novel hybrid model that integrates features from multiple CNNs with an XGBoost classifier to enhance the early detection of Multiple Sclerosis (MS) using MRI analysis. The study emphasizes the limitations of manual diagnosis and highlights the importance of AI in automating and improving classification accuracy. The proposed system achieves remarkable outcomes, including an accuracy of 99.4% and a specificity of 99.75% for multi-class classification, employing techniques such as Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS) for feature selection. This demonstrates the effectiveness of combining multi-CNN features for improved diagnostic performance.
The research further elaborates on the methodology, which includes enhancing MRI images through Gaussian filtering and Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve lesion visibility. The Gradient Vector Flow (GVF) algorithm is employed for segmenting white matter lesions, which are processed by various CNN models (ResNet101, DenseNet201, and MobileNet) to extract deep feature maps. The results indicate that the hybrid approach not only surpasses previous studies but also provides a robust framework for the automated analysis of MRI images, supporting neurologists in timely and accurate MS diagnosis. Future work aims to validate these findings across diverse datasets and to explore the integration of other advanced AI techniques.
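The preprocessing stage can be sketched with standard OpenCV calls; the kernel size, sigma, and CLAHE parameters below are illustrative defaults rather than the values used in [5].

```python
import cv2
import numpy as np

def enhance_mri_slice(img_gray: np.ndarray) -> np.ndarray:
    """Denoise an 8-bit grayscale MRI slice, then boost local contrast."""
    # Gaussian filtering suppresses acquisition noise before enhancement.
    smoothed = cv2.GaussianBlur(img_gray, ksize=(5, 5), sigmaX=1.0)
    # CLAHE improves lesion visibility without over-amplifying noise;
    # clipLimit and tileGridSize here are illustrative defaults.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(smoothed)

slice_8bit = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in slice
enhanced = enhance_mri_slice(slice_8bit)
```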
Singh and Singla [6] presented a novel biometric recognition system for finger-vein identification using deep transfer learning, specifically the EfficientNet model combined with a self-attention mechanism. This approach addresses the limitations of existing CNN-based methods, which often struggle with generalization due to insufficient training data. The proposed EFI-SATL model achieves recognition accuracies of 98.14% on the HKPU dataset, 99.03% on the FVUSM dataset, and 99.50% on the SDUMLA dataset, employing data augmentation techniques and a simplified deep transfer learning framework. The integration of self-attention enhances the model’s ability to focus on salient features, improving performance in recognizing finger-vein patterns.
The research highlights the advantages of finger-vein biometrics, such as high security and stability, and addresses the challenges faced by current recognition systems, including the need for extensive training data and susceptibility to noise. The proposed methodology involves preprocessing images, applying K-fold cross-validation, and utilizing EfficientNet with a self-attention layer to extract and classify features effectively. Experimental results demonstrate that the EFI-SATL framework not only outperforms traditional methods but also provides a computationally efficient solution for biometric recognition, paving the way for future advancements in this field.
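As a rough sketch of the general pattern (not the authors' EFI-SATL code), a self-attention layer can be placed over the spatial grid of CNN features before classification; the EfficientNet-B0 trunk, feature width, and class count below are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class SpatialSelfAttention(nn.Module):
    """Single-head self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=1,
                                          batch_first=True)

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)   # attend across positions
        return out.transpose(1, 2).reshape(b, c, h, w)

backbone = models.efficientnet_b0(weights=None).features  # conv trunk only
attn = SpatialSelfAttention(channels=1280)                # B0's final width
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(1280, 100))                # assumed class count

feats = backbone(torch.randn(2, 3, 224, 224))             # (2, 1280, 7, 7)
logits = head(attn(feats))
```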
Lu et al. [7] introduced a multi-stage Siamese neural network framework for the recognition of seal images to enhance the accuracy and efficiency of seal authentication processes. The proposed method addresses the challenges of traditional manual inspection, which is often labor-intensive and prone to errors. The model effectively captures salient features from seal images using a self-attention mechanism within the Siamese network, improving recognition performance. The study also implements a rotation correction module based on Histogram of Oriented Gradients (HOG) to standardize seal angles, further enhancing the model’s robustness against variations in seal orientation.
The experimental results demonstrate the effectiveness of the proposed method, achieving an accuracy of 92.54% on the SEAL48_R45 dataset and 91.43% on the SEAL48_R90 dataset. The study includes a comprehensive evaluation using a new seal image dataset with 210,000 labeled pairs, showing the model’s ability to generalize across different seal types. The authors highlight the importance of using data augmentation and K-fold cross-validation to improve model performance and mitigate the effects of limited training data. Overall, this research contributes significantly to the field of biometric recognition, providing a reliable and automated solution for seal authentication in legal and financial sectors.
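A generic Siamese verification recipe, omitting the paper's self-attention and HOG-based rotation-correction stages, looks like the sketch below; the toy encoder, image size, and contrastive margin are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSealNet(nn.Module):
    """Shared encoder producing embeddings; similarity is judged by distance."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, a, b):
        # Both branches share weights, so matching seals map close together.
        return self.encoder(a), self.encoder(b)

def contrastive_loss(za, zb, same: torch.Tensor, margin: float = 1.0):
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

net = SiameseSealNet()
za, zb = net(torch.randn(4, 1, 96, 96), torch.randn(4, 1, 96, 96))
loss = contrastive_loss(za, zb, same=torch.tensor([1., 0., 1., 0.]))
```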
In addition, Mahdi et al. [8] proposed a novel approach for segmenting head and neck tumors using dual PET/CT imaging, employing a multi-stage UNet Transformer model. This research addresses the critical need for accurate tumor segmentation to enhance diagnosis, treatment planning, and outcome prediction in clinical oncology. The authors emphasize the limitations of existing 2D and 3D models, particularly in capturing complex tumor structures, and introduce a 2.5D approach that uses the strengths of both CNNs and transformer networks. Their methodology includes a comprehensive preprocessing pipeline, data augmentation, and K-fold cross-validation to improve model robustness. The proposed model demonstrates superior performance on three publicly available datasets—HeckTor2022, AutoPET2023, and SegRap2023—achieving high Dice scores and Jaccard indices, indicating its effectiveness in accurately delineating tumors.
The findings of the study reveal that the 2.5D UNet Transformer model consistently outperforms traditional 2D and 3D models across various metrics, achieving a Dice score of 81.777 for primary tumors and demonstrating enhanced boundary delineation capabilities. The authors emphasize that the integration of a self-attention mechanism significantly boosts the model’s ability to focus on relevant features, improving segmentation accuracy. This research not only fills a significant gap in the literature regarding head and neck tumor segmentation but also provides a robust framework that can be adapted for various applications in medical imaging, ultimately contributing to better patient outcomes and more efficient clinical workflows.
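The reported overlap metrics are straightforward to compute from binary masks; a minimal NumPy version, with synthetic masks as a stand-in, is shown below.

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    """Overlap metrics for binary segmentation masks of any shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = (2 * inter + eps) / (pred.sum() + truth.sum() + eps)
    jaccard = (inter + eps) / (np.logical_or(pred, truth).sum() + eps)
    return dice, jaccard

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[15:45, 15:45] = True
print(dice_and_jaccard(pred, truth))  # ≈ (0.69, 0.53) for this overlap
```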
Mahajan and Singla [9] developed “DeepBio”, a novel deep learning framework for person identification using ear biometrics. This research addresses the challenges posed by traditional facial recognition systems, particularly during the COVID-19 pandemic when masks hinder facial visibility. The authors propose a hybrid model that combines Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory (Bi-LSTM) networks to enhance the accuracy of identification through ear images.
The study utilizes five datasets: IIT Delhi (IITD-I and IITD-II), Annotated Web Images (AWE), Mathematical Analysis of Images (AMI), and EARVN1. Data augmentation techniques such as flipping, translation, and Gaussian noise are employed to improve model performance and reduce overfitting. The experimental results demonstrate that DeepBio achieves high recognition rates of 97.97%, 99.37%, 98.57%, 94.5%, and 96.87% on the respective datasets.
Comparative analysis shows that DeepBio outperforms existing methods, with improvements of up to 12% on certain datasets. The authors emphasize the importance of utilizing ear biometrics as a reliable, contactless identification method, especially in scenarios where traditional biometric systems can fail. This research contributes significantly to the field of biometric identification, providing a robust framework that can enhance security and streamline authentication processes in various applications.
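The CNN-plus-Bi-LSTM pattern behind DeepBio can be sketched generically: convolutional layers produce a feature map whose rows are read as a sequence by a bidirectional LSTM. The layer sizes, input resolution, and subject count below are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """CNN extracts a feature map; its rows become a sequence for a Bi-LSTM."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.bilstm = nn.LSTM(input_size=64 * 16, hidden_size=128,
                              bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_classes)

    def forward(self, x):             # x: (B, 3, 64, 64)
        f = self.cnn(x)               # (B, 64, 16, 16)
        seq = f.permute(0, 2, 1, 3).flatten(2)  # rows as timesteps: (B, 16, 1024)
        out, _ = self.bilstm(seq)
        return self.fc(out[:, -1])    # classify from the final state

model = CNNBiLSTM(num_classes=221)    # assumed number of enrolled subjects
logits = model(torch.randn(2, 3, 64, 64))
```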
Xiang et al. [10] introduced the Two-Layer Attention Feature Pyramid Network (TA-FPN) to improve small object detection in various applications, such as urban intelligent transportation and pedestrian detection. Recognizing the challenges posed by small objects, which often contain limited information and are easily obscured by backgrounds, the authors propose a novel framework that enhances feature fusion across different layers of a Feature Pyramid Network (FPN).
The TA-FPN comprises two main components: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM utilizes attention mechanisms to focus on the semantic information of objects, ensuring that adjacent layers share similar semantic content, mitigating the semantic gaps that often hinder small object detection. SODEM enhances local features and suppresses background noise, ensuring that each feature layer is rich in small object information.
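A loose sketch of the enhancement idea, not the authors' exact TAM/SODEM design, is a residual local-detail branch followed by a channel-attention gate on a single FPN level; all dimensions below are assumptions.

```python
import torch
import torch.nn as nn

class DetailEnhancement(nn.Module):
    """Strengthen local detail on one FPN level, then gate channels so
    background responses are suppressed (a SODEM-like pattern)."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.local = nn.Sequential(                  # local feature branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
        )
        self.gate = nn.Sequential(                   # channel attention gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        enhanced = x + self.local(x)                 # residual detail boost
        return enhanced * self.gate(enhanced)        # reweight channels

p3 = torch.randn(1, 256, 80, 80)                     # a typical FPN P3 map
out = DetailEnhancement()(p3)
```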
The researchers validated their approach on challenging datasets, including Microsoft COCO and PASCAL VOC, demonstrating significant improvements in detection accuracy. The TA-FPN achieved remarkable results, outperforming state-of-the-art detectors, particularly in detecting small objects. The experimental findings highlight the framework’s effectiveness in enhancing the precision of small object detection, showing its potential for real-world applications in various fields. This research contributes to the ongoing efforts to refine object detection technologies, particularly for small objects, which are critical for safety and operational efficiency in many scenarios.
Said et al. [11] presented an innovative AI-based helmet violation detection system aimed at enhancing traffic management and road safety. The study addresses the critical issue of motorcycle accidents, which often result from non-compliance with helmet regulations. Recognizing the importance of helmets in rider protection, the authors propose a system that adapts the PerspectiveNet architecture by replacing the original Res2Net with the more efficient EfficientNet v2 backbone. This modification significantly bolsters the system’s detection capabilities.
The proposed helmet violation detection system utilizes deep learning methodologies to achieve high accuracy in real-time monitoring. Through rigorous optimization techniques and extensive experimentation using the India Driving Dataset (IDD), the system demonstrates exceptional performance, achieving a detection accuracy of 95.2%, which surpasses existing benchmarks. This high level of accuracy is essential for effectively enforcing helmet usage regulations and improving road safety.
A notable feature of the study is the integration of a Two-layer Attention Feature Pyramid Network (TA-FPN), which enhances feature extraction and improves detection accuracy for small objects, particularly helmets. In addition, the Small Object Detail Enhancement Module (SODEM) is employed to strengthen local features and reduce background noise, ensuring that the system can identify helmet use even in challenging conditions.
The findings from this research highlight the potential of AI technologies in traffic enforcement systems, providing a robust framework that can help reduce motorcycle-related fatalities and improve overall public safety on the roads. The system aims to foster greater compliance with traffic regulations and contribute to safer riding practices by automating helmet violation detection.
Das et al. [12] introduced a novel approach to translating Urdu Sign Language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) specifically designed for this purpose. The study addresses a significant gap in sign language translation, focusing on a language with limited resources, unlike many existing works that primarily target languages with rich datasets. The researchers conducted experiments utilizing two datasets, consisting of 1500 and 78,000 images, respectively, employing a comprehensive methodology that includes data collection, preprocessing, categorization, and prediction.
Each sign image was transformed into a grayscale format and underwent noise filtering to enhance prediction accuracy. The performance of the UrSL-CNN was compared against several machine learning baseline methods, including support vector machines (SVM), Gaussian Naive Bayes, random forest, and k-nearest neighbors. The results demonstrated the superiority of the UrSL-CNN model, achieving an impressive accuracy of 95%. In addition, the model showed superior performance in precision, recall, and F1-score evaluations, marking a significant advancement in the field of sign language translation.
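Such a baseline comparison is easy to reproduce with scikit-learn; the sketch below uses the bundled digits images as a stand-in for the sign-language data, so the numbers it prints are illustrative only.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in data: flattened grayscale images, as the sign images would be
# after preprocessing (the real work used 1500- and 78,000-image datasets).
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

baselines = {
    "SVM": SVC(),
    "Gaussian NB": GaussianNB(),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in baselines.items():
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(f"{name:14s} acc={accuracy_score(yte, pred):.3f} "
          f"macro-F1={f1_score(yte, pred, average='macro'):.3f}")
```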
This research not only contributes to the enhancement of sign language translation technologies but also holds promise for improving communication accessibility for individuals with hearing impairments. The study aims to bridge the communication gap between deaf individuals and the hearing community by providing a reliable method for translating UrSL, fostering social inclusion and interaction. The paper outlines its structure, with sections dedicated to related work, the design and architecture of the UrSL-CNN model, experimental results, and conclusions, paving the way for future research in this critical area.
Khan et al. [13] explored an innovative method for sleep posture classification utilizing both RGB and thermal cameras, enhanced by deep learning models. Recognizing the importance of accurate sleep posture monitoring for patient comfort and health, the study addresses the challenges posed by traditional methods, particularly the obstructions caused by blankets. The proposed approach captures a dataset of sleep postures through video recordings, focusing on six common postures: supine, left log, right log, prone head, prone left, and prone right. The data collection involved 10 participants under two conditions: with and without blankets.
The methodology consists of several key steps. Initially, the video data is normalized into individual frames. The study then employs fine-tuned, pretrained models, specifically VGG16 and ResNet50, to extract features from the images. Following feature extraction, a serial fusion technique based on normal distribution is applied to merge the vectors derived from both RGB and thermal datasets. This fusion approach is crucial for enhancing posture classification accuracy, especially in scenarios where blankets obscure visibility. The final classification is performed using machine learning classifiers, achieving impressive results—96.7% accuracy with Quadratic Support Vector Machine (QSVM) when no blanket is used, and 99% accuracy when normal distribution serial fusion is applied to features obtained with a blanket.
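One plausible reading of normal-distribution serial fusion, z-scoring each modality's feature vectors before serial concatenation and feeding a quadratic-kernel SVM, is sketched below; the feature dimensions and labels are synthetic stand-ins, and the paper's exact formulation may differ.

```python
import numpy as np
from sklearn.svm import SVC

def serial_fuse(rgb_feats: np.ndarray, thermal_feats: np.ndarray) -> np.ndarray:
    """Z-score each modality (fit a normal distribution per feature), then
    concatenate serially so both contribute on a comparable scale."""
    def zscore(f):
        return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)
    return np.hstack([zscore(rgb_feats), zscore(thermal_feats)])

rng = np.random.default_rng(0)
rgb = rng.normal(size=(60, 512))       # e.g., VGG16-derived vectors (assumed dim)
thermal = rng.normal(size=(60, 512))   # e.g., ResNet50-derived vectors
labels = rng.integers(0, 6, size=60)   # six sleep postures

fused = serial_fuse(rgb, thermal)
qsvm = SVC(kernel="poly", degree=2)    # a quadratic-kernel SVM
qsvm.fit(fused, labels)
```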
The study highlights the significance of dual-camera systems for improved classification accuracy in sleep posture monitoring. By integrating RGB and thermal imaging, it provides a robust solution that overcomes the limitations of traditional RGB-only methods, contributing to advancements in sleep posture classification with potential implications for patient care and comfort in clinical settings. The paper is structured to include related work, methodology, results, and discussion, culminating in a comprehensive overview of the findings and their relevance to the field.
Wang and Noto Susanto [14] proposed a novel approach for predicting traffic flow from heterogeneous spatiotemporal data, employing a hybrid deep learning model that incorporates an attention mechanism. The study addresses significant challenges faced in intelligent transportation systems (ITS), particularly the difficulty of accurately predicting traffic flow at the individual road level due to complex spatial and temporal interactions.
The proposed method utilizes a convolutional bidirectional long short-term memory (Conv-BiLSTM) architecture, which effectively captures both spatial and temporal dependencies in traffic data. Unlike previous studies that often overlooked critical factors such as holidays, weather conditions, and vehicle types, this research integrates these variables into the prediction model. The authors emphasize the importance of incorporating recurring monthly periodic patterns, which enhance the accuracy of traffic flow predictions.
The methodology begins with the collection of traffic flow data from the Taiwan National Freeway, supplemented by additional features such as weather and holiday information. The model processes this data using a combination of convolutional layers to extract spatial features and BiLSTM layers to capture temporal characteristics. The attention mechanism is applied throughout the model to dynamically assign importance to different features and time steps, allowing for a more nuanced understanding of traffic patterns.
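A compact PyTorch sketch of the Conv-BiLSTM-with-attention pattern follows; the window length, feature set, and layer widths are illustrative assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn

class ConvBiLSTMAttention(nn.Module):
    """Conv1d over each window extracts local patterns, a Bi-LSTM models
    temporal dependencies, and attention weights the informative time steps."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(32, hidden, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)   # attention scorer per step
        self.out = nn.Linear(2 * hidden, 1)     # next-interval flow

    def forward(self, x):                       # x: (B, T, n_features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        seq, _ = self.bilstm(h)                 # (B, T, 2*hidden)
        w = torch.softmax(self.score(seq), dim=1)
        context = (w * seq).sum(dim=1)          # attention-pooled summary
        return self.out(context).squeeze(-1)

# 12 past intervals; features could be flow, weather, holiday flag, vehicle type.
model = ConvBiLSTMAttention(n_features=4)
pred = model(torch.randn(8, 12, 4))             # (8,) predicted flows
```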
Experimental results demonstrate a significant performance improvement of 21.68% when the vehicle type feature is included in the model. The hybrid approach not only improves prediction accuracy but also provides a more comprehensive framework for understanding the complex dynamics of traffic flow. This research contributes to the growing field of traffic prediction and provides valuable insights for enhancing traffic management strategies in urban settings.
Rahim et al. [15] introduced an enhanced hybrid model that combines Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) networks for identifying individuals through handwriting analysis. This innovative approach addresses the challenges of traditional handwriting recognition systems, which often rely on specific signatures or symbols and are vulnerable to forgery. The study proposes a method that enhances security and accuracy in individual identification by focusing on independent handwriting characteristics.
The methodology consists of five distinct phases: data collection, preprocessing, feature extraction, significant feature selection, and classification. A novel dataset specifically designed for Bengali handwriting (BHW) was created, providing a robust foundation for the study. The research extracted a comprehensive set of 91 features, integrating kinematic, statistical, spatial, and composite characteristics, and employed statistical techniques such as the analysis of variance (ANOVA) F-test and mutual information scores to select the most relevant features and improve efficiency.
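Both selection criteria are available directly in scikit-learn; the sketch below applies them to synthetic stand-in data with the same 91-feature shape, with k = 30 chosen arbitrarily for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 91))            # 91 handwriting features, as in [15]
y = rng.integers(0, 10, size=200)         # writer identities (stand-in labels)

# Rank features by the ANOVA F statistic and keep the strongest k.
anova = SelectKBest(score_func=f_classif, k=30).fit(X, y)
X_anova = anova.transform(X)              # (200, 30)

# Mutual information captures nonlinear relevance the F-test can miss.
mi_scores = mutual_info_classif(X, y, random_state=1)
top_mi = np.argsort(mi_scores)[::-1][:30]  # indices of the 30 most informative
```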
In the classification phase, the authors utilized deep learning models, specifically CNN and BiLSTM, to discern individuals based on their handwriting traits. The hybrid model combines the strengths of both CNN and BiLSTM, using fine motor features for improved classification accuracy. Experimental results demonstrated that the hybrid approach outperformed existing state-of-the-art techniques, validating the effectiveness of the proposed method. Overall, this research represents a significant step forward in handwriting recognition technology, emphasizing the unique characteristics of individual handwriting as a reliable biometric trait.
Finally, Wu et al. [16] presented a novel hand features-based fusion recognition network that enhances multimodal correlation for biometric recognition. The study addresses key challenges in multimodal biometric systems, specifically the need to improve recognition performance using intermodal correlations and addressing issues related to improper weight selection during feature fusion.
The proposed method introduces an enhanced DenseNet architecture that utilizes feature-level fusion to combine information from multiple biometric modalities, including palmprint, palm vein, and finger vein data. The network employs Efficient Channel Attention (ECA-Net) to dynamically adjust the weights of each channel, amplifying the importance of critical features and improving overall recognition performance. In addition, depthwise separable convolution is utilized to reduce the number of training parameters and enhance feature correlation, making the network more efficient and robust.
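The two building blocks are easy to sketch in PyTorch: ECA learns channel weights with a small 1D convolution over the pooled channel descriptor, and a depthwise separable convolution splits spatial and cross-channel mixing. The dimensions below are illustrative, not those of the paper's network.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a 1D conv over the pooled channel
    descriptor learns per-channel weights without dimensionality reduction."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        w = x.mean(dim=(2, 3)).unsqueeze(1)        # (B, 1, C) pooled descriptor
        w = torch.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)
        return x * w                               # reweight channels

def depthwise_separable(cin: int, cout: int) -> nn.Sequential:
    """Depthwise + pointwise convs: far fewer parameters than a full conv."""
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),  # per-channel spatial
        nn.Conv2d(cin, cout, 1),                        # cross-channel mixing
    )

block = nn.Sequential(depthwise_separable(64, 128), ECA())
out = block(torch.randn(2, 64, 56, 56))            # (2, 128, 56, 56)
```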
Experimental evaluations were conducted on four multimodal databases, which included six unimodal databases. The results demonstrated impressive Equal Error Rates (EER) of 0.0149%, 0.0150%, 0.0099%, and 0.0050%, indicating significant improvements in recognition performance compared to existing methods for palmprint, palm vein, and finger vein fusion recognition. The approach is particularly suitable for high-security environments due to its enhanced anti-spoofing capabilities and practical applicability.
Findings highlight the effectiveness of the enhanced DenseNet network in improving biometric recognition through improved feature fusion and inter-modal correlation, paving the way for future advancements in multimodal biometric systems.
This special issue highlights the transformative impact of artificial intelligence in image processing and computer vision. The diverse range of applications and methodologies presented in these papers emphasizes the ongoing research efforts to harness AI for sustainable solutions across various domains. These contributions are expected to inspire further exploration and innovation in this dynamic field.
Acknowledgement: This paper is based on a research grant funded by the Research, Development, and Innovation Authority (RDIA), Kingdom of Saudi Arabia, with grant number 13382-PSU-2023-PSNU-R-3-1-EI. The authors acknowledge the support of Prince Sultan University, Riyadh, Saudi Arabia, for this publication. This research is supported by the Automated Systems and Computing Lab (ASCL), Prince Sultan University, Riyadh, Saudi Arabia.
Funding Statement: This research was funded by the Research, Development, and Innovation Authority (RDIA), Kingdom of Saudi Arabia, with grant number 13382-PSU-2023-PSNU-R-3-1-EI.
Conflicts of Interest: The authors declare no conflicts of interest regarding the present study.
References
1. Banu PKN, Azar AT, Inbarani HH. Fuzzy firefly clustering for tumor and cancer analysis. Int J Model Identif Control. 2017;27(2):92–103.
2. Inbarani HH, Azar AT, Jothi G. Leukemia image segmentation using a hybrid histogram-based soft covering rough K-Means clustering algorithm. Electronics. 2020;9(1):188. doi:10.3390/electronics9010188.
3. Ganesan J, Azar AT, Alsenan S, Kamal NA, Qureshi B, Hassanien AE. Deep learning reader for visually impaired. Electronics. 2022;11(20):3335. doi:10.3390/electronics11203335.
4. Khairnar S, Gite S, Pradhan B, Thepade SD, Alamri A. Optimizing CNN architectures for face liveness detection: performance, efficiency, and generalization across datasets. Comput Model Eng Sci. 2025;143(3):3677–707. doi:10.32604/cmes.2025.058855.
5. Alshahrani M, Al-Jabbar M, Senan EM, Amer FA, Almahri J, Almalki SA. Hybrid models of multi-CNN features with ACO algorithm for MRI analysis for early detection of multiple sclerosis. Comput Model Eng Sci. 2025;143(3):3639–75. doi:10.32604/cmes.2025.064668.
6. Singh M, Singla SK. EFI-SATL: an EfficientNet and self-attention based biometric recognition for finger-vein using deep transfer learning. Comput Model Eng Sci. 2025;142(3):3003–29. doi:10.32604/cmes.2025.060863.
7. Lu J, Huang X, Li C, Xin R, Zhang S, Emam M. Multi-stage-based Siamese neural network for seal image recognition. Comput Model Eng Sci. 2025;142(1):405–23. doi:10.32604/cmes.2024.058121.
8. Mahdi MA, Ahamad S, Saad SA, Dafhalla A, Alqushaibi A, Qureshi R. Segmentation of head and neck tumors using dual PET/CT imaging: comparative analysis of 2D, 2.5D, and 3D approaches using UNet transformer. Comput Model Eng Sci. 2024;141(3):2351–73. doi:10.32604/cmes.2024.055723.
9. Mahajan A, Singla SK. DeepBio: a deep CNN and Bi-LSTM learning for person identification using ear biometrics. Comput Model Eng Sci. 2024;141(2):1623–49. doi:10.32604/cmes.2024.054468.
10. Xiang S, Ma J, Shang Q, Wang X, Chen D. Two-layer attention feature pyramid network for small object detection. Comput Model Eng Sci. 2024;141(1):713–31. doi:10.32604/cmes.2024.052759.
11. Said Y, Alassaf Y, Ghodhbani R, Alsariera YA, Saidani T, Ben Rhaiem O, et al. AI-based helmet violation detection for traffic management system. Comput Model Eng Sci. 2024;141(1):733–49. doi:10.32604/cmes.2024.052369.
12. Das K, Abid F, Rasheed J, Kamlish, Asuroglu T, Alsubai S, Soomro S. Enhancing communication accessibility: UrSL-CNN approach to Urdu sign language translation for hearing-impaired individuals. Comput Model Eng Sci. 2024;141(1):689–711. doi:10.32604/cmes.2024.051335.
13. Khan A, Kim C, Kim J-Y, Aziz A, Nam Y. Sleep posture classification using RGB and thermal cameras based on deep learning model. Comput Model Eng Sci. 2024;140(2):1729–55. doi:10.32604/cmes.2024.049618.
14. Wang J-D, Noto Susanto CO. Traffic flow prediction with heterogeneous spatiotemporal data based on a hybrid deep learning model using attention-mechanism. Comput Model Eng Sci. 2024;140(2):1711–28. doi:10.32604/cmes.2024.048955.
15. Rahim MA, Al Farid F, Miah ASM, Puza AK, Alam MN, Hossain MN, et al. An enhanced hybrid model based on CNN and BiLSTM for identifying individuals via handwriting analysis. Comput Model Eng Sci. 2024;140(2):1689–710. doi:10.32604/cmes.2024.048714.
16. Wu W, Zhang Y, Li Y, Li C, Hao Y. A hand features based fusion recognition network with enhancing multi-modal correlation. Comput Model Eng Sci. 2024;140(1):537–55. doi:10.32604/cmes.2024.049174.
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.