Search Results (8)
  • Open Access

    ARTICLE

    An Improved Knowledge Distillation Algorithm and Its Application to Object Detection

    Min Yao, Guofeng Liu, Yaozu Zhang, Guangjie Hu

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 2189-2205, 2025, DOI:10.32604/cmc.2025.060609 - 16 April 2025

    Abstract: Knowledge distillation (KD) is an emerging model compression technique for learning compact object detector models. Previous KD often focused solely on distilling from the logits layer or the intermediate feature layers, which may limit the comprehensive learning of the student network. Additionally, the imbalance between the foreground and background also affects the performance of the model. To address these issues, this paper employs feature-based distillation to enhance the detection performance of the bounding box localization part, and logit-based distillation to improve the detection performance of the category prediction part. Specifically, for the intermediate layer feature…
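
    The split between logit-based and feature-based distillation described above can be made concrete with a short, generic sketch (an illustration of the general technique, not the paper's algorithm; the temperature, loss weights, and 1x1 channel adapter are assumptions):

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, student_feat, teacher_feat,
            adapter, T=4.0, alpha=0.5, beta=1.0):
    """Generic detector KD loss: soft-label (logit) term + feature-imitation term."""
    # Logit-based distillation: KL divergence between temperature-softened
    # class distributions of student and teacher.
    logit_term = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Feature-based distillation: match intermediate feature maps after a
    # 1x1 convolution that projects student channels to the teacher's width.
    feature_term = F.mse_loss(adapter(student_feat), teacher_feat)
    return alpha * logit_term + beta * feature_term

# Example adapter: a 256-channel student feature map distilled toward a
# 512-channel teacher feature map (channel counts are illustrative).
adapter = torch.nn.Conv2d(256, 512, kernel_size=1)
```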

  • Open Access

    REVIEW

    A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment

    Muhammad Arif, Muhammad Rashid

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 13-64, 2025, DOI:10.32604/cmc.2025.062819 - 26 March 2025

    Abstract: Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network’s edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML’s applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides…
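
    One of the model-conversion workflows this kind of review covers is post-training integer quantization for a TinyML runtime. The sketch below is a generic example using the public TensorFlow Lite converter API on a placeholder Keras model with random calibration data; it is not taken from the review itself:

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for whatever network is being deployed.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Calibration samples drive the int8 quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# The resulting flatbuffer can be flashed alongside a microcontroller runtime.
with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```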

  • Open Access

    ARTICLE

    Optimizing BERT for Bengali Emotion Classification: Evaluating Knowledge Distillation, Pruning, and Quantization

    Md Hasibur Rahman, Mohammed Arif Uddin, Zinnat Fowzia Ria, Rashedur M. Rahman

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.2, pp. 1637-1666, 2025, DOI:10.32604/cmes.2024.058329 - 27 January 2025

    Abstract: The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT’s size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We have explored 20 combinations…
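
    Two of the compression techniques named here, magnitude pruning and post-training dynamic quantization, can be illustrated with standard PyTorch utilities on a stand-in classifier. This is a generic sketch, not one of the paper's 20 evaluated combinations; the layer sizes and the 30% pruning ratio are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a BERT-style classifier; in practice this would be a
# pretrained transformer loaded from an NLP library.
model = nn.Sequential(
    nn.Linear(768, 3072), nn.GELU(),
    nn.Linear(3072, 768), nn.Linear(768, 6),
)

# Unstructured magnitude pruning: zero the 30% smallest weights in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Post-training dynamic quantization: int8 weights for all Linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```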

  • Open Access

    ARTICLE

    DPAL-BERT: A Faster and Lighter Question Answering Model

    Lirong Yin, Lei Wang, Zhuohang Cai, Siyu Lu, Ruiyang Wang, Ahmed AlSanad, Salman A. AlQahtani, Xiaobing Chen, Zhengtong Yin, Xiaolu Li, Wenfeng Zheng

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 771-786, 2024, DOI:10.32604/cmes.2024.052622 - 20 August 2024

    Abstract: Recent advancements in natural language processing have given rise to numerous pre-trained language models in question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to increased training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It also proposes a novel Distillation model for PAL-BERT (DPAL-BERT), which employs knowledge distillation, using the PAL-BERT model as the teacher to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. This research enhances the dataset…
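
    Since the central claim here is about inference time, a simple way to quantify "faster" is to benchmark the teacher and the distilled student under identical conditions. The helper below is an illustrative CPU-latency measurement, not the paper's evaluation protocol:

```python
import time
import torch

@torch.no_grad()
def mean_latency_ms(model, example_input, n_warmup=10, n_runs=100):
    """Average single-batch CPU latency in milliseconds."""
    model.eval()
    for _ in range(n_warmup):          # warm-up to stabilize caches/allocations
        model(example_input)
    start = time.perf_counter()
    for _ in range(n_runs):
        model(example_input)
    return (time.perf_counter() - start) / n_runs * 1000.0

# Usage (hypothetical teacher/student models and batch):
# speedup = mean_latency_ms(teacher, batch) / mean_latency_ms(student, batch)
```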

  • Open Access

    ARTICLE

    A Novel Quantization and Model Compression Approach for Hardware Accelerators in Edge Computing

    Fangzhou He, Ke Ding, Dingjiang Yan, Jie Li, Jiajun Wang, Mingzhe Chen

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 3021-3045, 2024, DOI:10.32604/cmc.2024.053632 - 15 August 2024

    Abstract: Massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach on the basis of PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT)…
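
    The core idea of PoT quantization is that every weight is rounded to a signed power of two, so multiplications reduce to bit shifts on integer hardware. The NumPy sketch below shows only that rounding step; it is a generic illustration, not the IOS-PoT scheme proposed in the paper, and the 4-bit exponent range is an assumption:

```python
import numpy as np

def pot_quantize(w, n_bits=4):
    """Round each weight to the nearest signed power of two."""
    sign = np.sign(w)
    w_abs = np.maximum(np.abs(w), 1e-12)          # avoid log2(0)
    exponent = np.round(np.log2(w_abs))           # nearest power of two in log domain
    # Restrict exponents to a small window so they fit an n_bits code (illustrative).
    max_exp = np.max(exponent)
    exponent = np.clip(exponent, max_exp - (2 ** (n_bits - 1) - 1), max_exp)
    w_q = sign * np.exp2(exponent)
    return np.where(np.abs(w) < 1e-12, 0.0, w_q)

w = np.random.randn(4, 4) * 0.1
print(pot_quantize(w))
```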

  • Open Access

    ARTICLE

    Optimized Binary Neural Networks for Road Anomaly Detection: A TinyML Approach on Edge Devices

    Amna Khatoon, Weixing Wang, Asad Ullah, Limin Li, Mengfei Wang

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 527-546, 2024, DOI:10.32604/cmc.2024.051147 - 18 July 2024

    Abstract: Integrating Tiny Machine Learning (TinyML) with edge computing in remotely sensed images enhances the capabilities of road anomaly detection on a broader level. Constrained devices efficiently implement a Binary Neural Network (BNN) for road feature extraction, utilizing quantization and compression through a pruning strategy. The modifications resulted in a 28-fold decrease in memory usage and a 25% enhancement in inference speed, while only experiencing a 2.5% decrease in accuracy. The approach showcases its superiority over conventional detection algorithms in different road image scenarios. Although constrained by computing resources and training datasets, our results indicate opportunities for…
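
    The defining operation of a BNN is constraining weights to ±1 in the forward pass while letting gradients pass through a straight-through estimator during training. Below is a minimal PyTorch sketch of that mechanism as a generic binary layer, not the paper's network architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # note: sign(0) = 0; real BNNs usually map 0 to +1

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Pass gradients straight through, zeroed where |w| > 1 (hard-tanh style clip).
        return grad_output * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)

layer = BinaryLinear(64, 8)
out = layer(torch.randn(2, 64))
```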

  • Open Access

    ARTICLE

    Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression

    Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh

    CMES-Computer Modeling in Engineering & Sciences, Vol.138, No.3, pp. 2519-2547, 2024, DOI:10.32604/cmes.2023.030806 - 15 December 2023

    Abstract: Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-Rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression…
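
    To make the evolutionary-search idea concrete, here is a toy genetic algorithm over hypothetical compression settings (pruning ratio and weight bit-width). The encoding, mutation, and fitness proxy are invented purely for illustration; the authors' EA, search space, and objective are not reproduced here:

```python
import random

def fitness(cfg):
    """Toy objective favoring accurate-but-small configurations.
    In practice this would train/evaluate a compressed CNN on chest X-rays."""
    prune_ratio, bits = cfg
    accuracy_proxy = 1.0 - 0.4 * prune_ratio - 0.02 * (8 - bits)
    size_proxy = (1.0 - prune_ratio) * bits / 8.0
    return accuracy_proxy - 0.5 * size_proxy

def random_cfg():
    return (random.uniform(0.0, 0.9), random.choice([2, 4, 8]))

def mutate(cfg):
    prune_ratio, bits = cfg
    new_ratio = min(0.9, max(0.0, prune_ratio + random.gauss(0, 0.05)))
    new_bits = random.choice([2, 4, 8]) if random.random() < 0.2 else bits
    return (new_ratio, new_bits)

population = [random_cfg() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best configuration:", max(population, key=fitness))
```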

  • Open Access

    ARTICLE

    A Secure and Effective Energy-Aware Fixed-Point Quantization Scheme for Asynchronous Federated Learning

    Zerui Zhen, Zihao Wu, Lei Feng, Wenjing Li, Feng Qi, Shixuan Guo

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 2939-2955, 2023, DOI:10.32604/cmc.2023.036505 - 31 March 2023

    Abstract: Asynchronous federated learning (AsynFL) can effectively mitigate the impact of heterogeneity of edge nodes on joint training while satisfying participant user privacy protection and data security. However, the frequent exchange of massive data can lead to excess communication overhead between edge and central nodes, regardless of whether the federated learning (FL) algorithm uses synchronous or asynchronous aggregation. Therefore, there is an urgent need for a method that can simultaneously take into account device heterogeneity and edge node energy consumption reduction. This paper proposes a novel Fixed-point Asynchronous Federated Learning (FixedAsynFL) algorithm, which could mitigate the…
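
    The communication-saving step in a fixed-point FL scheme of this kind is easy to sketch generically: clients scale float updates to small integers before upload, and the server rescales them before aggregation. The NumPy example below is illustrative only; the 16-bit width and 12 fractional bits are assumptions, not FixedAsynFL's actual parameters:

```python
import numpy as np

def to_fixed_point(update, frac_bits=12, total_bits=16):
    """Client side: encode a float update as signed fixed-point integers."""
    scale = 2 ** frac_bits
    q_max = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(update * scale), -q_max - 1, q_max)
    return q.astype(np.int16), frac_bits

def from_fixed_point(q, frac_bits):
    """Server side: decode back to float before aggregation."""
    return q.astype(np.float32) / (2 ** frac_bits)

update = np.random.randn(1000).astype(np.float32) * 0.01
q, frac = to_fixed_point(update)
recovered = from_fixed_point(q, frac)
print("max quantization error:", np.max(np.abs(update - recovered)))
```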
