Home / Advanced Search

  • Title/Keywords

  • Author/Affliations

  • Journal

  • Article Type

  • Start Year

  • End Year

Update SearchingClear
  • Articles
  • Online
Search Results (20)
  • Open Access

    REVIEW

    Large Language Model-Driven Knowledge Discovery for Designing Advanced Micro/Nano Electrocatalyst Materials

    Ying Shen1, Shichao Zhao1, Yanfei Lv1, Fei Chen1, Li Fu1,*, Hassan Karimi-Maleh2,*

    CMC-Computers, Materials & Continua, Vol.84, No.2, pp. 1921-1950, 2025, DOI:10.32604/cmc.2025.067427 - 03 July 2025

    Abstract This review presents a comprehensive and forward-looking analysis of how Large Language Models (LLMs) are transforming knowledge discovery in the rational design of advanced micro/nano electrocatalyst materials. Electrocatalysis is central to sustainable energy and environmental technologies, but traditional catalyst discovery is often hindered by high complexity, fragmented knowledge, and inefficiencies. LLMs, particularly those based on Transformer architectures, offer unprecedented capabilities in extracting, synthesizing, and generating scientific knowledge from vast unstructured textual corpora. This work provides the first structured synthesis of how LLMs have been leveraged across various electrocatalysis tasks, including automated information extraction from literature,… More >

  • Open Access

    ARTICLE

    Rethinking Chart Understanding Using Multimodal Large Language Models

    Andreea-Maria Tanasă, Simona-Vasilica Oprea*

    CMC-Computers, Materials & Continua, Vol.84, No.2, pp. 2905-2933, 2025, DOI:10.32604/cmc.2025.065421 - 03 July 2025

    Abstract Extracting data from visually rich documents and charts using traditional methods that rely on OCR-based parsing poses multiple challenges, including layout complexity in unstructured formats, limitations in recognizing visual elements, and the correlation between different parts of the documents, as well as domain-specific semantics. Simply extracting text is not sufficient; advanced reasoning capabilities are proving to be essential to analyze content and answer questions accurately. This paper aims to evaluate the ability of the Large Language Models (LLMs) to correctly answer questions about various types of charts, comparing their performance when using images as input… More >

  • Open Access

    ARTICLE

    Transformer-Enhanced Intelligent Microgrid Self-Healing: Integrating Large Language Models and Adaptive Optimization for Real-Time Fault Detection and Recovery

    Qiang Gao1, Lei Shen1,*, Jiaming Shi2, Xinfa Gu2, Shanyun Gu1, Yuwei Ge1, Yang Xie1, Xiaoqiong Zhu1, Baoguo Zang1, Ming Zhang1, Muhammad Shahzad Nazir2, Jie Ji2

    Energy Engineering, Vol.122, No.7, pp. 2767-2800, 2025, DOI:10.32604/ee.2025.065600 - 27 June 2025

    Abstract The rapid proliferation of renewable energy integration and escalating grid operational complexity have intensified demands for resilient self-healing mechanisms in modern power systems. Conventional approaches relying on static models and heuristic rules exhibit limitations in addressing dynamic fault propagation and multi-modal data fusion. This study proposes a Transformer-enhanced intelligent microgrid self-healing framework that synergizes large language models (LLMs) with adaptive optimization, achieving three key innovations: (1) A hierarchical attention mechanism incorporating grid impedance characteristics for spatiotemporal feature extraction, (2) Dynamic covariance estimation Kalman filtering with wavelet packet energy entropy thresholds (Daubechies-4 basis, 6-level decomposition), and… More >

  • Open Access

    ARTICLE

    Adversarial Prompt Detection in Large Language Models: A Classification-Driven Approach

    Ahmet Emre Ergün, Aytuğ Onan*

    CMC-Computers, Materials & Continua, Vol.83, No.3, pp. 4855-4877, 2025, DOI:10.32604/cmc.2025.063826 - 19 May 2025

    Abstract Large Language Models (LLMs) have significantly advanced human-computer interaction by improving natural language understanding and generation. However, their vulnerability to adversarial prompts–carefully designed inputs that manipulate model outputs–presents substantial challenges. This paper introduces a classification-based approach to detect adversarial prompts by utilizing both prompt features and prompt response features. Eleven machine learning models were evaluated based on key metrics such as accuracy, precision, recall, and F1-score. The results show that the Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) cascade model delivers the best performance, especially when using prompt features, achieving an accuracy of over 97% in… More >

  • Open Access

    REVIEW

    An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embedding’s, and Multimodal Architectures

    Uddagiri Sirisha1,*, Chanumolu Kiran Kumar2, Revathi Durgam3, Poluru Eswaraiah4, G Muni Nagamani5

    CMC-Computers, Materials & Continua, Vol.83, No.3, pp. 4031-4059, 2025, DOI:10.32604/cmc.2025.063721 - 19 May 2025

    Abstract A complete examination of Large Language Models’ strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose LLM application taxonomy by grouping techniques by task focus—healthcare,… More >

  • Open Access

    ARTICLE

    Causal Representation Enhances Cross-Domain Named Entity Recognition in Large Language Models

    Jiahao Wu1,2, Jinzhong Xu1, Xiaoming Liu1,*, Guan Yang1,3, Jie Liu4

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 2809-2828, 2025, DOI:10.32604/cmc.2025.061359 - 16 April 2025

    Abstract Large language models cross-domain named entity recognition task in the face of the scarcity of large language labeled data in a specific domain, due to the entity bias arising from the variation of entity information between different domains, which makes large language models prone to spurious correlations problems when dealing with specific domains and entities. In order to solve this problem, this paper proposes a cross-domain named entity recognition method based on causal graph structure enhancement, which captures the cross-domain invariant causal structural representations between feature representations of text sequences and annotation sequences by establishing… More >

  • Open Access

    ARTICLE

    Multilingual Text Summarization in Healthcare Using Pre-Trained Transformer-Based Language Models

    Josua Käser1, Thomas Nagy1, Patrick Stirnemann1, Thomas Hanne2,*

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 201-217, 2025, DOI:10.32604/cmc.2025.061527 - 26 March 2025

    Abstract We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization on German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform the task of abstractive text summarization in the healthcare field. The research hypothesis was that large language models could perform high-quality abstractive text summarization on German technical healthcare texts, even if the model is not specifically trained in that language. Through experiments, the research questions explore the performance of transformer language models in dealing with complex syntax constructs, the difference… More >

  • Open Access

    ARTICLE

    Smart Contract Vulnerability Detection Using Large Language Models and Graph Structural Analysis

    Ra-Yeon Choi1, Yeji Song2, Minsoo Jang1, Taekyung Kim3, Jinhyun Ahn4,*, Dong-Hyuk Im5,*

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 785-801, 2025, DOI:10.32604/cmc.2025.061185 - 26 March 2025

    Abstract Smart contracts are self-executing programs on blockchains that manage complex business logic with transparency and integrity. However, their immutability after deployment makes programming errors particularly critical, as such errors can be exploited to compromise blockchain security. Existing vulnerability detection methods often rely on fixed rules or target specific vulnerabilities, limiting their scalability and adaptability to diverse smart contract scenarios. Furthermore, natural language processing approaches for source code analysis frequently fail to capture program flow, which is essential for identifying structural vulnerabilities. To address these limitations, we propose a novel model that integrates textual and structural… More >

  • Open Access

    ARTICLE

    Amalgamation of Classical and Large Language Models for Duplicate Bug Detection: A Comparative Study

    Sai Venkata Akhil Ammu1, Sukhjit Singh Sehra1,*, Sumeet Kaur Sehra2, Jaiteg Singh3

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 435-453, 2025, DOI:10.32604/cmc.2025.057792 - 26 March 2025

    Abstract Duplicate bug reporting is a critical problem in the software repositories’ mining area. Duplicate bug reports can lead to redundant efforts, wasted resources, and delayed software releases. Thus, their accurate identification is essential for streamlining the bug triage process mining area. Several researchers have explored classical information retrieval, natural language processing, text and data mining, and machine learning approaches. The emergence of large language models (LLMs) (ChatGPT and Huggingface) has presented a new line of models for semantic textual similarity (STS). Although LLMs have shown remarkable advancements, there remains a need for longitudinal studies to… More >

  • Open Access

    ARTICLE

    Quantitative Assessment of Generative Large Language Models on Design Pattern Application

    Dae-Kyoo Kim*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 3843-3872, 2025, DOI:10.32604/cmc.2025.062552 - 06 March 2025

    Abstract Design patterns offer reusable solutions for common software issues, enhancing quality. The advent of generative large language models (LLMs) marks progress in software development, but their efficacy in applying design patterns is not fully assessed. The recent introduction of generative large language models (LLMs) like ChatGPT and CoPilot has demonstrated significant promise in software development. They assist with a variety of tasks including code generation, modeling, bug fixing, and testing, leading to enhanced efficiency and productivity. Although initial uses of these LLMs have had a positive effect on software development, their potential influence on the… More >

Displaying 1-10 on page 1 of 20. Per Page