Open Access

ARTICLE

Graph-Embedded Neural Architecture Search: A Variational Approach for Optimized Model Design

Kazuki Hemmi1,2,*, Yuki Tanigaki3, Kaisei Hara4, Masaki Onishi1,2

1 Degree Programs in Systems and Information Engineering, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8577, Japan
2 Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba, Ibaraki, 305-8568, Japan
3 Department of Electronics and Information Systems Engineering, Osaka Institute of Technology, 5-16-1 Omiya, Asahi-ku, Osaka, 535-8585, Japan
4 Department of Electronics and Information Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka, Nagaoka, Niigata, 940-2188, Japan

* Corresponding Author: Kazuki Hemmi.

(This article belongs to the Special Issue: Neural Architecture Search: Optimization, Efficiency and Application)

Computers, Materials & Continua 2025, 84(2), 2245-2271. https://doi.org/10.32604/cmc.2025.064969

Abstract

Neural architecture search (NAS) optimizes neural network architectures to align with specific data and objectives, enabling the design of high-performance models without specialized expertise. However, a significant limitation of NAS is that it requires extensive computational resources and time, so performing a comprehensive architecture search for each new dataset is inefficient. Given the continuous expansion of available datasets, there is an urgent need to predict the optimal architecture for previously unseen datasets. This study proposes a novel framework that generates architectures tailored to unknown datasets by mapping architectures that have proven effective on existing datasets into a latent feature space. Because neural architectures are inherently represented as graph structures, we employ an encoder-decoder model based on variational graph auto-encoders to perform this latent feature mapping. Such encoder-decoder models are strong at extracting features from graph structures, making them particularly well suited to embedding NAS architectures. By training variational graph auto-encoders on existing high-quality architectures, the proposed method constructs a latent space that facilitates the design of optimal architectures for diverse datasets. Furthermore, to define similarity among architectures effectively, we propose constructing the latent space by incorporating both dataset and task features. Experimental results indicate that our approach significantly improves search efficiency and outperforms conventional methods in terms of model performance.
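For readers unfamiliar with variational graph auto-encoders, the following minimal PyTorch sketch illustrates the generic encoder-decoder machinery the abstract refers to: a GCN encoder producing a per-node mean and log-variance, a reparameterized latent sample, and an inner-product decoder that reconstructs the adjacency matrix. This is an illustrative reconstruction of the standard VGAE formulation (Kipf & Welling, 2016), not the authors' implementation; all class, variable, and dimension names here are our own assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    """Variational graph auto-encoder: GCN encoder + inner-product decoder."""

    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)         # shared GCN layer
        self.w_mu = nn.Linear(hid_dim, lat_dim, bias=False)      # mean head
        self.w_logvar = nn.Linear(hid_dim, lat_dim, bias=False)  # log-variance head

    def encode(self, x, a_hat):
        h = F.relu(a_hat @ self.w0(x))                # one GCN propagation step
        return a_hat @ self.w_mu(h), a_hat @ self.w_logvar(h)

    def forward(self, x, a_hat):
        mu, logvar = self.encode(x, a_hat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        adj_recon = torch.sigmoid(z @ z.t())          # inner-product decoder
        return adj_recon, mu, logvar

def vgae_loss(adj_recon, adj_target, mu, logvar):
    recon = F.binary_cross_entropy(adj_recon, adj_target)            # edge reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())    # KL to N(0, I)
    return recon + kl

# Toy usage: one 5-node "architecture" graph with 8-dim operation features.
n = 5
x = torch.randn(n, 8)                                    # hypothetical node features
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(1.0)  # symmetric, with self-loops
deg = adj.sum(dim=1)
a_hat = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))  # D^-1/2 A D^-1/2

model = VGAE(in_dim=8, hid_dim=16, lat_dim=4)
adj_recon, mu, logvar = model(x, a_hat)
print(vgae_loss(adj_recon, adj, mu, logvar).item())

In the paper's setting, the node features would encode operations and the latent space would additionally incorporate dataset and task features, per the abstract; this sketch shows only the generic VGAE core.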

Keywords

Neural architecture search; automated machine learning; artificial intelligence; deep learning; graph neural network

Cite This Article

APA Style
Hemmi, K., Tanigaki, Y., Hara, K., & Onishi, M. (2025). Graph-Embedded Neural Architecture Search: A Variational Approach for Optimized Model Design. Computers, Materials & Continua, 84(2), 2245–2271. https://doi.org/10.32604/cmc.2025.064969
Vancouver Style
Hemmi K, Tanigaki Y, Hara K, Onishi M. Graph-Embedded Neural Architecture Search: A Variational Approach for Optimized Model Design. Comput Mater Contin. 2025;84(2):2245–2271. https://doi.org/10.32604/cmc.2025.064969
IEEE Style
K. Hemmi, Y. Tanigaki, K. Hara, and M. Onishi, “Graph-Embedded Neural Architecture Search: A Variational Approach for Optimized Model Design,” Comput. Mater. Contin., vol. 84, no. 2, pp. 2245–2271, 2025. https://doi.org/10.32604/cmc.2025.064969



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.