Open Access
ARTICLE
Lightweight Complex-Valued Neural Network for Indoor Positioning
1 College of Command and Control Engineering, Army Engineering University of PLA, Nanjing, 210007, China
2 Purple Mountain Laboratories, Nanjing, 211111, China
* Corresponding Author: Bing Xu. Email:
Computers, Materials & Continua 2026, 86(2), 1-14. https://doi.org/10.32604/cmc.2025.070794
Received 24 July 2025; Accepted 01 October 2025; Issue published 09 December 2025
Abstract
Deep learning has been recognized as an effective method for indoor positioning. However, most existing real-valued neural networks (RVNNs) treat the two constituent components of complex-valued channel state information (CSI) as real-valued inputs, potentially discarding useful information embedded in the original CSI. In addition, existing positioning models generally face a trade-off between computational complexity and positioning accuracy. To address these issues, we combine a graph neural network (GNN) with a complex-valued neural network (CVNN) to construct a lightweight indoor positioning model named CGNet. CGNet employs complex-valued convolution operations to process the original CSI data directly, fully exploiting the correlation between the real and imaginary parts of CSI while extracting local features. Subsequently, the feature values are treated as nodes, and a conditional position encoding (CPE) module is applied to add positional information. To reduce the number of connections in the graph structure and lower the model complexity, feature information is mapped to an efficient graph structure through a dynamic axial graph construction (DAGC) method, with global features extracted using maximum relative graph convolution (MRConv). Experimental results show that, on the CTW dataset, CGNet achieves a 10% improvement in positioning accuracy over existing methods while using only 0.8 M parameters. CGNet thus achieves excellent positioning accuracy with very few parameters.
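To illustrate the idea of processing CSI with complex-valued convolutions rather than stacking real and imaginary parts as independent real channels, the following is a minimal sketch of a complex-valued 2D convolution layer. It is not the authors' implementation; the layer name, the CSI tensor shape (batch, antennas, subcarriers, snapshots), and the channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ComplexConv2d(nn.Module):
    """Illustrative complex-valued 2D convolution (not the CGNet code).

    For complex input z = x_r + j*x_i and complex weights W = W_r + j*W_i,
    the complex convolution is
        z * W = (x_r * W_r - x_i * W_i) + j * (x_r * W_i + x_i * W_r),
    so the real and imaginary parts of the CSI stay coupled instead of
    being treated as two unrelated real-valued inputs.
    """

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # Real and imaginary parts of the complex kernel, each a real conv.
        self.conv_r = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)
        self.conv_i = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)

    def forward(self, x_real, x_imag):
        real = self.conv_r(x_real) - self.conv_i(x_imag)
        imag = self.conv_r(x_imag) + self.conv_i(x_real)
        return real, imag


if __name__ == "__main__":
    # Toy CSI tensor: 4 samples, 16 antenna channels, 64 subcarriers, 1 snapshot.
    csi_real = torch.randn(4, 16, 64, 1)
    csi_imag = torch.randn(4, 16, 64, 1)
    layer = ComplexConv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
    out_r, out_i = layer(csi_real, csi_imag)
    print(out_r.shape, out_i.shape)  # torch.Size([4, 32, 64, 1]) each
```

The output real/imaginary feature maps could then be combined (e.g., via their magnitude) and treated as graph nodes for the subsequent CPE, DAGC, and MRConv stages described in the abstract.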
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

