
Open Access

ARTICLE

WCCN: An Efficient and Stable Neural Network Architecture for Complex-Valued Deep Learning

Bing-Zhou Chen1,2, Hai-Ying Zheng1,2, Ao-Wen Wang1,3, Ke-Lei Xia1,2, Li-Feng Fan1,3, Zhong-Yi Wang1,3, Lan Huang1,2,*
1 College of Information and Electrical Engineering, China Agricultural University, Beijing, China
2 Key Laboratory of Agricultural Information Acquisition Technology (Beijing), Ministry of Agriculture, Beijing, China
3 Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, Beijing, China
* Corresponding Author: Lan Huang. Email: email

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.078894

Received 09 January 2026; Accepted 16 March 2026; Published online 27 April 2026

Abstract

Many sensing and imaging modalities naturally yield complex-valued signals, in which magnitude and phase jointly convey information. Complex-valued neural networks (CVNNs) possess unique advantages for processing phase-sensitive data (e.g., synthetic aperture radar (SAR) and magnetic resonance imaging (MRI)), yet their widespread adoption is hindered by significant computational overhead and training instability. To address these challenges, this paper presents the Wirtinger Derivative Complete Complex Network (WCCN), a unified and efficient framework for complex-valued deep learning that systematically targets three key challenges in CVNNs: computational efficiency, parameter redundancy, and training stability. WCCN integrates three core components. First, an optimized complex convolution implementation, wcConv, combines the Gauss multiplication trick with a fused tuple-flow processing strategy to enable efficient complex-valued feature extraction, achieving a speedup of roughly 15%–20% over conventional implementations. Second, a Compact Complex Linear (CCL) layer based on low-rank factorization reduces classifier parameters by up to 56.8% while preserving discriminative capacity. Third, a novel complex-valued activation function, wcPReLUJitter, enhances learning stability and effectively mitigates training collapse in deep CVNNs. In addition, a high-redundancy input mapping strategy, termed RTC6, is investigated and systematically compared with existing complex-valued input representations; RTC6 serves as a benchmark for representation analysis rather than an input-efficiency module. Experimental results demonstrate that RTC6 can effectively compensate for the performance degradation caused by aggressive parameter compression.
Extensive evaluations on CIFAR-10 and CIFAR-100 (Canadian Institute for Advanced Research (CIFAR) datasets), Street View House Numbers (SVHN), and SAR datasets show that WCCN achieves competitive performance relative to representative baselines under the experimental and data settings in this paper. Notably, the proposed WCCN-M model achieves 73.17% mean accuracy on CIFAR-100 with significantly fewer parameters than the compared baselines, highlighting its effectiveness for large-scale pattern recognition tasks.
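The Gauss multiplication trick mentioned above is a classical identity that computes a complex product with three real multiplications instead of four; applied channel-wise to convolutions, it is one way to obtain the kind of speedup reported for wcConv. The following is a minimal sketch of the identity itself (not the paper's actual wcConv implementation; function names are illustrative):

```python
import numpy as np

def complex_mul_direct(a, b, c, d):
    """(a + ib)(c + id) via the textbook formula: 4 real multiplications."""
    return a * c - b * d, a * d + b * c

def complex_mul_gauss(a, b, c, d):
    """Same product via the Gauss trick: 3 real multiplications, 5 additions.

    t1 = c(a + b) = ac + bc
    t2 = a(d - c) = ad - ac
    t3 = b(c + d) = bc + bd
    real = t1 - t3 = ac - bd,  imag = t1 + t2 = ad + bc
    """
    t1 = c * (a + b)
    t2 = a * (d - c)
    t3 = b * (c + d)
    return t1 - t3, t1 + t2

# Elementwise check on random arrays (the same identity applies when the
# multiplications are convolutions, since convolution is bilinear).
rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal(8) for _ in range(4))
re1, im1 = complex_mul_direct(a, b, c, d)
re2, im2 = complex_mul_gauss(a, b, c, d)
print(np.allclose(re1, re2) and np.allclose(im1, im2))
```

Because convolution is bilinear, substituting real convolutions for the three products trades one convolution for a few cheap additions, which is where an implementation-level speedup can come from.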

Keywords

Complex-valued neural networks; parameter efficiency; Wirtinger derivatives; input mapping strategy; synthetic aperture radar (SAR) image classification