Open Access

ARTICLE

Deep Learning–Aided Frequency-Modulated Continuous-Wave Radar for Around-the-Corner Non-Line-of-Sight Perception at Urban Intersections

Shih-Lin Lin*, Yi-Hsuan Chen

Graduate Institute of Vehicle Engineering, National Changhua University of Education, Changhua, Taiwan

* Corresponding Author: Shih-Lin Lin.

(This article belongs to the Special Issue: Advances in Deep Learning and Computer Vision for Intelligent Systems: Methods, Applications, and Future Directions)

Computer Modeling in Engineering & Sciences 2026, 147(1), 37 https://doi.org/10.32604/cmes.2026.078862

Abstract

Urban intersections contain severe blind zones where buildings and roadside obstacles block line-of-sight sensing, limiting the ability of autonomous vehicles to anticipate hidden hazards. This paper presents an urban-intersection-oriented non-line-of-sight (NLOS) perception framework that exploits specular reflections from building surfaces using 77 GHz frequency-modulated continuous-wave (FMCW) automotive radar. All evaluations are conducted in a MATLAB-based simulation environment that models intersection geometry, building-induced occlusions, and specular reflection-assisted propagation, and generates 77-GHz FMCW radar echoes under controllable interference; real-world validation with measured radar data and richer multipath/material modeling is planned as future work. To improve robustness under noisy intersection interference, we propose a deep-learning-based mitigation module that restores corrupted radar echoes at the chirp level using a compact AlexNet-derived 1D regression backbone, with minimal architectural changes that insert a residual block after conv2 and apply batch normalization to enhance training stability and suppress interference while preserving informative echo characteristics. The restored echoes are then processed by conventional estimation steps to obtain range and azimuth-related angles. Under severe interference (Noise Factor = 3.0), unmitigated measurements exhibit large errors (root-mean-square error (RMSE) = 5.48 m/18.95°/10.77° for range/azimuth/azimuth deviation). Conventional AlexNet-based mitigation reduces these errors to 0.75 m/0.83°/0.93°, while the proposed improved AlexNet further reduces them to 0.56 m/0.46°/0.73°. The results demonstrate improved signal stability and measurement accuracy, supporting, within the simulation scope of this study, the potential practicality of low-cost NLOS perception for safety-critical autonomous driving at occluded urban intersections, subject to future real-world validation.

Keywords

FMCW Doppler radar; non-line-of-sight (NLOS) perception; urban intersections; specular reflection; interference mitigation; radar-echo restoration; residual learning; batch normalization; AlexNet; autonomous driving

1  Introduction

Urban intersections remain persistent collision hotspots because critical hazards can be occluded by buildings, parked vehicles, roadside furniture, and complex corner geometry. Crash analyses indicate that intersection accidents are influenced by environmental conditions and roadway configurations that degrade visibility and increase conflict complexity, implying that conventional line-of-sight (LOS) sensing is often insufficient for early risk anticipation and proactive intervention [1,2]. For intelligent vehicles and advanced driver-assistance systems, the ability to infer occluded threats is essential when decisions must be made under tight reaction-time constraints.

Deep learning and computer-vision-inspired representation learning have significantly enhanced the ability of intelligent systems to model and interpret high-dimensional sensory data, thereby accelerating multimodal perception in autonomous driving. Radar–camera fusion surveys summarize how complementary sensing enhances object detection and semantic segmentation compared to single-modality pipelines, highlighting practical fusion considerations, such as representation design and calibration [3]. Broader reviews further discuss datasets, methods, and open challenges for deep multi-modal detection and segmentation in real driving deployments [4]. Radar-centric architectures, such as cross-supervised radar object detection, demonstrate that learning can leverage camera-derived supervision to enhance radar perception while meeting real-time requirements [5]. Beyond camera fusion, monovision–millimeter-wave (mmWave) radar combinations have also been explored for vehicle detection and tracking in on-road settings [6]. Reviews of image–point-cloud fusion summarize trends in integrating vision and LiDAR features for robust perception [7]. Recent work further suggests that cross-modal priors can be used to estimate radar-like spatial maps from image/depth/semantic descriptions [8], and that temporal modeling can exploit inter-frame relations to stabilize radar perception over time [9].

Weather robustness further motivates radar-inclusive perception. Deep multimodal fusion has been evaluated in unseen foggy conditions to improve generalization beyond training distributions [10], and complementary LiDAR–radar fusion has been used for fog-robust vehicle detection [11]. On the LiDAR side, adverse-weather signal enhancement and noisy-point-cloud recognition have been explored using filtering and deep learning, including Kalman-filter-based improvement of LiDAR signals [12], noise-robust point-cloud classification in noisy environments [13], and residual-learning-based upgrades to LiDAR 3D classification pipelines [14]. These efforts collectively demonstrate that data-driven models can enhance robustness; however, the fundamental limitation of direct visibility at intersections remains a significant barrier for purely LOS sensing.

Despite progress in multimodal perception, intersection safety remains fundamentally constrained by occlusion. Around-the-corner, non-line-of-sight (NLOS) perception is therefore a safety-critical capability. Multipath propagation—often treated as interference—can encode indirect information about hidden objects. Early work exploited multipath for urban moving-target indication [15] and examined multipath exploitation in non-LOS urban synthetic aperture radar (SAR) settings [16]. Experimental demonstrations showed radar detection of moving targets behind corners [17] and behind-corner sensing with multipath-exploiting ultra-wideband (UWB) radar [18]. Related occluded-imaging research using time-of-flight sensors reinforces the general principle that indirect paths and reflections can support perception beyond direct visibility [19]. More targeted junction-oriented methods demonstrated around-the-corner radar detection and localization [20] and proposed adaptive detection under diffuse multipath for range-distributed targets [21]. In the automotive domain, NLOS multi-target localization for driver-assistance radar exploits multiple-input multiple-output (MIMO)-radar multipath to reduce ghosting and improve localization reliability [22]. Learning-based methods are beginning to address NLOS radar detection directly, including transformer-based NLOS mmWave radar object detection [23] and joint localization of LOS/NLOS targets with clutter mitigation via multipath exploitation [24].

Automotive radar surveys and tutorials emphasize that millimeter-wave FMCW radar enables all-weather operation and direct velocity measurement; however, practical challenges remain in terms of angular resolution, classification, multipath, and mutual interference in dense urban traffic [25,26]. Research-direction articles further stress the importance of interference, calibration, and environmental modeling as key open challenges [27]. A dedicated survey on deep learning for radar highlights learning-based recognition and interference suppression while noting data scarcity and reliability risks [28]. A comprehensive survey of mmWave FMCW radar for automotive perception further summarizes recognition and localization pipelines and emphasizes that sparse and multipath-contaminated measurements can degrade learning-based recognition unless representation and training are carefully designed [29]. Subproblem-focused studies demonstrate that direction of arrival (DoA) estimation can be revisited from a machine-learning perspective [30], that accurate FMCW distance estimation remains crucial for longitudinal safety functions [31], and that array processing (e.g., grating-lobe/sidelobe suppression in distributed MIMO imaging) can enhance imaging quality for dense arrays [32]. Learning-based 4-D radar object detection with data enhancement indicates a trend toward richer radar representations [33], while multi-channel echo separation aims to address high-interference automotive radar scenarios [34]. Practical sensing hardware, including compact 79 GHz microstrip patch arrays, supports all-weather deployments with constraints on cost and form factor [35]. Meanwhile, target classification and short-range detection baselines remain relevant for evaluating improvements, including pedestrian RCS pattern analysis [36], urban classification with 24 GHz FMCW radar [37], 77 GHz feature-based SVM human–vehicle classification [38], and short-range FMCW radar approaches for multi-target human–vehicle detection [39]. Finally, reliable NLOS perception can benefit from resilient positioning and cooperation, such as GNSS-challenged navigation methods [40], cooperative fusion positioning [41], and vehicle-to-infrastructure (V2I) cross-modality cooperation at accident black spots [42], while also requiring robustness against data-fabrication attacks in collaborative perception [43].

Beyond perception, intersection safety also depends on reliable vehicle-state sensing, planning/control, and post-incident assessment. For example, optical radar technology combined with iterative closest point (ICP) has been used for vehicle damage assessment after collisions [44], and model predictive control has been leveraged for optimized path planning in autonomous navigation [45].

Prior studies on radar perception can be grouped into three lines. (i) Classical around-the-corner/non-line-of-sight (NLOS) sensing exploits multipath and reflections for behind-corner detection and localization [15–22]. (ii) For automotive FMCW radar, surveys and learning-based methods address mutual interference, clutter, and denoising to stabilize downstream estimation [25–29,34]. (iii) Public benchmarks and dataset/survey papers provide evaluation context and protocol guidance, including nuScenes [46], radar-centric datasets such as K-Radar [47], Dual Radar [48], recent 4D mmWave radar perception and sensing surveys [49], and V2X-Radar [50], as well as collaborative perception dataset reviews and benchmarks [51,52]. These works motivate our lightweight chirp-level restoration and dual-domain evaluation.

Motivated by these gaps, this paper proposes a deep learning–aided FMCW radar framework for around-the-corner NLOS perception at urban intersections under realistic interference. At the system level, we analyze intersection blind-spot characteristics, derive practical sensor placement/orientation constraints, and exploit specular corner reflections using a three-radar front-bumper configuration with electronically steered beams (−60°, 0°, and +60°) to widen the effective observation sector while preserving cost and latency. At the signal-processing level, we introduce an urban-intersection-oriented interference mitigation module based on an improved AlexNet-derived regression network with residual learning and batch normalization, which restores distortion-corrupted radar echoes prior to measurement extraction. This design improves robustness in noisy intersection scenarios where a conventional AlexNet baseline exhibits unstable outputs and degraded range/azimuth accuracy.

This work makes the following contributions:

1.    Intersection-oriented NLOS sensing simulation framework. We model intersection geometry, building-induced occlusions, and reflection-assisted propagation paths, and incorporate practical sensor-placement constraints to enable feasible around-the-corner observability in urban intersections.

2.    Chirp-level interference mitigation formulation. We formulate an urban-intersection-oriented restoration task that mitigates heterogeneous environmental interference by reconstructing corrupted radar echoes at the chirp level prior to conventional range/azimuth estimation.

3.    Lightweight AlexNet-derived 1D backbone with minimal modifications. We develop a compact AlexNet-derived 1D regression model and introduce residual learning (after conv2) and batch normalization to improve training stability and estimation accuracy under noisy intersection conditions, compared with a conventional AlexNet baseline.

4.    Dual-domain evaluation protocol with practical metrics. We design an evaluation protocol across multiple interference intensities and radar units, combining signal-quality metrics (RMSE and SNR) with measurement-level errors (range, azimuth, and azimuth deviation) to quantify perception benefits.

The remainder of this paper is organized as follows: Section 1 introduces the problem setting and summarizes related work; Section 2 presents the proposed method; Section 3 describes the simulation-based experimental design; Section 4 reports the experimental results and discussion; and Section 5 concludes the paper.

2  Proposed Method

2.1 Radar Detection Method

Radar is an active sensing modality that exploits the round-trip propagation of electromagnetic waves. By transmitting a known waveform and processing the returned echoes, radar can estimate a target’s range, radial velocity (via Doppler shift), and scattering characteristics. Due to its long sensing range and robustness against illumination and adverse weather conditions, radar has become indispensable in aviation, maritime navigation, meteorological monitoring, and intelligent transportation systems. However, most automotive implementations rely on a forward-looking radar with a limited field of view, which inherently leaves occluded regions unobserved. Such blind zones—commonly encountered at urban intersections, sharp corners, and indoor parking structures—pose a major obstacle to reliable perception and safe fully autonomous driving.

To address this limitation, we propose a non-line-of-sight (NLOS) imaging framework that integrates a frequency-modulated continuous-wave (FMCW) radar with specular reflection propagation paths, as illustrated in Fig. 1. By exploiting corner-induced reflections, the proposed approach aims to infer targets that are hidden from direct line-of-sight sensing.


Figure 1: Schematic diagram of radar exploiting mirror (specular) reflection for non-line-of-sight blind-spot detection.

An FMCW radar emits a linear-chirp signal whose instantaneous frequency is given by [53]:

$f(t) = f_0 + \dfrac{B}{T_{\text{chirp}}}\, t$  (1)

where $f_0$ is the start frequency, $B$ the sweep bandwidth, and $T_{\text{chirp}}$ the chirp duration. After reflection from a target, the received signal undergoes a time delay and Doppler shift, yielding [53,54]

$r(t) = \cos\!\left(2\pi\left(f_0 t + \dfrac{B}{2T_{\text{chirp}}}\, t^2 + \Delta f\, t\right)\right)$  (2)

The term $f_0 t$ denotes the carrier component of the transmitted signal, $\frac{B}{2T_{\text{chirp}}} t^2$ represents the linearly increasing frequency sweep, and $\Delta f\, t$ corresponds to the frequency offset induced by the target's radial motion. By analyzing these variations in the received echo, the radar estimates range and velocity: the time delay is proportional to the target's distance, whereas the frequency offset reflects its kinematic state. Accordingly, FMCW radar can concurrently provide accurate range and speed information while exhibiting strong immunity to interference, rendering it well-suited to complex traffic environments [22,25–27,53].
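As a concrete illustration, the following minimal MATLAB sketch synthesizes the chirp of Eqs. (1) and (2) and a delayed, Doppler-shifted echo. The radar parameters follow Section 3.5, while the target range and speed are hypothetical values chosen for demonstration.

```matlab
f0     = 77e9;        % carrier start frequency (Hz), from Section 3.5
B      = 200e6;       % sweep bandwidth (Hz)
Tchirp = 5.5e-6;      % chirp duration (s)
N      = 300;         % samples per chirp
t      = linspace(0, Tchirp, N);

% Transmitted chirp: the phase of Eq. (2) without the Doppler term
tx = cos(2*pi*(f0*t + (B/(2*Tchirp))*t.^2));

c   = 3e8;            % speed of light (m/s)
D   = 12;             % hypothetical target range (m)
v   = 5;              % hypothetical radial speed (m/s)
tau = 2*D/c;          % round-trip delay, Eq. (4)
fd  = 2*v*f0/c;       % Doppler shift standing in for Delta-f

% Received echo: delayed copy with a Doppler offset (attenuation omitted).
% Note: at 77 GHz these arrays are heavily undersampled; in practice only
% the mixed (beat) signal is sampled, as in the next sketch.
rx = cos(2*pi*(f0*(t - tau) + (B/(2*Tchirp))*(t - tau).^2 + fd*t));
```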

In an FMCW radar, the frequency difference—or beat frequency—that arises after mixing the transmitted (Tx) and received (Rx) signals is [53]:

$f_{\text{beat}} = \dfrac{B}{T_{\text{chirp}}} \times \Delta t$  (3)

where $B$ denotes the sweep bandwidth (Hz), $T_{\text{chirp}}$ is the chirp duration (s), and $\Delta t$ is the round-trip propagation time of the waveform. Because this transit time is governed by the target range $D$, it can be expressed as [53,54]:

$\Delta t = \dfrac{2D}{c}$  (4)

with $c = 3\times 10^{8}$ m/s representing the speed of light. Substituting (4) into (3) yields a direct relationship between the measured beat frequency and the target distance, thereby enabling precise range estimation from the observed spectral peak.
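The following minimal MATLAB sketch illustrates this relationship: a noise-free beat tone is synthesized for a hypothetical 12 m target, and its range is recovered from the FFT spectral peak by inverting Eqs. (3) and (4).

```matlab
B      = 200e6;  Tchirp = 5.5e-6;  c = 3e8;
N      = 300;    fs = N/Tchirp;          % effective sampling rate over one chirp
D_true = 12;                             % hypothetical target range (m)
fbeat  = (B/Tchirp) * (2*D_true/c);      % Eq. (3) with Eq. (4) substituted

t    = (0:N-1)/fs;
beat = cos(2*pi*fbeat*t);                % idealized noise-free beat tone

S      = abs(fft(beat));
faxis  = (0:N-1)*fs/N;
[~, k] = max(S(1:floor(N/2)));           % locate the spectral peak
fpeak  = faxis(k);
D_est  = c*fpeak*Tchirp/(2*B);           % invert Eqs. (3)-(4) for range
fprintf('Estimated range: %.2f m\n', D_est);   % ~12.00 m here
```

With these parameters the range resolution is $c/(2B) = 0.75$ m, so the peak bin maps back to the true 12 m range exactly.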

When simulating radar performance in urban environments, geometric occlusion is often used to determine whether a target is obscured by surrounding structures. Let the radar and target positions in the ground plane be denoted by $\mathbf{p}_r$ and $\mathbf{p}_t$, respectively. A parametric line segment connecting the two points is defined as [55]

$\mathbf{p}(t) = \mathbf{p}_r + t\,(\mathbf{p}_t - \mathbf{p}_r)$  (5)

where the scalar parameter $t$ continuously maps every location between radar ($t=0$) and target ($t=1$). If this ray intersects any building footprint—modelled as polygons or rectangles—for some $0 < t < 1$, the direct line-of-sight (LOS) between radar and target is considered obstructed, producing an occlusion that prevents the radar from detecting the target directly [20,55]. Geometric-occlusion checks of this type are integral to sensor layout studies and signal propagation analyses in smart city planning and autonomous driving research. They enable the quantitative assessment of how obstacle distributions constrain radar visibility, thereby supporting high-fidelity performance simulations in realistic urban street scenes.
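A minimal MATLAB sketch of this occlusion test is shown below; it samples the radar-target segment of Eq. (5) and flags the LOS as blocked when any sample falls inside an axis-aligned building footprint. All scene coordinates are hypothetical.

```matlab
pr  = [0, -10];                 % radar position (m)
pt  = [-15, 3];                 % target position (m)
bld = [-20, -5, -2, 20];        % building footprint [xmin xmax ymin ymax]

tt  = linspace(0, 1, 500);                        % parameter t in (0,1)
p   = pr + tt(:) .* (pt - pr);                    % points along the segment, Eq. (5)
inB = p(:,1) >= bld(1) & p(:,1) <= bld(2) & ...
      p(:,2) >= bld(3) & p(:,2) <= bld(4);
losBlocked = any(inB);                            % true -> NLOS case
```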

In automotive radar sensing, angular estimation is crucial for determining the location of obstacles relative to the ego-vehicle. Two angular descriptors are used: the azimuth angle $\theta$ in the horizontal ($xy$) plane and the elevation angle in the vertical ($z$) plane. The global (absolute) azimuth of a target with Cartesian coordinates $(x_t, y_t)$ relative to the radar at $(x_r, y_r)$ is obtained from [25,29]

$\theta_{\text{global}} = \arctan\!\left(\dfrac{y_t - y_r}{x_t - x_r}\right)$  (6)

and is conventionally expressed in either the range $[-180^\circ, 180^\circ]$ or $[0^\circ, 360^\circ]$. The elevation angle (not explicitly required in planar street-level models) describes the target's vertical offset relative to the radar boresight and is derived analogously from the $z$-component.

For motion-compensated tracking, it is often more useful to express the target direction in the radar's own body frame as a relative azimuth. If the sensor (or host vehicle) is yawed by an angle $\theta_{\text{yaw}}$ with respect to the global reference, the raw difference [25,29]

$\theta_{\text{relative}} = \theta_{\text{global}} - \theta_{\text{yaw}}$  (7)

must be wrapped into a continuous interval. Applying a modulo operation followed by a shift yields [25,29]

$\theta_{\text{relative}} = \operatorname{mod}\!\left(\theta_{\text{global}} - \theta_{\text{yaw}} + 180^\circ,\; 360^\circ\right) - 180^\circ$  (8)

so that $\theta_{\text{relative}} \in [-180^\circ, 180^\circ]$ satisfies the right-handed coordinate convention and admits unambiguous left/right discrimination.
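A minimal MATLAB sketch of Eqs. (6)-(8) is given below. It uses atan2d, which directly resolves the quadrant ambiguity that a plain arctangent leaves open; positions and yaw are hypothetical values.

```matlab
xr = 0;   yr = -10;         % radar position (m), hypothetical
xt = -15; yt = 3;           % target position (m), hypothetical
yaw = 90;                   % sensor yaw w.r.t. the global frame (deg)

thetaGlobal   = atan2d(yt - yr, xt - xr);                 % Eq. (6), four-quadrant
thetaRelative = mod(thetaGlobal - yaw + 180, 360) - 180;  % Eq. (8), wrapped to [-180, 180]
```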

In practice, angular accuracy is degraded by multipath, clutter, and sensor noise; therefore, Monte Carlo perturbations are frequently injected into simulation models to emulate measurement uncertainty, while adaptive filtering and interference mitigation techniques are deployed in real-time systems to restore precision. Accurate estimation of both azimuth and elevation thus remains crucial for localization, navigation, and object classification in autonomous driving applications.

In radar-based perception, a target is deemed directly detectable when its range D does not exceed the sensor’s maximum instrumented distance (20 m), and its bearing falls within the radar’s predefined field-of-view limits. Otherwise, the object must be assessed through indirect mechanisms such as reflection or diffraction, or may be disregarded altogether. Even under severe visibility degradation—caused by fog or other environmental interference—the geometrically derived distance serves as an “ideal-condition” reference against which simulated perturbations (random error, attenuation, propagation delay, etc.) can be superimposed to approximate real-world measurements. The range itself is evaluated via the Euclidean metric [55]:

$D = \sqrt{(x_t - x_r)^2 + (y_t - y_r)^2}$  (9)

where $(x_r, y_r)$ and $(x_t, y_t)$ denote the radar and target coordinates, respectively. Owing to its computational simplicity, Eq. (9) facilitates rapid, large-scale assessments of spatial relationships between a radar unit and multiple targets in complex urban streetscapes.
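As a brief illustration of such large-scale assessment, Eq. (9) can be vectorized over all targets in a scene. The sketch below uses the scene coordinates of Section 3.3, assuming the intersection center at the origin so the ego radar sits at (0, −10).

```matlab
radar   = [0, -10];          % ego radar, 10 m south of the intersection center
targets = [-15, -3;          % bicycle (Section 3.3)
             2,  5;          % passenger car
            14,  3];         % pedestrian
D = sqrt(sum((targets - radar).^2, 2));   % Eq. (9): one range per target (m)
```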

The root-mean-square error (RMSE) is an established metric for quantifying estimation accuracy in radar signal processing; a lower value indicates closer agreement between an algorithm’s output and the ground truth. It is defined as [54]

$\text{RMSE} = \sqrt{\dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2}$  (10)

where $x_i$ and $\hat{x}_i$ denote the $i$th reference and estimated samples, respectively, and $N$ is the total number of observations. The present study employs RMSE in two complementary ways:

1.    Signal-domain assessment—RMSE is computed for the original echoes, the echoes corrupted by environmental interference, and the echoes after neural-network–based interference suppression, thereby quantifying the improvement in waveform fidelity.

2.    Measurement-domain assessment—Separate RMSE values are calculated for range, azimuth, and relative bearing before and after processing, thus substantiating the contribution of the proposed model to overall sensing accuracy.

This two-tier evaluation framework rigorously validates the effectiveness and practicality of the interference-mitigation approach under realistic radar-measurement conditions.
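A minimal MATLAB sketch of this two-tier evaluation is given below; the waveforms and range estimates are placeholder values standing in for the clean, corrupted, and network-restored signals.

```matlab
rmse = @(x, xhat) sqrt(mean((x - xhat).^2));   % Eq. (10)

% Placeholder waveforms standing in for clean, corrupted, and restored echoes
cleanEcho    = cos(2*pi*0.02*(0:299));
noisyEcho    = cleanEcho + 0.5*randn(1, 300);
restoredEcho = cleanEcho + 0.1*randn(1, 300);  % stand-in for the network output

% (1) Signal-domain assessment: waveform fidelity before/after mitigation
rmseNoisy    = rmse(cleanEcho, noisyEcho);
rmseRestored = rmse(cleanEcho, restoredEcho);

% (2) Measurement-domain assessment (hypothetical per-target ranges, in m)
rangeTrue = [12.0, 5.4, 14.3];
rangeEst  = [12.4, 5.1, 14.9];
rmseRange = rmse(rangeTrue, rangeEst);
```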

The signal-to-noise ratio (SNR) is a fundamental metric that quantifies the proportion of useful signal energy to unwanted interference, and it is routinely employed in radar, telecommunications, and image-processing applications to characterize system performance under varying noise conditions. Expressed in decibels, SNR is defined by [54]

$\text{SNR} = 10 \times \log_{10}\!\left(\dfrac{P_{\text{signal}}}{P_{\text{jam}}}\right)$  (11)

where $P_{\text{signal}}$ denotes the power of the desired signal in an interference-free scenario, and $P_{\text{jam}}$ represents the power of the noise or jamming component measured at the receiver. A higher SNR indicates that the information-bearing component dominates the received waveform, leading to superior detection or decoding accuracy, whereas a lower SNR implies that noise constitutes a substantial fraction of the total power, thereby degrading the system's ability to discriminate targets from background clutter.
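For completeness, a short MATLAB computation of Eq. (11) is sketched below using placeholder signal and noise vectors.

```matlab
cleanEcho = cos(2*pi*0.02*(0:299));    % placeholder desired signal
noise     = 0.5*randn(1, 300);         % placeholder interference component
Psignal   = mean(cleanEcho.^2);        % power of the desired signal
Pjam      = mean(noise.^2);            % power of the interference
SNRdB     = 10*log10(Psignal/Pjam);    % Eq. (11), in decibels
```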

2.2 AlexNet Neural Network

AlexNet [56] is an eight-layer architecture comprising five convolutional and three fully connected layers, originally devised to enhance feature extraction and classification performance on large-scale datasets such as ImageNet. In the present study, the network is repurposed for one-dimensional radar-waveform regression by substituting every two-dimensional convolutional kernel with a one-dimensional counterpart, while preserving the original convolutional-fully connected (FC) hierarchy. Experimental results indicate that the modified AlexNet markedly suppresses environmental noise, yielding a substantial reduction in the root-mean-square error (RMSE) of noisy radar signals and achieving higher accuracy in range and angle estimation. These findings demonstrate the architecture's strong transferability and practical value in cross-domain signal-processing applications.

2.3 Proposed Improved Neural Network Method

To enhance around-the-corner radar perception at urban intersections under heterogeneous environmental interference, we propose an improved AlexNet-derived network for radar-echo restoration, as illustrated in Fig. 2. In noisy intersection conditions, a conventional AlexNet baseline may exhibit limited interference tolerance and unstable outputs, which can propagate into range and azimuth estimation errors. We therefore adopt a minimal-modification strategy that preserves the overall AlexNet backbone while improving feature refinement and training stability.


Figure 2: Proposed improved neural network method based on AlexNet.

Specifically, residual learning is introduced by inserting a lightweight residual module after the second convolutional layer (conv2). The module learns a residual mapping via two sequential convolutional layers with batch normalization and rectified linear unit (ReLU) activations. Its output is combined with the conv2 features through an identity skip connection (element-wise addition), followed by a post-activation. Batch normalization is also applied to early features to stabilize optimization under varying interference intensities. The remainder of the network (conv3–conv5 and fc6–fc8) follows the AlexNet-style hierarchy, and the final regression output directly produces a restored radar-echo signal of the same length as the input, without introducing additional decoding stages.

This residual-enhanced design facilitates gradient propagation, mitigates degradation effects, and preserves fine-grained echo characteristics that are critical for accurate range and azimuth-related estimation in urban intersection scenarios. As summarized in Table 1, the proposed architecture differs from the baseline primarily by the addition of a residual module and normalization layers, yielding improved robustness and generalization while maintaining a lightweight structure suitable for real-time deployment.
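A minimal MATLAB (Deep Learning Toolbox) sketch of this architecture is provided below. All kernel widths and channel/unit counts are illustrative assumptions, since only the block order is specified here; 2-D layers with 1-by-k kernels are used to emulate 1-D convolutions over a 1-by-300 chirp.

```matlab
N = 300;   % samples per chirp; the 1-by-N "image" emulates a 1-D signal

% Stem: conv1-conv2 with batch normalization on early features
lg = layerGraph([
    imageInputLayer([1 N 1], 'Normalization', 'none', 'Name', 'in')
    convolution2dLayer([1 11], 64, 'Padding', 'same', 'Name', 'conv1')
    batchNormalizationLayer('Name', 'bn1')
    reluLayer('Name', 'relu1')
    convolution2dLayer([1 5], 128, 'Padding', 'same', 'Name', 'conv2')
    batchNormalizationLayer('Name', 'bn2')
    reluLayer('Name', 'relu2')]);

% Residual module after conv2: two conv-BN stages plus an identity skip
lg = addLayers(lg, [
    convolution2dLayer([1 3], 128, 'Padding', 'same', 'Name', 'res_conv1')
    batchNormalizationLayer('Name', 'res_bn1')
    reluLayer('Name', 'res_relu1')
    convolution2dLayer([1 3], 128, 'Padding', 'same', 'Name', 'res_conv2')
    batchNormalizationLayer('Name', 'res_bn2')
    additionLayer(2, 'Name', 'res_add')
    reluLayer('Name', 'res_out')]);               % post-activation
lg = connectLayers(lg, 'relu2', 'res_conv1');
lg = connectLayers(lg, 'relu2', 'res_add/in2');   % identity skip connection

% Tail: conv3-conv5 and fc6-fc8; fc8 regresses the restored 300-sample chirp
lg = addLayers(lg, [
    convolution2dLayer([1 3], 192, 'Padding', 'same', 'Name', 'conv3')
    reluLayer('Name', 'relu3')
    convolution2dLayer([1 3], 192, 'Padding', 'same', 'Name', 'conv4')
    reluLayer('Name', 'relu4')
    convolution2dLayer([1 3], 128, 'Padding', 'same', 'Name', 'conv5')
    reluLayer('Name', 'relu5')
    fullyConnectedLayer(1024, 'Name', 'fc6')
    reluLayer('Name', 'relu6')
    fullyConnectedLayer(1024, 'Name', 'fc7')
    reluLayer('Name', 'relu7')
    fullyConnectedLayer(N, 'Name', 'fc8')         % same length as the input
    regressionLayer('Name', 'out')]);
lg = connectLayers(lg, 'res_out', 'conv3');
```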


3  Experimental Design

3.1 Research Methods

A MATLAB-based simulation framework was developed to investigate non-line-of-sight (NLOS) blind-spot detection at urban intersections. The virtual environment models a representative intersection geometry with buildings and static obstacles that obstruct direct line-of-sight (LOS) visibility. This setup enables systematic analysis of how structural occlusions degrade radar perception and whether specular reflections from building façades can be exploited to infer targets hidden around corners. Within this framework, the radar processing chain is used to estimate the relative range and bearing of a target with respect to the host vehicle. To quantify the benefit of multipath exploitation, we conduct a comparative evaluation between (i) direct-path (LOS) radar detection and (ii) reflection-assisted (NLOS) detection. Performance is assessed in terms of estimation accuracy and robustness under different occlusion conditions and interference levels.

3.2 Radar Position Setting

Conventional automotive radar sensors are commonly mounted near the longitudinal centerline of the vehicle’s front fascia and are primarily oriented forward. While this configuration is effective for unobstructed frontal scenarios, it is inherently limited in intersection corner cases where buildings or parked vehicles create large occluded regions. To mitigate line-of-sight blind spots, we employ an enhanced installation scheme utilizing a lateral array of three 77-GHz millimeter-wave radars co-located on the front bumper. Each radar is mechanically aligned with the vehicle heading, while its effective sensing sector is electronically defined through beam steering. The key parameters are set to a maximum unambiguous range of 20 m and a horizontal field of view (HFOV) of 60 degrees. Specifically, the boresights of the three electronic beams are steered to −60°, 0°, and +60° relative to the vehicle heading. Accordingly, the three radars cover the azimuth intervals [−90°, −30°], [−30°, +30°], and [+30°, +90°], respectively. Concatenation of these sectors yields a continuous 180° forward coverage without gaps, forming a wider observation aperture that is better suited to capturing reflection-induced returns in urban intersection scenarios. This configuration substantially reduces blind zones while maintaining straightforward hardware integration and installation. In addition, the proposed architecture is readily extendable to advanced sensing strategies, such as adaptive beam scanning and multi-sensor fusion. These extensions can support dynamic field-of-view expansion and environment-specific calibration to further improve perception robustness in complex intersection scenarios, as illustrated in Fig. 3.


Figure 3: Scenario diagram of the detection region for the enhanced automotive radar.
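The sector logic can be summarized by the following minimal MATLAB sketch, which assigns a target's relative azimuth to one of the three steered beams and applies the 20 m range gate; the target values are hypothetical.

```matlab
boresights = [-60, 0, 60];     % electronic beam-steering angles (deg)
hfov       = 60;               % horizontal field of view per beam (deg)
maxRange   = 20;               % maximum unambiguous range (m)

D  = 14.5;                     % hypothetical target range (m)
az = -47;                      % hypothetical relative azimuth (deg)

inRange  = D <= maxRange;
covered  = abs(az - boresights) <= hfov/2;   % one logical flag per beam
beamIdx  = find(covered, 1);                 % first beam that sees the target
detected = inRange && ~isempty(beamIdx);     % here: beam 1 (left, -60 deg)
```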

3.3 Obstacle Scenario Configuration

To examine reflection characteristics and Doppler signatures of millimeter-wave radar in an urban environment with building-induced occlusions, we constructed a representative driving scenario using a commercial simulation platform. The simulated scene provides synthetic radar returns for systematic evaluation, and the generated dataset is subsequently used to develop and train a deep-learning-based radar interference mitigation module, adapted from AlexNet. As illustrated in Fig. 4, the virtual environment represents a four-way intersection formed by two orthogonal, bidirectional roadways with explicit lane markings. The ego vehicle (length: 4.5 m, width: 1.8 m) is initialized 10 m south of the intersection center and oriented northward (yaw = 90°). Three targets are introduced to create both line-of-sight and occluded cases: (i) a bicycle located on the west side of the intersection at (−15 m, −3 m); (ii) a passenger car directly ahead of the ego vehicle at (2 m, 5 m); and (iii) a pedestrian approaching from the northwest at (14 m, 3 m).


Figure 4: Road-scenario simulation diagram.

3.4 Detection of Reflected Signals from Occluding Surfaces

Building Reflection Detection Mode refers to a signal-processing mechanism designed for scenarios where a target is obstructed from direct line-of-sight (LOS) observation by occluding structures (e.g., buildings). In such cases, radar transmissions can reach the target via one or multiple specular reflections off building surfaces, and the corresponding reflected echoes can be exploited to infer the presence and kinematic properties of targets within radar shadow regions. This mode therefore enhances target acquisition in visually occluded environments, particularly for objects located behind corners or within building-induced blind zones, as shown in Fig. 5.


Figure 5: Simulation diagram of the reflection of the obstruction.
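The mirror-image construction underlying this mode can be sketched as follows: reflecting the hidden target across the facade plane yields a virtual LOS target whose range and bearing correspond to the reflected path. The facade position and scene coordinates are hypothetical, and the sketch assumes a single ideal specular bounce rather than the full propagation model.

```matlab
xw = 4;                            % reflecting facade plane x = xw (m), assumed
pr = [0, -6];                      % radar position (m)
pt = [-8, 2];                      % occluded target position (m)

ptMirror = [2*xw - pt(1), pt(2)];  % mirror image of the hidden target

% Apparent (virtual) geometry seen through the reflection
pathLen = norm(ptMirror - pr);     % reflected path length (~17.9 m, inside the 20 m gate)
azVirt  = atan2d(ptMirror(2) - pr(2), ptMirror(1) - pr(1));   % apparent bearing

% Specular reflection point: where the ray to the mirrored target meets the wall
tHit = (xw - pr(1)) / (ptMirror(1) - pr(1));
pHit = pr + tHit * (ptMirror - pr);
```

Once the facade plane is known, the measured range and bearing along the reflected path can be folded back through the mirror to recover the true target position.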

The simulation workflow proceeds as follows. First, the scene is instantiated, and the radar’s placement and sensing parameters are configured; after this, the radar waveform is transmitted. Two operating conditions are considered: (i) a clean propagation channel and (ii) a channel with injected environmental interference. The system then checks whether the target lies within the radar’s effective detection range. If the target is out of range, the process terminates with no detection. If the target is in range, the pipeline evaluates whether occluding objects block the direct path and accordingly branches to either direct-path (LOS) detection or reflection-assisted (NLOS) detection along the reflected propagation path. The received echoes are then collected and analyzed, followed by deep-learning-based interference suppression. Finally, waveforms are visualized for inspection, quantitative performance metrics are computed, and detection outcomes from the direct and reflection-based modes are compared.
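In pseudocode form, this branching logic reduces to a short MATLAB sketch; the two detector branches are hypothetical stubs standing in for the LOS and reflection-assisted processing chains.

```matlab
% Hypothetical stand-ins for the two detector branches
detectDirectPath    = @() "LOS detection along the direct path";
detectViaReflection = @() "NLOS detection along the facade-reflected path";

D = 14.5;  maxRange = 20;  losBlocked = true;   % example scene state

if D > maxRange
    result = "no detection";               % target beyond instrumented range
elseif ~losBlocked
    result = detectDirectPath();           % direct-path (LOS) branch
else
    result = detectViaReflection();        % reflection-assisted (NLOS) branch
end
```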

3.5 Model Training Parameter Configuration

In this study, the simulation parameters and learning settings were carefully specified for both the radar signal generation environment and the neural-network-based environmental interference mitigation model. The FMCW radar configuration adopts a carrier (center) frequency of 77 GHz and a bandwidth of 200 MHz. The number of sampling points per chirp is set to 300, and the chirp duration is fixed at 5.5 μs to emulate practical automotive radar settings. To control interference severity in a consistent manner, we introduce a global parameter termed the Noise Factor, which is initialized to 0.5 and varied across experiments (e.g., 0.5–3.0) to represent different levels of environmental disturbance. For each simulation run, randomized seed initialization is applied to generate diverse realizations of the synthetic radar signals. This design supports robust evaluation of model stability and generalization across varying noise realizations.

The synthetic waveforms incorporate appropriate propagation time delays and amplitude attenuation to model target reflections. The resulting signals are then corrupted by additive Gaussian noise according to the specified Noise Factor, thereby mimicking signal distortion under environmental interference. For interference mitigation, we adopt a modified AlexNet architecture. The network consists of five convolutional layers followed by three fully connected layers, and it is trained for a regression objective to recover denoised radar signals from inputs contaminated by interference. Using an AlexNet-derived backbone provides a structured baseline that facilitates controlled comparisons between raw and processed waveforms while enabling architecture-level modifications for improved denoising capability. The network is trained using the Adam optimizer, which combines first-order momentum with adaptive second-order learning rates and employs bias-corrected updates. This optimizer is well-suited for noisy learning signals and can improve convergence stability in the presence of interference. The learning rate is set to 0.001, the batch size is 32, and training is performed for 300 epochs. To further improve generalization and reduce ordering effects, the training data are reshuffled after each epoch.
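A minimal MATLAB sketch of this data-generation and training recipe is given below. The dataset size, beat-frequency distribution, and noise scaling are assumptions; the radar parameters and optimizer settings (Adam, learning rate 0.001, batch size 32, 300 epochs, per-epoch shuffling) follow the text, and trainNetwork/trainingOptions are Deep Learning Toolbox functions.

```matlab
noiseFactor = 3.0;                     % severe-interference setting
numSignals  = 1000;                    % dataset size (assumed)
N  = 300;  Tchirp = 5.5e-6;
fs = N/Tchirp;  t = (0:N-1)/fs;

clean = zeros(numSignals, N);
for k = 1:numSignals
    fb = 0.5e6 + 2.5e6*rand;           % randomized beat frequency (hypothetical)
    clean(k,:) = cos(2*pi*fb*t);
end
% Additive Gaussian corruption scaled by the Noise Factor (scale assumed)
noisy = clean + noiseFactor*0.1*randn(size(clean));

X = reshape(noisy', [1 N 1 numSignals]);   % inputs as 1-by-300 "images"
Y = clean;                                 % regression targets (clean echoes)

opts = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...          % learning rate 0.001
    'MiniBatchSize', 32, ...
    'MaxEpochs', 300, ...
    'Shuffle', 'every-epoch', ...          % reshuffle after each epoch
    'Verbose', false);
% net = trainNetwork(X, Y, lg, opts);      % lg: layer graph from Section 2.3
```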

3.6 Error Evaluation Criteria

After constructing the simulation scenarios and finalizing the neural network architecture, we evaluate performance using two quantitative metrics: Root Mean Square Error (RMSE) and Signal-to-Noise Ratio (SNR). RMSE is used to measure estimation accuracy, where a lower RMSE indicates reduced prediction error after model enhancement. SNR is used to characterize signal clarity, where higher SNR corresponds to a larger proportion of useful signal energy relative to noise, implying improved interpretability of the radar returns. By jointly analyzing RMSE and SNR under various interference settings (Noise Factor) and network configurations, the effectiveness of the proposed interference mitigation approach can be objectively and consistently validated.

4  Results and Discussion

4.1 Signal Comparison

By comparing the baseline AlexNet-based network with the proposed improved model, the differences in handling environmentally induced interference become evident. For the baseline network, introducing interference results in pronounced variations in the error signal, particularly under stronger noise conditions, where the deviation from the clean reference widens considerably. This behavior indicates limited interference tolerance, as denoising provides only partial improvement in waveform clarity, with residual irregular oscillations persisting. Such artifacts suggest that the baseline architecture cannot fully suppress complex interference patterns, which is likely attributable to insufficient representational capacity and limited feature refinement. Consequently, the baseline model is more susceptible to amplitude instability under noisy inputs, which can propagate into downstream errors in range and angular estimation. To illustrate these effects in detail, we select the front radar as a representative example among the three simulated radars, as shown in Fig. 6(1). Because the front radar plays the primary role in forward detection and is typically exposed to the most direct interference in the simulated intersection scenario, it provides a meaningful basis for characterizing signal distortion and evaluating interference mitigation performance. The observed instability in the baseline output highlights the limitations of the original architecture and motivates the need for a more robust model in complex radar environments.


Figure 6: Side-by-side comparison of front-radar chirp-level signal processing under severe interference (Noise Factor = 3.0): (1) baseline AlexNet-based interference mitigation; (2) proposed improved AlexNet-based mitigation. Quantitative improvements are summarized in Table 2 (baseline) and Table 3 (proposed), e.g., post-mitigation RMSE = 0.75 m/0.83°/0.93° (baseline) vs. 0.56 m/0.46°/0.73° (proposed) for range/angle/azimuth deviation.



In contrast, the improved neural network exhibits a substantial reduction in error variability across different interference levels. This improvement is mainly attributed to the incorporation of residual connections and complementary optimization strategies. Residual learning promotes more reliable gradient propagation through deeper layers and alleviates degradation effects that can arise in standard deep networks, thereby improving stability when processing interference-contaminated signals. In addition, batch normalization and improved nonlinearities further enhance the model’s ability to attenuate noise and stabilize intermediate feature distributions. As a result, the enhanced model constrains signal fluctuations within a narrower band under various interference settings, demonstrating consistently higher robustness than the baseline architecture. After denoising, the interference-induced error is significantly reduced, and the reconstructed waveform more closely matches the clean reference signal, as shown in Fig. 6(2). Overall, these improvements enable more accurate and stable signal reconstruction under complex interference conditions, providing a stronger foundation for the subsequent estimation of target range and angular parameters. Mechanistically, the residual connection biases the network toward learning a correction term (i.e., the residual error) instead of reconstructing the entire echo from scratch, which improves gradient flow and stabilizes optimization under strong interference. Batch normalization further mitigates internal covariate shift, helping the regressor maintain consistent feature scaling and reconstruction behavior across varying noise levels and clutter conditions.

4.2 Neural Network-Based Estimation and Error Evaluation

In this study, radar-based range and angular estimates between the ego vehicle and target objects were obtained from simulation by modeling direct and reflected propagation paths and then computing the corresponding root mean square error (RMSE) with respect to the ground-truth geometry. RMSE serves as the primary quantitative metric, where lower values indicate higher estimation accuracy. To examine the relationship between interference intensity and detection performance, we focus on a severe interference setting (Noise Factor = 3.0). Three conditions are evaluated: (i) a clean environment, (ii) an interference-contaminated environment, and (iii) the denoised output produced by a conventional deep-learning baseline. The resulting RMSE values enable direct comparison of how interference affects radar estimation and how effectively learning-based denoising mitigates this degradation, as summarized in Table 2. Under high interference (Noise Factor = 3.0), the baseline network exhibits substantial performance deterioration, with the largest degradation observed in azimuth estimation (RMSE = 10.77°). This indicates that the baseline model struggles to extract stable angle-related features when the input is heavily corrupted by noise and clutter. The post-denoising results show partial recovery; however, non-negligible residual errors remain under severe interference. Overall, these outcomes highlight key limitations of the conventional architecture, including insufficient robustness to high-intensity noise, limited angle-sensitive feature learning, and reduced adaptability to environmental variations such as occlusion and changes in reflectivity.

By contrast, the improved neural network, augmented with residual learning and batch normalization, achieves consistently lower RMSE across range, azimuth, and deflection angle estimates. The enhanced model demonstrates stronger noise suppression and more stable estimation behavior in challenging conditions, validating the effectiveness of the architectural and training refinements, as reported in Table 3. Under Noise Factor = 3.0, the improved model outperforms the baseline across all radar targets. For the front radar, the baseline model yields a range RMSE of 5.48 m and an azimuth RMSE of 18.95°, reflecting limited tolerance to severe interference. In comparison, the improved model reduces the range RMSE to 0.56 m and the azimuth RMSE to 0.46°, while also lowering the deflection-angle RMSE from 0.93° to 0.73°. These results demonstrate that the proposed architecture maintains stable and accurate estimation under strong interference, providing a more reliable foundation for autonomous vehicle perception and intelligent sensing in occluded urban environments. Fig. 7 complements Tables 2 and 3 by providing an at-a-glance visualization of the RMSE reduction and the corresponding spatial convergence toward the ground truth under severe interference.


Figure 7: Radar performance evaluation under severe interference (Noise Factor = 3.0). (A) RMSE comparison between the original (baseline) and improved neural networks after interference mitigation for range, angle, and azimuth deviation. (B) Spatial mapping in the XY plane showing that the corrected estimates of the improved model converge closer to the ground truth compared with the baseline model.

4.3 Comparison of Signal-to-Noise Ratio (SNR) across Radar Units under Varying Noise Factors

We further evaluate denoising effectiveness using the signal-to-noise ratio (SNR) measured at three radar units (front, left, and right) under three interference levels (Noise Factor = 0.5, 1.5, and 3.0). For each radar, SNR is compared across two conditions: (i) the denoised output produced by the AlexNet-based baseline model, and (ii) the denoised output produced by the proposed improved network. The results are summarized in Table 4. Overall, the proposed model yields higher SNR than the baseline AlexNet-based model across all radar positions and noise levels, indicating more effective suppression of interference while preserving the underlying signal components. However, the magnitude of SNR improvement decreases as interference intensity increases. Under low interference (Noise Factor = 0.5), denoising yields a pronounced SNR gain, indicating that the dominant corruption can be effectively removed without substantially distorting the target echo structure. In contrast, under severe interference (Noise Factor = 3.0), the SNR gain becomes more modest, implying that high-intensity noise and complex clutter components are more difficult to separate from the signal, even with the improved architecture.


In addition, SNR gains differ across radar locations (front vs. lateral units), suggesting that denoising performance depends on the spatial configuration and the corresponding propagation/interference characteristics. This observation is consistent with intersection scenarios where reflection geometry and occluding structures can produce radar-dependent multipath patterns and non-uniform interference distributions. These findings motivate further investigation into position-aware modeling and adaptive strategies that account for radar placement and environment-specific propagation conditions.

4.4 Computational Complexity and Runtime Considerations

To substantiate the deployment-oriented discussion, we report the parameter count, multiply–accumulate operations (MACs), and FP32 weight memory for batch-1 inference on a single 300-sample chirp input. Compared with the baseline AlexNet (1D), the proposed residual + batch-normalization upgrade increases the parameter count from 21.50 to 21.89 M (≈1.81%) and increases MACs from 107.05 to 136.54 M (≈27.54%). Under the common convention that 1 MAC = 2 floating-point operations (FLOPs), this corresponds to 214.10 and 273.08 M FLOPs, respectively. Despite the increased MACs, the FP32 weight memory changes only modestly, as the overall parameter budget remains dominated by the fully connected layers. The computational complexity and memory footprint are summarized in Table 5.
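The reported figures can be reproduced with a few lines of arithmetic under the stated 1 MAC = 2 FLOPs convention:

```matlab
paramsBase = 21.50e6;  paramsImpr = 21.89e6;   % parameter counts
macsBase   = 107.05e6; macsImpr   = 136.54e6;  % MACs per chirp inference

paramGrowth = 100*(paramsImpr - paramsBase)/paramsBase;  % ~1.81 %
macGrowth   = 100*(macsImpr   - macsBase)/macsBase;      % ~27.54 %
flops       = 2*[macsBase, macsImpr];                    % 214.10 / 273.08 MFLOPs
```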


Absolute inference latency is hardware-dependent (CPU/GPU/automotive system-on-chip (SoC)) and also depends on implementation details and the end-to-end radar processing pipeline. In practice, the weight memory footprint can be further reduced using FP16/INT8 quantization; therefore, we frame real-time suitability as platform-specific and leave full embedded latency benchmarking (including preprocessing and postprocessing) as future work.

4.5 Limitations and Future Work

Although the proposed framework demonstrates the feasibility of reflected-path NLOS detection under controlled simulation settings, several limitations remain before real-world deployment. Future work will focus on (i) unified benchmarking, (ii) improved environmental realism, (iii) deployment efficiency, and (iv) sim-to-real robustness.

     i)  First, we will establish a unified benchmarking protocol to systematically compare our method with representative published NLOS radar approaches, including classical signal-processing baselines (e.g., adaptive filtering, wavelet denoising, matrix-pencil methods) and modern learning-based alternatives, using consistent scenarios/datasets and standardized metrics (e.g., RMSE, SNR gain, precision/recall) for fair and reproducible evaluation.

    ii)  Second, while the current simulation adopts additive Gaussian noise as a controlled baseline, we will enhance realism by incorporating physics-informed urban propagation and interference models, including material-aware specular/diffuse reflections (e.g., glass/concrete/metal façades), ray-tracing-based multipath generation, and non-Gaussian time-varying clutter/interference, as well as dynamic occlusions and moving objects. We will further validate the framework using measured 77-GHz FMCW radar data collected in realistic intersection scenarios.

   iii)  Third, we will benchmark compact backbones (e.g., ResNet-18, MobileNetV3, lightweight transformer variants) and report not only accuracy but also latency, FLOPs, and memory/energy costs on automotive-grade SoCs to clarify deployment trade-offs for safety-critical functions.

    iv)  Finally, we will conduct systematic ablation studies (residual connections, batch normalization, angle-aware loss) and investigate sim-to-real generalization across diverse urban layouts and installation perturbations (yaw/pitch/roll) via domain adaptation and stronger augmentation, with potential extensions toward cooperative perception (e.g., V2I-assisted sensing).

5  Conclusion

This study presented an urban-intersection-oriented non-line-of-sight (NLOS) perception framework using 77-GHz FMCW radar with building-reflection-assisted propagation. It formulated interference suppression as chirp-level echo restoration and introduced a minimally modified AlexNet-derived regressor (residual learning and batch normalization) to improve robustness prior to FFT-based range and azimuth estimation. In MATLAB-based simulations under the most severe interference setting (Noise Factor = 3.0), the proposed method reduced the range/azimuth/azimuth-deviation RMSEs to 0.56 m/0.46°/0.73°, outperforming the baseline network.

Future work will include: (i) validation using real 77-GHz FMCW radar measurements collected at multiple urban intersections; (ii) more realistic urban propagation and interference modeling, including material-aware specular/diffuse reflections, ray-tracing-based multipath generation, and non-Gaussian/time-varying clutter under diverse weather conditions; (iii) systematic benchmarking against representative alternative methods (e.g., U-Net variants, temporal CNNs, Transformers, and published NLOS radar pipelines) under consistent scenarios/datasets with standardized evaluation metrics and side-by-side quantitative visualizations; and (iv) deployment-oriented reporting of end-to-end latency, memory footprint, and throughput on representative automotive hardware platforms.

Acknowledgement: Not applicable.

Funding Statement: The authors would like to thank the National Science and Technology Council, Taiwan, for financially supporting this research (grant No. NSTC 114-2221-E-018-003) and the Ministry of Education's Teaching Practice Research Program, Taiwan (PSK1142780).

Author Contributions: Conceptualization, Shih-Lin Lin and Yi-Hsuan Chen; Methodology, Shih-Lin Lin and Yi-Hsuan Chen; Formal analysis, Yi-Hsuan Chen; Investigation, Shih-Lin Lin; Software, Yi-Hsuan Chen; Validation, Yi-Hsuan Chen; Visualization, Yi-Hsuan Chen and Shih-Lin Lin; Resources, Shih-Lin Lin; Funding acquisition, Shih-Lin Lin; Project administration, Shih-Lin Lin; Supervision, Shih-Lin Lin; Writing—original draft, Shih-Lin Lin and Yi-Hsuan Chen; Writing—review and editing, Shih-Lin Lin. All authors reviewed and approved the final version of the manuscript.

Availability of Data and Materials: The data used and analyzed during the current study are available from the corresponding author upon reasonable request.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. Lin DJ, Yang JR, Liu HH, Chiang HS, Wang LY. Analysis of environmental factors on intersection accidents. Sustainability. 2022;14(3):1764. doi:10.3390/su14031764. [Google Scholar] [CrossRef]

2. Cantillo V, Garcés P, Márquez L. Factors influencing the occurrence of traffic accidents in urban roads: a combined GIS-empirical Bayesian approach. Dyna. 2016;83(195):21–8. doi:10.15446/dyna.v83n195.47229. [Google Scholar] [CrossRef]

3. Yao S, Guan R, Huang X, Li Z, Sha X, Yue Y, et al. Radar-camera fusion for object detection and semantic segmentation in autonomous driving: a comprehensive review. IEEE Trans Intell Veh. 2024;9(1):2094–128. doi:10.1109/TIV.2023.3307157. [Google Scholar] [CrossRef]

4. Feng D, Haase-Schütz C, Rosenbaum L, Hertlein H, Gläser C, Timm F, et al. Deep multi-modal object detection and semantic segmentation for autonomous driving: datasets, methods, and challenges. IEEE Trans Intell Transp Syst. 2021;22(3):1341–60. doi:10.1109/TITS.2020.2972974. [Google Scholar] [CrossRef]

5. Wang Y, Jiang Z, Li Y, Hwang JN, Xing G, Liu H. RODNet: a real-time radar object detection network cross-supervised by camera-radar fused object 3D localization. IEEE J Sel Top Signal Process. 2021;15(4):954–67. doi:10.1109/JSTSP.2021.3058895. [Google Scholar] [CrossRef]

6. Wang X, Xu L, Sun H, Xin J, Zheng N. On-road vehicle detection and tracking using MMW radar and monovision fusion. IEEE Trans Intell Transp Syst. 2016;17(7):2075–84. doi:10.1109/TITS.2016.2533542. [Google Scholar] [CrossRef]

7. Cui Y, Chen R, Chu W, Chen L, Tian D, Li Y, et al. Deep learning for image and point cloud fusion in autonomous driving: a review. IEEE Trans Intell Transp Syst. 2022;23(2):722–39. doi:10.1109/TITS.2020.3023541. [Google Scholar] [CrossRef]

8. Rangaraj PA, Alkanat T, Pandharipande A. RAIDS: radar range-azimuth map estimation from image, depth, and semantic descriptions. IEEE Sens J. 2025;25(7):12381–8. doi:10.1109/JSEN.2025.3540660. [Google Scholar] [CrossRef]

9. Li P, Wang P, Berntorp K, Liu H. Exploiting temporal relations on radar perception for autonomous driving. In: Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2022 Jun 18–24; New Orleans, LA, USA. p. 17050–9. doi:10.1109/CVPR52688.2022.01656. [Google Scholar] [CrossRef]

10. Bijelic M, Gruber T, Mannan F, Kraus F, Ritter W, Dietmayer K, et al. Seeing through fog without seeing fog: deep multimodal sensor fusion in unseen adverse weather. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 Jun 13–19; Seattle, WA, USA. p. 11679–89. doi:10.1109/cvpr42600.2020.01170. [Google Scholar] [CrossRef]

11. Qian K, Zhu S, Zhang X, Li LE. Robust multimodal vehicle detection in foggy weather using complementary lidar and radar signals. In: Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021 Jun 20–25; Nashville, TN, USA. p. 444–53. doi:10.1109/cvpr46437.2021.00051. [Google Scholar] [CrossRef]

12. Lin SL, Wu BH. Application of Kalman filter to improve 3D LiDAR signals of autonomous vehicles in adverse weather. Appl Sci. 2021;11(7):3018. doi:10.3390/app11073018. [Google Scholar] [CrossRef]

13. Lin SL, Li XQ. Improving LiDAR object classification based on PointNet in noisy environments. Fluct Noise Lett. 2022;21(6):2250057. doi:10.1142/s0219477522500572. [Google Scholar] [CrossRef]

14. Lin SL, Wu JY. Enhancing lidar-based 3D classification through an improved deep learning framework with residual connections. IEEE Access. 2025;13(4):42836–49. doi:10.1109/ACCESS.2025.3547942. [Google Scholar] [CrossRef]

15. Krolik JL, Farrell J, Steinhardt A. Exploiting multipath propagation for GMTI in urban environments. In: Proceedings of the 2006 IEEE Conference on Radar; 2006 Apr 24–27; Verona, NY, USA. 4 p. doi:10.1109/RADAR.2006.1631777. [Google Scholar] [CrossRef]

16. Setlur P, Negishi T, Devroye N, Erricolo D. Multipath exploitation in non-LOS urban synthetic aperture radar. IEEE J Sel Top Signal Process. 2014;8(1):137–52. doi:10.1109/JSTSP.2013.2287185. [Google Scholar] [CrossRef]

17. Sume A, Gustafsson M, Herberthson M, Janis A, Nilsson S, Rahm J, et al. Radar detection of moving targets behind corners. IEEE Trans Geosci Remote Sens. 2011;49(6):2259–67. doi:10.1109/TGRS.2010.2096471. [Google Scholar] [CrossRef]

18. Zetik R, Eschrich M, Jovanoska S, Thoma RS. Looking behind a corner using multipath-exploiting UWB radar. IEEE Trans Aerosp Electron Syst. 2015;51(3):1916–26. doi:10.1109/TAES.2015.140303. [Google Scholar] [CrossRef]

19. Kadambi A, Zhao H, Shi B, Raskar R. Occluded imaging with time-of-flight sensors. ACM Trans Graph. 2016;35(2):1–12. doi:10.1145/2836164. [Google Scholar] [CrossRef]

20. Thai KPH, Rabaste O, Bosse J, Poullin D, Hinostroza I, Letertre T, et al. Around-the-corner radar: detection and localization of a target in non-line of sight. In: Proceedings of the 2017 IEEE Radar Conference (RadarConf); 2017 May 8–12; Seattle, WA, USA. p. 842–7. doi:10.1109/RADAR.2017.7944320. [Google Scholar] [CrossRef]

21. Rong Y, Aubry A, De Maio A, Tang M. Diffuse multipath exploitation for adaptive detection of range distributed targets. IEEE Trans Signal Process. 2020;68:1197–212. doi:10.1109/TSP.2020.2967144. [Google Scholar] [CrossRef]

22. Chen J, Guo S, Luo H, Li N, Cui G. Non-line-of-sight multi-target localization algorithm for driver-assistance radar system. IEEE Trans Veh Technol. 2023;72(4):5332–7. doi:10.1109/TVT.2022.3227971. [Google Scholar] [CrossRef]

23. Chen Z, Zhou Y, Zhou Z, Sun B. All-in-one network for NLOS mm-wave radar object detection based on transformer. In: IGARSS 2023—2023 IEEE International Geoscience and Remote Sensing Symposium; 2023 Jul 16–21; Pasadena, CA, USA. p. 6141–4. doi:10.1109/IGARSS52108.2023.10282816. [Google Scholar] [CrossRef]

24. Chen J, Yang X, Qiu C, Zhu Z, Wu P, Xu Z, et al. Joint localization of LOS and NLOS targets with clutter mitigation via multipath exploitation radar. IEEE Trans Radar Syst. 2025;3(1):549–61. doi:10.1109/TRS.2025.3550023. [Google Scholar] [CrossRef]

25. Patole SM, Torlak M, Wang D, Ali M. Automotive radars: a review of signal processing techniques. IEEE Signal Process Mag. 2017;34(2):22–35. doi:10.1109/MSP.2016.2628914. [Google Scholar] [CrossRef]

26. Hakobyan G, Yang B. High-performance automotive radar: a review of signal processing algorithms and modulation schemes. IEEE Signal Process Mag. 2019;36(5):32–44. doi:10.1109/MSP.2019.2911722.

27. Engels F, Heidenreich P, Wintermantel M, Stäcker L, Al Kadi M, Zoubir AM. Automotive radar signal processing: research directions and practical challenges. IEEE J Sel Top Signal Process. 2021;15(4):865–78. doi:10.1109/JSTSP.2021.3063666.

28. Geng Z, Yan H, Zhang J, Zhu D. Deep-learning for radar: a survey. IEEE Access. 2021;9:141800–18. doi:10.1109/ACCESS.2021.3119561.

29. Venon A, Dupuis Y, Vasseur P, Merriaux P. Millimeter wave FMCW RADARs for perception, recognition and localization in automotive applications: a survey. IEEE Trans Intell Veh. 2022;7(3):533–55. doi:10.1109/TIV.2022.3167733.

30. Fuchs J, Gardill M, Lübke M, Dubey A, Lurz F. A machine learning perspective on automotive radar direction of arrival estimation. IEEE Access. 2022;10:6775–97. doi:10.1109/ACCESS.2022.3141587.

31. Hsu WT, Lin SL. Using FMCW in autonomous cars to accurately estimate the distance of the preceding vehicle. Int J Automot Technol. 2022;23(6):1755–62. doi:10.1007/s12239-022-0153-4.

32. He J, Wang J, Yang B, Zhao K, Sun J. Grating lobe/sidelobe suppression method for near-field distributed MIMO imaging array. IEEE Trans Antennas Propag. 2025;73(3):1674–87. doi:10.1109/TAP.2024.3503921.

33. Wang D, Lu D, Zhao J, Li W, Li H, Xu J, et al. Multiscale pillars fusion for 4-D radar object detection with radar data enhancement. IEEE Sens J. 2025;25(3):5102–15. doi:10.1109/JSEN.2024.3516786.

34. Lin SL. Advanced multi-channel echo separation techniques for high-interference automotive radars. Comput Mater Contin. 2025;85(1):1365–82. doi:10.32604/cmc.2025.067764.

35. Lin SL. Single-layer 79 GHz microstrip patch array for all-weather automotive radar. J Mod Mech Eng Technol. 2025;12:18–24. doi:10.31875/2409-9848.2025.12.03.

36. Chen M, Chen CC. RCS patterns of pedestrians at 76–77 GHz. IEEE Antennas Propag Mag. 2014;56(4):252–63. doi:10.1109/MAP.2014.6931711.

37. Villeval S, Bilik I, Gürbuz SZ. Application of a 24 GHz FMCW automotive radar for urban target classification. In: Proceedings of the 2014 IEEE Radar Conference; 2014 May 19–23; Cincinnati, OH, USA. p. 1237–40. doi:10.1109/RADAR.2014.6875787.

38. Lee S, Yoon YJ, Lee JE, Kim SC. Human–vehicle classification using feature-based SVM in 77-GHz automotive FMCW radar. IET Radar Sonar Navig. 2017;11(10):1589–96. doi:10.1049/iet-rsn.2017.0126.

39. Tavanti E, Rizik A, Fedeli A, Caviglia DD, Randazzo A. A short-range FMCW radar-based approach for multi-target human-vehicle detection. IEEE Trans Geosci Remote Sens. 2022;60:2003816. doi:10.1109/TGRS.2021.3138687.

40. Maaref M, Kassas ZM. Ground vehicle navigation in GNSS-challenged environments using signals of opportunity and a closed-loop map-matching approach. IEEE Trans Intell Transp Syst. 2020;21(7):2723–38. doi:10.1109/TITS.2019.2907851.

41. Hu Y, Li X, Dong X, Kong D, Xu Q, Sun Y. A reliable cooperative fusion positioning methodology for intelligent vehicle in non-line-of-sight environments. IEEE Trans Instrum Meas. 2022;71:1007111. doi:10.1109/TIM.2022.3205664.

42. Zhou X, Wang C, Xie Q, Qiu T. V2I-coop: accurate object detection for connected automated vehicles at accident black spots with V2I cross-modality cooperation. IEEE Trans Mob Comput. 2025;24(3):2043–55. doi:10.1109/TMC.2024.3486758.

43. Lin Z, Xiao L, Chen H, Lv Z. Collaborative perception against data fabrication attacks in vehicular networks. IEEE Trans Mob Comput. 2025;24(10):10654–67. doi:10.1109/TMC.2025.3571013.

44. Lin SL, Chen YH. Technique on vehicle damage assessment after collisions using optical radar technology and iterative closest point algorithm. IEEE Access. 2024;12:174507–18. doi:10.1109/ACCESS.2024.3495721.

45. Lin SL, Lin BC. Enhancing safety in autonomous vehicle navigation: an optimized path planning approach leveraging model predictive control. Comput Mater Contin. 2024;80(3):3555–72. doi:10.32604/cmc.2024.055456.

46. Caesar H, Bankiti V, Lang AH, Vora S, Liong VE, Xu Q, et al. nuScenes: a multimodal dataset for autonomous driving. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 Jun 13–19; Seattle, WA, USA. p. 11618–28. doi:10.1109/cvpr42600.2020.01164.

47. Paek DH, Kong SH, Wijaya KT. K-radar: 4D radar object detection for autonomous driving in various weather conditions. In: NeurIPS 2022 Datasets and Benchmarks Track; 2022 Nov 28–Dec 9; New Orleans, LA, USA.

48. Zhang X, Wang L, Chen J, Fang C, Yang G, Wang Y, et al. Dual radar: a multi-modal dataset with dual 4D radar for autonomous driving. Sci Data. 2025;12(1):439. doi:10.1038/s41597-025-04698-2.

49. Yang L, Zhang X, Li J, Wang C, Ma J, Song Z, et al. V2X-radar: a multi-modal dataset with 4D radar for cooperative perception. arXiv:2411.10962. 2024. doi:10.48550/arXiv.2411.10962.

50. Fan L, Wang J, Chang Y, Li Y, Wang Y, Cao D. 4D mmWave radar for autonomous driving perception: a comprehensive survey. IEEE Trans Intell Veh. 2024;9(4):4606–20. doi:10.1109/TIV.2024.3380244.

51. Wang N, Shang D, Gong Y, Hu X, Song Z, Yang L, et al. Collaborative perception datasets for autonomous driving: a review. IEEE Sens J. 2025;25(16):30255–74. doi:10.1109/JSEN.2025.3582040.

52. Kong H, Huang C, Yu J, Shen X. A survey of mmWave radar-based sensing in autonomous vehicles, smart homes and industry. IEEE Commun Surv Tutor. 2025;27(1):463–508. doi:10.1109/COMST.2024.3409556.

53. Stove AG. Linear FMCW radar techniques. IEE Proc F Radar Signal Process. 1992;139(5):343–50. doi:10.1049/ip-f-2.1992.0048.

54. Richards MA. Fundamentals of radar signal processing. 2nd ed. New York, NY, USA: McGraw-Hill Education; 2014.

55. de Berg M, Cheong O, van Kreveld M, Overmars M. Computational geometry: algorithms and applications. Berlin/Heidelberg, Germany: Springer; 2008. doi:10.1007/978-3-540-77974-2.

56. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90. doi:10.1145/3065386.


Cite This Article

APA Style
Lin, S.-L., & Chen, Y.-H. (2026). Deep Learning–Aided Frequency-Modulated Continuous-Wave Radar for Around-the-Corner Non-Line-of-Sight Perception at Urban Intersections. Computer Modeling in Engineering & Sciences, 147(1), 37. https://doi.org/10.32604/cmes.2026.078862
Vancouver Style
Lin SL, Chen YH. Deep Learning–Aided Frequency-Modulated Continuous-Wave Radar for Around-the-Corner Non-Line-of-Sight Perception at Urban Intersections. Comput Model Eng Sci. 2026;147(1):37. https://doi.org/10.32604/cmes.2026.078862
IEEE Style
S.-L. Lin and Y.-H. Chen, "Deep Learning–Aided Frequency-Modulated Continuous-Wave Radar for Around-the-Corner Non-Line-of-Sight Perception at Urban Intersections," Comput. Model. Eng. Sci., vol. 147, no. 1, Art. no. 37, 2026. https://doi.org/10.32604/cmes.2026.078862


Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.