Open Access

ARTICLE

Adversarial Perturbation for Sensor Data Anonymization: Balancing Privacy and Utility

Tatsuhito Hasegawa#,*, Kyosuke Fujino#

Graduate School of Engineering, University of Fukui, Fukui, 910-8507, Japan

* Corresponding Author: Tatsuhito Hasegawa
# These authors contributed equally to this work

(This article belongs to the Special Issue: Advances in IoT Security: Challenges, Solutions, and Future Applications)

Computers, Materials & Continua 2025, 84(2), 2429-2454. https://doi.org/10.32604/cmc.2025.066270

Abstract

Recent advances in wearable devices have enabled large-scale collection of sensor data across healthcare, sports, and other domains, but this has also raised critical privacy concerns, especially under tightening regulations such as the General Data Protection Regulation (GDPR), which explicitly restrict the processing of data that can re-identify individuals. Although existing anonymization approaches such as the Anonymizing AutoEncoder (AAE) can reduce the risk of re-identification, they often introduce substantial waveform distortion and fail to preserve information beyond a single classification task (e.g., human activity recognition). This study proposes a novel sensor-data anonymization method based on Adversarial Perturbations (AP) to address these limitations. By generating minimal yet targeted noise, the proposed method significantly degrades the accuracy of identity classification while retaining the features essential for multiple tasks such as activity, gender, or device-position recognition. Moreover, to enhance robustness against frequency-domain analysis, additional models trained on transformed representations (e.g., the short-time Fourier transform (STFT)) are incorporated into the perturbation process. A multi-task formulation is introduced that selectively suppresses person-identifying features while reinforcing those relevant to other desired tasks, without retraining large autoencoder-based architectures. The proposed framework is, to our knowledge, the first AP-based anonymization technique that (i) defends simultaneously against time- and frequency-domain attacks and (ii) allows per-task trade-off control in a single forward-backward pass, enabling real-time, on-device deployment on commodity hardware. On three public datasets, the proposed method reduces person-identification accuracy from 60–90% to near-chance levels while preserving the original activity-recognition F1 score in both the time and frequency domains. Compared with the baseline AAE, the proposed method improves downstream-task F1 and lowers waveform mean squared error, demonstrating a better privacy-utility trade-off without additional model retraining. These findings underscore the effectiveness and flexibility of AP in privacy-preserving sensor-data processing, offering a practical solution that safeguards user identity while retaining rich, application-critical information.
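The abstract outlines the core mechanism: a small, gradient-derived perturbation that raises the loss of a person-identification classifier while keeping the loss of a utility classifier (e.g., activity recognition) low, computed in a single forward-backward pass. The snippet below is a minimal, self-contained sketch of that idea in PyTorch; it is not the authors' implementation, and the model classes, tensor shapes, and hyper-parameters (epsilon, alpha) are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's code) of multi-task adversarial
# perturbation for sensor-data anonymization: one forward/backward pass yields
# a bounded perturbation that degrades person identification while preserving
# activity recognition.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy 1-D CNN classifier over (batch, channels, time) sensor windows."""
    def __init__(self, n_channels, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, n_classes))
    def forward(self, x):
        return self.net(x)

def anonymize(x, identity_net, activity_net, id_label, act_label,
              epsilon=0.05, alpha=1.0):
    """FGSM-style step: increase the identity loss, keep the activity loss low."""
    x = x.clone().detach().requires_grad_(True)
    ce = nn.CrossEntropyLoss()
    # Joint objective: push the identity classifier away from the true person
    # label while pulling the activity classifier toward the true activity label.
    loss = ce(identity_net(x), id_label) - alpha * ce(activity_net(x), act_label)
    loss.backward()
    # Move along the sign of the gradient; the perturbation is bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

if __name__ == "__main__":
    x = torch.randn(8, 3, 128)                  # 8 windows of 3-axis accelerometer data
    id_net, act_net = TinyCNN(3, 10), TinyCNN(3, 6)
    id_y = torch.randint(0, 10, (8,))
    act_y = torch.randint(0, 6, (8,))
    x_anon = anonymize(x, id_net, act_net, id_y, act_y)
    print((x_anon - x).abs().max())             # perturbation magnitude <= epsilon
```

Per the abstract, robustness to frequency-domain attacks is obtained by also perturbing against models trained on transformed (e.g., STFT) representations; in a sketch like the one above, that would amount to adding those models' losses to the same joint objective.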

Keywords

Human activity recognition; privacy-aware IoT; adversarial perturbation

Cite This Article

APA Style
Hasegawa, T., & Fujino, K. (2025). Adversarial Perturbation for Sensor Data Anonymization: Balancing Privacy and Utility. Computers, Materials & Continua, 84(2), 2429–2454. https://doi.org/10.32604/cmc.2025.066270
Vancouver Style
Hasegawa T, Fujino K. Adversarial Perturbation for Sensor Data Anonymization: Balancing Privacy and Utility. Comput Mater Contin. 2025;84(2):2429–2454. https://doi.org/10.32604/cmc.2025.066270
IEEE Style
T. Hasegawa and K. Fujino, “Adversarial Perturbation for Sensor Data Anonymization: Balancing Privacy and Utility,” Comput. Mater. Contin., vol. 84, no. 2, pp. 2429–2454, 2025. https://doi.org/10.32604/cmc.2025.066270



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.