Search Results (2)
  • Open Access

    ARTICLE

    Defending Federated Learning System from Poisoning Attacks via Efficient Unlearning

    Long Cai, Ke Gu*, Jiaqi Lei

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 239-258, 2025, DOI:10.32604/cmc.2025.061377 - 26 March 2025

    Abstract Large-scale neural-network-based federated learning (FL) has gained public recognition for its effectiveness in distributed training. Nonetheless, the open system architecture inherent to federated learning raises concerns about its vulnerability to attack. Poisoning attacks are a major menace to federated learning because of their stealth and destructive power: by altering the local model during routine training, an attacker can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are still insufficient to completely eliminate the attackers' influence. Therefore, federated…

    (The first sketch after the results list shows how a single poisoned update skews a federated average.)

  • Open Access

    REVIEW

    Ensuring User Privacy and Model Security via Machine Unlearning: A Review

    Yonghao Tang, Zhiping Cai*, Qiang Liu, Tongqing Zhou, Qiang Ni

    CMC-Computers, Materials & Continua, Vol.77, No.2, pp. 2645-2656, 2023, DOI:10.32604/cmc.2023.032307 - 29 November 2023

    Abstract As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology, and other fields. Training a machine learning model requires a large amount of practical data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thereby compromise model security. Data providers expect the model trainer to prove to them that the model keeps their data confidential, and the trainer is required to withdraw data when that trust collapses. Meanwhile, trainers hope to forget the injected data…

    (The second sketch after the results list gives a minimal example of exact data withdrawal for a linear model.)

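The mechanism behind the first abstract is easy to see in code: in federated averaging, the server simply averages the client updates, so a single malicious local update can skew the global model. Below is a minimal sketch in Python with NumPy; the client count, model dimension, and attack scale are illustrative assumptions, not values from the paper.

    import numpy as np

    # One round of federated averaging: the server averages client updates.
    # A single poisoned update shifts the aggregate away from the honest one.
    rng = np.random.default_rng(0)
    n_clients, dim = 10, 5  # assumed, for illustration only

    # Honest clients send small updates near a common true direction.
    true_direction = np.ones(dim)
    updates = [true_direction + 0.1 * rng.standard_normal(dim)
               for _ in range(n_clients)]

    # One attacker replaces its update with a large malicious one
    # (a crude model-poisoning attack; real attacks are stealthier).
    updates[0] = -50.0 * true_direction

    poisoned_avg = np.mean(updates, axis=0)    # what the server computes
    honest_avg = np.mean(updates[1:], axis=0)  # aggregate without the attacker

    print("aggregate with poisoned client:  ", poisoned_avg)
    print("aggregate of honest clients only:", honest_avg)

Averaging over ten clients only dilutes the malicious update by a factor of ten, which is why a sufficiently scaled attack still dominates the aggregate.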
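The second abstract concerns withdrawing (unlearning) specific training data. For models defined by additive sufficient statistics, withdrawal can be exact: subtract the sample's contribution and re-solve. The sketch below shows this for ordinary least squares; it is a generic illustration of the idea, not the method surveyed in the review.

    import numpy as np

    # Exact unlearning for ordinary least squares: the fitted weights depend
    # on the data only through A = X^T X and b = X^T y, so forgetting one
    # sample (x0, y0) means subtracting its contribution and re-solving.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(100)

    A = X.T @ X          # accumulated sufficient statistics
    b = X.T @ y
    w_full = np.linalg.solve(A, b)

    # Withdraw sample 0 without touching the remaining data.
    x0, y0 = X[0], y[0]
    A -= np.outer(x0, x0)
    b -= y0 * x0
    w_unlearned = np.linalg.solve(A, b)

    # Sanity check: matches retraining from scratch on the other 99 samples.
    w_retrain = np.linalg.lstsq(X[1:], y[1:], rcond=None)[0]
    print(np.allclose(w_unlearned, w_retrain))  # True

No such closed form exists for deep networks, which is a major reason the unlearning literature weighs exact retraining against approximate forgetting.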