Open Access

ARTICLE

An Explanatory Strategy for Reducing the Risk of Privacy Leaks

Mingting Liu1, Xiaozhang Liu1,*, Anli Yan1, Xiulai Li1,2, Gengquan Xie1, Xin Tang3

1 Hainan University, Haikou, 570228, China
2 Hainan Hairui Zhong Chuang Technol Co., Ltd., Haikou, 570228, China
3 School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore

* Corresponding Author: Xiaozhang Liu. Email: email

Journal of Information Hiding and Privacy Protection 2021, 3(4), 181-192. https://doi.org/10.32604/jihpp.2021.027385

Abstract

As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, interpreting the predictions of black-box models becomes key to whether people can trust machine learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help users understand model decisions. However, this additional information inevitably exposes the dataset or the model to the risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The procedure is as follows. First, the user inputs data into the model, which computes the prediction confidence of the user-provided data and returns the prediction result. Meanwhile, the model obtains the prediction confidences of the interpretation dataset. Finally, the interpretation-set sample whose confidence has the smallest Euclidean distance to that of the prediction data is selected as the explanation data. Experimental results show that the Euclidean distance between the confidence of the interpretation data and the confidence of the prediction data selected by this method is very small, indicating that the model's prediction on the interpretation data closely matches its prediction on the user's data. Finally, we demonstrate the accuracy of the explanation data: we measure how well the real labels of the interpretation data match their predicted labels, as well as the method's applicability across network models. The results show that the interpretation method has high accuracy and wide applicability.
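The selection step described in the abstract — returning the interpretation-set sample whose confidence vector is nearest (in Euclidean distance) to the user query's confidence vector — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `select_explanation` and the toy 3-class confidence values are assumptions for demonstration.

```python
import numpy as np

def select_explanation(user_conf, interp_confs):
    """Return the index of the interpretation-set sample whose
    prediction-confidence vector is closest (Euclidean distance)
    to the user's, plus that distance."""
    dists = np.linalg.norm(interp_confs - user_conf, axis=1)
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])

# Hypothetical softmax outputs of a 3-class model.
user = np.array([0.70, 0.20, 0.10])          # confidence for the user's input
interp = np.array([[0.90, 0.05, 0.05],       # confidences for the
                   [0.68, 0.22, 0.10],       # interpretation dataset
                   [0.10, 0.80, 0.10]])

idx, d = select_explanation(user, interp)
print(idx, round(d, 4))  # sample 1 is nearest, distance ~0.0283
```

Only the selected interpretation sample (which comes from a separate explanation set, not the training data) is ever revealed to the user, which is how the strategy limits what the explanation can leak about the model or its training set.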

Keywords


Cite This Article

M. Liu, X. Liu, A. Yan, X. Li, G. Xie et al., "An explanatory strategy for reducing the risk of privacy leaks," Journal of Information Hiding and Privacy Protection, vol. 3, no.4, pp. 181–192, 2021. https://doi.org/10.32604/jihpp.2021.027385



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.