Open Access

ARTICLE


Guided Dropout: Improving Deep Networks Without Increased Computation

Yifeng Liu1, Yangyang Li1,*, Zhongxiong Xu1, Xiaohan Liu1, Haiyong Xie2, Huacheng Zeng3

1 National Engineering Research Center for Risk Perception and Prevention (NERC-RPP), CAEIT, Beijing, 100041, China
2 University of Science and Technology of China, Hefei, Anhui, 230026, China
3 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, 48824, USA

* Corresponding Author: Yangyang Li.

Intelligent Automation & Soft Computing 2023, 36(3), 2519-2528. https://doi.org/10.32604/iasc.2023.033286

Abstract

Deep convolutional neural networks keep growing deeper, and this added model complexity makes them prone to overfitting during training. Dropout, one of the crucial regularization techniques, prevents units from co-adapting too much by randomly dropping neurons during training. It effectively improves the performance of deep networks, but it ignores the differences between neurons. To address this issue, this paper presents a new dropout method called guided dropout, which selects the neurons to switch off according to the differences between their convolution kernels and preserves the informative neurons. It uses an unsupervised clustering algorithm to group similar neurons in each hidden layer and then applies dropout with a fixed probability within each cluster. This preserves hidden-layer neurons that play different roles while maintaining the model’s sparsity and generalization, which strengthens the contribution of the hidden-layer neurons to feature learning. We evaluated our approach against two standard dropout networks on three well-established public object detection datasets. Experimental results show that the proposed method improves false positives, the precision-recall curve, and average precision without increasing the amount of computation. The performance gain of guided dropout can be attributed to improved learning in the shallow layers of the networks. The concept of guided dropout should also benefit other vision tasks.
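The abstract gives only an outline of the mechanism, so the sketch below is a minimal PyTorch reading of it, not the authors' implementation: it clusters a layer's convolution kernels with unsupervised k-means and then drops channels at a fixed rate within each cluster, so that each forward pass keeps some channels from every cluster (every learned "role"). The class name GuidedDropout2d, the choice of k-means, and the parameters n_clusters and p are illustrative assumptions.

```python
# Minimal sketch of per-cluster channel dropout, assuming k-means over
# flattened kernel weights as the "unsupervised clustering algorithm".
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class GuidedDropout2d(nn.Module):
    """Channel dropout applied independently within clusters of similar
    convolution kernels, so each forward pass preserves some channels
    from every cluster."""

    def __init__(self, conv: nn.Conv2d, n_clusters: int = 4, p: float = 0.5):
        super().__init__()
        self.conv = conv
        self.n_clusters = n_clusters
        self.p = p
        self.labels = None  # cluster id per output channel

    @torch.no_grad()
    def recluster(self):
        # Unsupervised k-means over the flattened kernel weights; kernels
        # that learned similar filters land in the same cluster.
        w = self.conv.weight.detach().flatten(1).cpu().numpy()
        self.labels = torch.as_tensor(
            KMeans(n_clusters=self.n_clusters, n_init=10).fit(w).labels_
        )

    def forward(self, x):
        y = self.conv(x)
        if not self.training:
            return y  # no dropout at inference time
        if self.labels is None:
            self.recluster()
        keep = torch.ones(y.shape[1], device=y.device)
        for c in range(self.n_clusters):
            idx = (self.labels == c).nonzero(as_tuple=True)[0]
            drop = torch.rand(len(idx)) < self.p  # drop rate p per cluster
            keep[idx[drop].to(y.device)] = 0.0
        # Inverted-dropout scaling keeps the expected activation unchanged.
        return y * keep.view(1, -1, 1, 1) / (1.0 - self.p)
```

In use, one might wrap a layer as `GuidedDropout2d(nn.Conv2d(64, 128, 3, padding=1), n_clusters=8, p=0.5)` and call `recluster()` at the start of each epoch so the clusters track the evolving kernels; this adds no cost to the forward pass itself, consistent with the paper's no-extra-computation claim.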

Keywords


Cite This Article

APA Style
Liu, Y., Li, Y., Xu, Z., Liu, X., Xie, H., et al. (2023). Guided dropout: improving deep networks without increased computation. Intelligent Automation & Soft Computing, 36(3), 2519-2528. https://doi.org/10.32604/iasc.2023.033286
Vancouver Style
Liu Y, Li Y, Xu Z, Liu X, Xie H, Zeng H. Guided dropout: improving deep networks without increased computation. Intell Automat Soft Comput. 2023;36(3):2519-2528. https://doi.org/10.32604/iasc.2023.033286
IEEE Style
Y. Liu, Y. Li, Z. Xu, X. Liu, H. Xie, and H. Zeng, “Guided Dropout: Improving Deep Networks Without Increased Computation,” Intell. Automat. Soft Comput., vol. 36, no. 3, pp. 2519-2528, 2023. https://doi.org/10.32604/iasc.2023.033286



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.