TY - EJOUR
AU - He, Shiming
AU - Li, Zhuozhou
AU - Tang, Yangning
AU - Liao, Zhuofan
AU - Li, Feng
AU - Lim, Se-Jung
TI - Parameters Compressing in Deep Learning
T2 - Computers, Materials & Continua
PY - 2020
VL - 62
IS - 1
SN - 1546-2226
AB - With the popularity of deep learning tools in image decomposition and natural
language processing, how to store the large number of parameters required by deep
learning algorithms has become an urgent problem. These parameters are numerous,
often reaching into the millions. At present, a feasible direction is to use
sparse representation techniques to compress the parameter matrix, thereby
reducing the number of parameters and the storage pressure. These methods include
matrix decomposition and tensor decomposition. To let vectors take advantage of
the compression performance of matrix decomposition and tensor decomposition, we
use reshaping and unfolding so that vectors can serve as the input and output of
Tensor-Factorized Neural Networks. We analyze how reshaping achieves the best
compression ratio. From the relationship between the shape of a tensor and the
number of parameters, we derive a lower bound on the number of parameters, and we
verify this lower bound on several data sets.
KW - Deep neural network
KW - parameters compressing
KW - matrix decomposition
KW - tensor decomposition
DO - 10.32604/cmc.2020.06130