Variational deep learning model optimization with complexity control
O. S. Grebenkova (a), O. Yu. Bakhteev (a,b), V. V. Strijov (a,c)
(a) Moscow Institute of Physics and Technology, 9 Institutskiy Per., Dolgoprudny, Moscow Region 141701, Russian Federation
(b) Antiplagiat Co., 42-1 Bolshoy Blvd., Moscow 121205, Russian Federation
(c) A. A. Dorodnicyn Computing Center, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 40 Vavilov Str., Moscow 119333, Russian Federation
Abstract:
This paper investigates the problem of deep learning model optimization. The authors propose a method to control model complexity. Model complexity is interpreted as the minimum description length: the minimal amount of information required to transmit both the model and the dataset. The proposed method represents the deep learning model in the form of a hypernet, using Bayesian inference; a hypernet is a model that generates the parameters of an optimal model. The authors introduce probabilistic assumptions about the distribution of the parameters of the deep learning model and maximize the evidence lower bound of the Bayesian model evidence. The evidence lower bound is treated as a conditional value that depends on the required model complexity. The method is analyzed in computational experiments on the MNIST dataset.
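A sketch of the objective described in the abstract (the notation below is assumed for illustration, not quoted from the paper): with a variational distribution q(w | lambda) over model parameters w, a prior p(w), and a required complexity coefficient lambda, the complexity-conditional evidence lower bound takes the form

\mathcal{L}(\lambda) = \mathsf{E}_{q(w \mid \lambda)} \log p(\mathbf{y} \mid \mathbf{X}, w) - \lambda \, D_{\mathrm{KL}}\bigl(q(w \mid \lambda) \,\|\, p(w)\bigr),

so lambda trades data fit against the description length of the parameters.

A minimal PyTorch-style sketch of a hypernet conditioned on the complexity coefficient (all class and function names here are hypothetical, and a linear MNIST classifier stands in for the paper's deep model):

import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    """Illustrative sketch (not the authors' code): generates the
    parameters of a small MNIST classifier as a function of the
    required complexity coefficient lam."""
    def __init__(self, n_in=784, n_out=10):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        n_params = n_in * n_out + n_out      # weights + biases of a linear model
        # Small net mapping the scalar lam to the mean of q(w | lam).
        self.body = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, n_params),
        )
        # Log-std of the variational distribution q(w | lam).
        self.log_sigma = nn.Parameter(torch.full((n_params,), -3.0))

    def forward(self, x, lam):
        mu = self.body(lam.view(1, 1)).squeeze(0)
        # Reparameterization trick: sample w ~ q(w | lam).
        w = mu + torch.exp(self.log_sigma) * torch.randn_like(mu)
        W = w[: self.n_in * self.n_out].view(self.n_out, self.n_in)
        b = w[self.n_in * self.n_out:]
        return F.linear(x, W, b), mu

    def kl(self, mu):
        # KL(q || p) for a standard normal prior p(w) = N(0, I).
        sigma2 = torch.exp(2 * self.log_sigma)
        return 0.5 * torch.sum(sigma2 + mu ** 2 - 1.0 - 2 * self.log_sigma)

def step(hypernet, optimizer, x, y):
    # Sample lam so that one hypernet learns the whole complexity range.
    lam = torch.rand(1)                       # required complexity, lam in (0, 1)
    logits, mu = hypernet(x, lam)
    nll = F.cross_entropy(logits, y)          # Monte-Carlo estimate of -E_q log p(y|X,w)
    loss = nll + lam * hypernet.kl(mu)        # negative complexity-conditional ELBO
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Sampling lam at each step lets a single set of hypernet parameters serve every required complexity, which matches the abstract's idea of a model that generates the parameters of an optimal model for a given complexity value.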
Keywords:
model variational optimization, hypernets, deep learning, neural networks, Bayesian inference, model complexity control.
Received: 14.08.2020
Citation:
O. S. Grebenkova, O. Yu. Bakhteev, V. V. Strijov, “Variational deep learning model optimization with complexity control”, Inform. Primen., 15:1 (2021), 42–49
Linking options:
https://www.mathnet.ru/eng/ia710
https://www.mathnet.ru/eng/ia/v15/i1/p42