Informatika i Ee Primeneniya [Informatics and its Applications], 2021, Volume 15, Issue 1, Pages 42–49
DOI: https://doi.org/10.14357/19922264210106
(Mi ia710)

Variational deep learning model optimization with complexity control

O. S. Grebenkova (a), O. Yu. Bakhteev (a,b), V. V. Strijov (a,c)

(a) Moscow Institute of Physics and Technology, 9 Institutskiy Per., Dolgoprudny, Moscow Region 141701, Russian Federation
(b) Antiplagiat Co., 42-1 Bolshoy Blvd., Moscow 121205, Russian Federation
(c) A. A. Dorodnicyn Computing Center, Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 40 Vavilov Str., Moscow 119333, Russian Federation
Abstract: This paper investigates the problem of deep learning model optimization. The authors propose a method to control model complexity, interpreting complexity as the minimum description length: the minimal amount of information required to transmit both the model and the dataset. The method is based on representing the deep learning model by a hypernet, a model that generates the parameters of an optimal model, and a form of this hypernet is derived using Bayesian inference. The authors introduce probabilistic assumptions about the distribution of the parameters of the deep learning model and suggest maximizing the evidence lower bound of the Bayesian model validity, treating this bound as a conditional value that depends on the required model complexity. The method is analyzed in computational experiments on the MNIST dataset.
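As a sketch of the objective described in the abstract (the notation here is assumed, not taken from the paper): writing q(w) for the variational distribution over model parameters w, p(w) for the prior, D for the dataset, and \lambda for the required complexity, a complexity-conditional evidence lower bound of this kind has the form

    \mathcal{L}(\lambda) = \mathsf{E}_{q(w)} \log p(D \mid w) - \lambda \, \mathrm{KL}\bigl( q(w) \,\|\, p(w) \bigr),

where the Kullback–Leibler term plays the role of the description length of the model and \lambda trades it off against the data fit; \lambda = 1 recovers the usual variational lower bound on the model evidence.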
Keywords: model variational optimization, hypernets, deep learning, neural networks, Bayesian inference, model complexity control.
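A minimal PyTorch sketch of the hypernet idea from the abstract follows. Everything in it (the class and function names, the linear MNIST classifier, the layer sizes, and the training scheme with a randomly sampled complexity coefficient) is an illustrative assumption rather than the authors' implementation: the hypernet maps the required complexity lam to the mean and log-variance of the target model's parameters, and the loss is the negative of the bound above.

    # Illustrative sketch only: a hypernet mapping a complexity
    # coefficient lam to a variational distribution over the weights
    # of a small MNIST classifier. All names and sizes are assumed.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_PARAMS = 784 * 10 + 10  # weights + biases of a linear classifier

    class ComplexityHypernet(nn.Module):
        """Generates mean and log-variance of the target-model
        parameters as a function of the required complexity lam."""
        def __init__(self, hidden=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(1, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * N_PARAMS),
            )

        def forward(self, lam):
            mu, logvar = self.body(lam.view(1, 1)).chunk(2, dim=-1)
            return mu.squeeze(0), logvar.squeeze(0)

    def neg_elbo(x, y, mu, logvar, lam):
        # Reparameterization trick: sample the target-model parameters.
        w = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        W, b = w[:7840].view(10, 784), w[7840:]
        logits = F.linear(x.view(-1, 784), W, b)
        nll = F.cross_entropy(logits, y)
        # KL between N(mu, diag(sigma^2)) and the standard normal prior:
        # the "description length" of the model, weighted by lam
        # (scaled per example here for simplicity).
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum()
        return nll + lam * kl / x.size(0)

    hypernet = ComplexityHypernet()
    opt = torch.optim.Adam(hypernet.parameters(), lr=1e-3)

    def train_step(x, y):
        # One hypernet covers a whole range of complexities because
        # the required complexity is sampled anew at each step
        # (an assumed training scheme, in the spirit of the abstract).
        lam = torch.rand(()).item()
        mu, logvar = hypernet(torch.tensor([lam]))
        loss = neg_elbo(x, y, mu, logvar, lam)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

After training, evaluating hypernet(torch.tensor([lam])) at different lam values yields parameter distributions of different complexity from a single optimization run, which is the practical appeal of conditioning the bound on the required complexity.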
Funding agency — Grant number
Ministry of Science and Higher Education of the Russian Federation — 13/1251/2018
Russian Foundation for Basic Research — 19-07-01155, 19-07-00875
This paper contains results of the project “Statistical methods of machine learning,” which is carried out within the framework of the program “Center of Big Data Storage and Analysis” of the National Technology Initiative Competence Center. It is supported by the Ministry of Science and Higher Education of the Russian Federation according to the agreement between M. V. Lomonosov Moscow State University and the Foundation of Project Support of the National Technology Initiative dated December 11, 2018, No. 13/1251/2018. The research was also supported by the Russian Foundation for Basic Research (projects 19-07-01155 and 19-07-00875).
Received: 14.08.2020
Document Type: Article
Language: Russian
Citation: O. S. Grebenkova, O. Yu. Bakhteev, V. V. Strijov, “Variational deep learning model optimization with complexity control”, Inform. Primen., 15:1 (2021), 42–49
Citation in format AMSBIB
\Bibitem{GreBakStr21}
\by O.~S.~Grebenkova, O.~Yu.~Bakhteev, V.~V.~Strijov
\paper Variational deep learning model optimization with~complexity control
\jour Inform. Primen.
\yr 2021
\vol 15
\issue 1
\pages 42--49
\mathnet{http://mi.mathnet.ru/ia710}
\crossref{https://doi.org/10.14357/19922264210106}
Linking options:
  • https://www.mathnet.ru/eng/ia710
  • https://www.mathnet.ru/eng/ia/v15/i1/p42