000 04867ntm a22003857a 4500
003 AT-ISTA
005 20250915092030.0
008 250915s2024 au ||||| m||| 00| 0 eng d
040 _cISTA
100 _aShevchenko, Aleksandr
_91084218
245 _aHigh-dimensional limits in artificial neural networks
260 _bInstitute of Science and Technology Austria
_c2024
500 _aThesis
505 _aAbstract
505 _aAcknowledgements
505 _aAbout the Author
505 _aList of Collaborators and Publications
505 _aTable of Contents
505 _aList of Figures
505 _a1 Introduction
505 _a2 Background
505 _a3 Landscape Connectivity and Dropout Stability of SGD Solutions
505 _a4 Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks
505 _a5 Fundamental Limits of Two-layer Autoencoders
505 _a6 Autoencoders: Beyond Gaussian Data
505 _a7 Discussion and Concluding Remarks
505 _aBibliography
505 _aA Appendix for Chapter 3
505 _aB Appendix for Chapter 4
505 _aC Appendix for Chapter 5
505 _aD Appendix for Chapter 6
520 _aIn the modern age of machine learning, artificial neural networks have become an integral part of many practical systems. One of the key ingredients of the success of the deep learning approach is recent computational advances, which have allowed the training of models with billions of parameters on large-scale data. Such over-parameterized and data-hungry regimes pose a challenge for the theoretical analysis of modern models, since “classical” statistical wisdom is no longer applicable. In this light, it is paramount to extend or develop new machinery for analyzing neural networks in these challenging asymptotic regimes, which is the focus of this thesis. Large neural network systems are usually optimized via “local” search algorithms, such as stochastic gradient descent (SGD). However, given the high-dimensional nature of the parameter space, it is a priori not clear why such a crude “local” approach works so remarkably well in practice. We take a step towards demystifying this phenomenon by showing that the landscape explored by the SGD training dynamics exhibits a few properties that are beneficial for optimization. First, we show that along the SGD trajectory an over-parameterized network is dropout stable. The emergence of dropout stability allows us to conclude that the minima found by SGD are connected via a continuous path of small loss. This in turn means that, thanks to mode connectivity, the high-dimensional landscape of the neural network optimization problem is provably not so unfavourable to gradient-based training. Next, we show that SGD for an over-parameterized network tends to find solutions that are functionally more “simple”. Such SGD minima are more robust, since a less complicated solution is less likely to overfit the data. More formally, for a prototypical example of a wide two-layer ReLU network on a one-dimensional regression task, we show that the SGD algorithm is implicitly selective in its choice of an interpolating solution: at convergence the neural network implements a piecewise linear function whose number of linear regions depends only on the amount of training data. This is in contrast to the “smooth” behaviour one would expect given such severe over-parameterization of the model. Diverging from the generic supervised setting of classification and regression problems, we then analyze an auto-encoder model commonly used for representation learning and data compression. Despite the wide applicability of the auto-encoding paradigm, its theoretical understanding is limited even in the simplistic shallow case, and related work is restricted to extreme asymptotic regimes in which the auto-encoder is either severely over-parameterized or under-parameterized. In contrast, we provide a tight characterization of the 1-bit compression of Gaussian signals in the challenging proportional regime, i.e., when the input dimension and the size of the compressed representation obey the same asymptotics. We also show that gradient-based methods are able to find a globally optimal solution, and that the predictions made for Gaussian data extrapolate beyond this setting to the compression of natural images. Next, we relax the Gaussian assumption and study more structured input sources. We show that the shallow model is sometimes agnostic to the structure of the data, which results in Gaussian-like behaviour. We prove that making the decoding component slightly less shallow is already enough to escape the “curse” of Gaussian performance.
856 _uhttps://doi.org/10.15479/at:ista:17465
942 _2ddc
999 _c768054
_d768054