Abstract: The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. A very common assumption in the field is that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of work and build an exact Edge of Chaos analysis. We also propose a unified view of pre-activation propagation, encompassing the framework of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?
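For context, the Edge of Chaos analysis mentioned above tracks the variance and the correlation of the pre-activations from layer to layer under the Gaussian hypothesis. As a hedged sketch in our own notation (not necessarily that of the paper), with $\phi$ the activation function, $\sigma_w^2$ and $\sigma_b^2$ the weight and bias variance scales, and $q^l$, $c^l$ the pre-activation variance and correlation at layer $l$, the standard recursions of this line of work read

$$
q^{l} = \sigma_w^2 \int \phi\!\left(\sqrt{q^{l-1}}\, z\right)^{2} \mathcal{D}z + \sigma_b^2,
\qquad
c^{l} = \frac{1}{q^{l}} \left[ \sigma_w^2 \int \phi(u_1)\, \phi(u_2)\, \mathcal{D}z_1\, \mathcal{D}z_2 + \sigma_b^2 \right],
$$

with $u_1 = \sqrt{q^{l-1}}\, z_1$, $u_2 = \sqrt{q^{l-1}} \left( c^{l-1} z_1 + \sqrt{1-(c^{l-1})^{2}}\, z_2 \right)$, and $\mathcal{D}z$ the standard Gaussian measure. The "Edge of Chaos" is the choice of $(\sigma_w^2, \sigma_b^2)$ at the boundary between the ordered and chaotic regimes of these maps; the Gaussian integrals above are exactly where the Gaussian hypothesis enters.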
In this paper, we discuss the hypothesis of Gaussian pre-activations at the initialization of a neural network, which we call the "Gaussian hypothesis". More specifically, we have obtained the following results:
* we perform an empirical study of the propagation of the distribution of the pre-activations in a neural network;
* with a ReLU activation function, the pre-activations are not Gaussian, which contradicts the Gaussian hypothesis (a minimal numerical check of this point is sketched after this list);
* the "Edge of Chaos" framework, which indicates how the pre-activations propagate at initialization, is shown to output inconsistent results when using neural networks with a small number of neurons per layer (since it makes use of the Gaussian hypothesis, which is not always valid);
* in order to solve the preceding problems, we construct a family of activation functions and initialization distributions such that the Gaussian hypothesis holds;
* in that case, the Edge of Chaos framework is exact and no longer yields inconsistent results;
* in that case, information (back-)propagates better through deep and narrow neural networks (i.e., with a large number of layers and a small number of neurons per layer) than when using ReLU or tanh activation functions with Kaiming or Xavier initialization.
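To illustrate the empirical points above, here is a minimal sketch (our own illustration, not the speaker's code; the width, depth, number of draws, and statistical tests are arbitrary choices). For one fixed input, it samples many independent Kaiming-initialized narrow ReLU networks and looks at the distribution of a deep pre-activation across the random initializations; at finite width, this distribution typically turns out heavy-tailed rather than Gaussian:

```python
# Minimal sketch (not the authors' code): for a fixed input, sample many
# independent Kaiming-initialized narrow ReLU networks and inspect the
# distribution of one deep pre-activation across the random initializations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
width, depth, n_draws = 10, 30, 5_000     # narrow and deep, as discussed above

x0 = rng.standard_normal(width)           # one fixed input, reused for every draw
samples = np.empty(n_draws)
for d in range(n_draws):
    h = x0
    for _ in range(depth):
        # Kaiming (He) initialization for ReLU: W_ij ~ N(0, 2 / fan_in)
        W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        pre_act = W @ h                   # pre-activations of the current layer
        h = np.maximum(pre_act, 0.0)      # ReLU activations fed to the next layer
    samples[d] = pre_act[0]               # pre-activation of one neuron in the last layer

# A Gaussian has excess kurtosis 0; heavy tails and a rejected normality test
# signal a departure from the Gaussian hypothesis at finite width.
print("excess kurtosis:", stats.kurtosis(samples))
print("normality test p-value:", stats.normaltest(samples).pvalue)
```

The same check can be run for the tanh/Xavier setting by swapping the nonlinearity and replacing the factor sqrt(2/fan_in) with sqrt(1/fan_in); at a few thousand neurons per layer, the statistics move back toward the Gaussian regime predicted by the infinite-width argument.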
Speaker: Pierre Wolinski is currently a post-doctoral researcher in the Statify team at Inria Grenoble (France), under the supervision of Julyan Arbel. Before that, he spent a year as a post-doc at the University of Oxford with Judith Rousseau. He did his PhD in the Tau team (Inria Saclay, France) with Guillaume Charpiat (computer vision) and Yann Ollivier (theory of ML), working on neural network pruning and Bayesian neural networks.
He is now studying information propagation through neural networks, both at initialization and during training.
Affiliation: Inria team Statify
Place of Seminar: Kumpula Exactum D122 (in person) & Zoom (Meeting ID: 640 5738 7231; Passcode: 825217)